- [Identifiers](#identifiers)
- [Hardware Selection](#hardware-selection)
- [Operating System](#operating-system)
- [Desktop](#desktop)
- [Mobile](#mobile)
- [Disable Logging](#disable-logging)
- [Secure Deletion](#secure-deletion)
- [MAC Randomization](#mac-randomization)
- [Encrypting Drives and Files](#encrypting-drives-and-files)
- [Offline Password Managers](#offline-password-managers)
- [Obscurity](#obscurity)
- [Code Implementation](#code-implementation)
- [Stylometry](#stylometry)
- [Blending](#blending)
## Operating System
Researching the right operating system (OS) for your specific operation can be a monstrous task. If Operations Security (OPSEC) is of utmost importance, then operating systems that generate excess logs and call home with telemetry and error reporting should be ruled out.
### Desktop
For desktop, this process eliminates Windows, Macintosh, and ChromiumOS/CloudReady from the race. While there are significant attempts at undermining telemetry on these distributions, this requires a substantial amount of effort that is bound to corrupt processes and retain the bloat from disabled software.
>Note: Solutions with Windows variants aren't necessarily the antithesis to anti-forensics. These systems have excessive bloat; however, they can pursue the same aims. Windows provides many areas to hide files throughout the system. Windows systems can also overwhelm inexperienced investigators with caches, shellbags, shortcut files, monolithic registry hives, and a myriad of ways to set persistence mechanisms. This could force investigators to expend more time in the investigation. The reason it is avoided in this book is the proprietary blobs, bloatware, legacy protocols (which will continue to render it vulnerable to exploitation), and excess telemetry. In good faith, one could not claim to provide secure cryptography on a system that was designed for the aims of counterinsurgency.
For dangerous operations, Linux, BSD, and Xen variants, along with a select number of mobile distributions, are the only true solutions. There are hundreds of Linux derivatives to sift through. Regarding Xen and BSD, one should consider QubesOS or HardenedBSD, respectively.
At the time of writing, the only anti-forensic friendly distributions designed to reduce the creation and storage of artifacts are TAILS and Whonix. However, any OS lacking telemetry with properly implemented full-disk encryption (FDE) and physical security is sufficient for the job of anti-forensics. If more persistence is desired while keeping distribution size minimal, consider running hardened variants of the following distributions:
- Arch
- Void
- Gentoo
One more factor to consider for OS selection is the service manager being used. There are plenty of security enthusiasts who justifiably denounce the use of the SystemD service manager (used to spawn processes like networking, scheduled tasks, logging, etc.).[^8] There are a variety of service managers with less bloat and a simpler codebase - OpenRC, runit, etc. The fact that most of these OSs are open-source results in the problem of funding. A side project that has merely piqued a developer's interest often goes long durations (if not permanently) without any effort to maintain or patch it. Some recommended OS alternatives without SystemD at the time of writing include Artix (Arch variant)[^9], Void Linux[^10], and Alpine Linux[^11].
>Note: Ideally, an operating system running a micro-kernel (minimal core) such as seL4 could be in the running. At the time of writing, these alternatives are still too immature to recommend and have little community support.
### Mobile
For mobile devices, options are extraordinarily limited. Phones are built to constantly ping telecommunications infrastructure and receive incoming packets by design; the core purpose is to be reached. Google, Apple, and other players in the telecommunications industry have taken this to an intrusive extent. Stock Android phones home an average of 90 times per hour; Apple accounts for at least 18 times per hour.[^12] Neither operating system behaves in a manner conducive to privacy. It seems that the only remaining options are to disable all sync capabilities on an iPhone, or to flash an open-source operating system to an Android device.
For Android, the best operating system to date is GrapheneOS.[^13] This operating system can only be flashed to Google Pixel variants. This is a security-centric OS that accounts for many hardening mechanisms from software to hardware. GrapheneOS encrypts the entire device using block-level encryption, unlike most Android versions which use file-level encryption. If physical forensics of the handset is an issue, GrapheneOS is the best solution.
Phones designed to run GNU/Linux, such as Pine64's PinePhone[^14] or Purism's Librem 5,[^15] are on the market. These devices are inherently insecure in their early iterations. One could consider them private but not secure; if an exploit can reach the device, then all privacy is lost.
## Disable Logging
Disabling logs at the source is the best solution to ensure excess logs are not being stored. Daemons or processes can automate log collection, which has useful functions for both debugging and security (auditing); however, that retained information is detrimental to anti-forensics. It is strongly advised to periodically shred the log files if not disabling the logging daemons entirely.
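As a rough sketch, on a systemd-based distribution the journal can be confined to RAM and existing logs shredded; service names and paths vary by distribution, so treat this as a starting point rather than a complete configuration:
```
# Keep journal entries in RAM only so they vanish on shutdown (systemd assumed).
sudo mkdir -p /etc/systemd/journald.conf.d
printf '[Journal]\nStorage=volatile\n' | sudo tee /etc/systemd/journald.conf.d/volatile.conf
sudo systemctl restart systemd-journald
# Shred logs already written to disk (paths vary by distribution).
sudo find /var/log -type f -exec shred -uzn 3 {} +
# On non-systemd systems, stop and remove the syslog service instead, e.g. with OpenRC:
# rc-service rsyslog stop && rc-update del rsyslog
```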
Systems can be started in non-persistent sessions with the use of `grub-live` and related packages.
>Note: These packages are primarily available for Debian-based systems
## Physical Destruction
Physical destruction of critical operation data is advised. Institutional authorities such as the National Security Agency (NSA) and Department of Defense (DoD) see no value in the wiping of critical data. If they believe data is at risk or a device under classification is to be removed from a closed area, all media drives must be completely degaussed. The lesson to be learned here is that if authorities do not trust wiping and overwriting methods, be cautious in your operational threat model. If your life depends on the media being sanitized, save yourself the stress and physically destroy it. If your operation would have adverse consequences if you are caught, there is no room for sentiment.
Destroying HDDs:
- Open the drive (with a screwdriver, usually Torx T8)
Destroying SSDs:
- Burn the remains
- Separate and scatter the debris[^34]
>Note: The Department of Defense (DoD) generally cites a drive wiping policy of 7 passes using random data. Each pass is performed on the entire drive. Other acceptable means of data removal include a single random pass (modern drives make it nearly impossible to recover data, even with a single overwrite), microwaving the platter (the platter should be removed from the enclosure before doing this), applying sandpaper aggressively to the platter, heating the drive in an oven (500 degrees Fahrenheit for 15 minutes, 30 if you want to be extra paranoid, or just leave it in the oven until investigators arrive), or taking a powerful magnet (perhaps from a home/car stereo) to degauss the drive. The platter should be removed first in this method to maximize effectiveness.
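For reference, a hedged sketch of the overwriting approaches mentioned above using standard GNU tools; the target device name is a placeholder, and overwriting is a poor fit for SSDs due to wear-leveling:
```
# /dev/sdX is a placeholder -- confirm the target with lsblk first; this is irreversible.
sudo shred -vn 1 /dev/sdX        # single random pass
# sudo shred -vn 7 /dev/sdX      # 7 passes, per the stricter policy cited above
# dd alternative using the kernel CSPRNG:
sudo dd if=/dev/urandom of=/dev/sdX bs=4M status=progress conv=fsync
# Overwriting is unreliable on SSDs (wear-leveling, spare blocks); physical
# destruction remains the safer route.
```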
## Cryptography
PIM is treated as a secret value that controls the number of iterations used by the key derivation function.
>Note: Larger-value PIMs also increase the time complexity of attacks, at the expense of time taken to perform password hashing. Most cryptologists would argue that a PIM should not be treated as a secret parameter (or at least, such secrecy should not be relied on). The user's own password should be the source of security. Password hashing, in general, is a mitigation for users with less-than-secure passwords. As a person who values security against the world's most powerful attackers, one should make a point not to rely on password hashing for security.
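For a sense of scale, the mapping below reflects the PIM-to-iteration formulas VeraCrypt documents at the time of writing; verify against the current documentation before relying on exact figures:
```
# PIM-to-iteration mapping per VeraCrypt's documentation (verify before relying on it).
pim=485   # reproduces the default iteration count for non-system volumes
echo "Non-system volumes:          $((15000 + pim * 1000)) iterations"
echo "System encryption (SHA-256): $((pim * 2048)) iterations"
```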
## Obscurity
Security professionals will often preach that security through obscurity is an inadequate method of security and should never be a way of addressing your current threat model. The original basis is the distinction between "security through obscurity" and "security by design," often cited as Kerckhoffs's principle, which concludes that a secure cryptosystem should remain secure even if everything about the system, except the key, is public knowledge. Kerckhoffs's principle is sometimes cited in terms of Shannon's maxim: "One ought to design systems under the assumption that the enemy will immediately gain full familiarity with them," or more simply, "The enemy knows the system." With the maxim in mind, "security through obscurity" is specifically a cryptographic principle which has been extended to include any system designed with security. It is not discouraged to use security through obscurity. However, it is discouraged to rely on it instead of security by design. Obscurity can be used as an additional layer, but security should be guaranteed by the design, with obscurity used only as padding against unforeseen vulnerabilities.
A threat model with the application of anti-forensics should not adhere strictly to one side of the obscurity-versus-design distinction. Cryptographic software can perform means of obscurity. For instance, VeraCrypt produces cryptographically secured volumes that can contain hidden volumes for plausible deniability. These hidden volumes can hinder the effectiveness of an amateur (and perhaps a well-versed) investigator. We are not claiming the process to be systematically flawless; however, security has never been faultless. If you have applied some of the cryptographic advice heeded in this book, like full-disk encryption (FDE), and the adversary has managed to gain unbridled, decrypted access to your computer regardless, it becomes self-evident that obscurity is a friend when the design has been bypassed or has simply failed.
Perhaps mechanisms for clandestine messaging are set in place; standing up your own instances or using decentralized services can reduce your attack surface. It is difficult to attack infrastructure that has given no indication of its existence. You add more architecture into the mix for this chatter; however, the attack surface from using centralized servers is removed. Edward Snowden also recommended using decentralized servers over TOR with strong cryptography.
### Code Implementation
Code is a great complement to cryptographic ciphers. It has an incredibly easy implementation, and its application can be as simple or complex as desired. Using the principle of randomness, you and your affiliates could generate a word list to send out messages in a similar way that cryptocurrency wallets generate word phrase seeds. Anyone in the conversation would be given the word list and the correlated meanings (i.e. snow = money, owl = printer). Think of this method as speaking cryptically without a real cryptographic implementation. For conversations over-the-air, phrases and words can be reused; however, reuse of codes will give away more and more of the true message (under the assumption that your messages are decrypted by unauthorized parties). Once a certain number of messages have been sent using the code, it is advised to have each of your affiliates burn the page correlating the words and code. Frequency analysis is a cryptographic code-breaking technique for deciphering messages that could make short work of finding the hidden meanings. The technique is exactly how it sounds - preying upon reused messages to determine the meaning of words and phrases.
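A quick illustrative sketch of generating such a codebook from a shared wordlist; the dictionary path and meanings are placeholders, and any mutually agreed list works just as well:
```
# Build a throwaway codebook: random code words paired with their hidden meanings.
meanings=("money" "printer" "meeting point" "abort")
shuf -n "${#meanings[@]}" /usr/share/dict/words | tr -d "'" > codewords.txt
paste -d'=' codewords.txt <(printf '%s\n' "${meanings[@]}") > codebook.txt
cat codebook.txt   # e.g. snow=money -- print once, distribute, then burn the page
```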
Depending on your threat model, not all operations can be conducted from a coffee shop.
While some of these proposed methods may be unconventional, these are unconventional times. Mechanisms can be put in place to ensure that your systems are sent shutdown signals that will lock them behind disk encryption. Shutdown signals are the most common; however, we are not limited to the commands we issue. The use of radio transmitters to issue shutdowns has a level of intricacy that surpasses the skills of the novice user.
### Dead Man's Switch
A dead man's switch is a mechanism that automatically triggers a specific action (such as shutting down a system or wiping data) if a certain condition is not met (such as the user not interacting with the system within a certain period of time). In the context of protecting journalists, a dead man's switch can be used to ensure that sensitive information is not compromised if a journalist's device is seized or if they are under duress.
For example, a journalist could configure a dead man's switch to wipe the memory of their device if it has not been used for a certain period of time, or if a specific button is not pressed at regular intervals. This would ensure that any sensitive information that is stored on the device is not accessible to unauthorized parties.
There are various ways to implement a dead man's switch, such as using USB devices, system events, or panic buttons. A physical wired dead man's switch reduces attack surface and intricacy; however, remote switches can also be used to propagate a panic signal to all nodes on a network. This can be useful in situations where multiple journalists are working together and need to quickly destroy sensitive information if their operation is compromised.
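As a sketch of that network-wide propagation, the panic signal can be as simple as an SSH loop that forces every node down; the hostnames and key path here are hypothetical and assume key-based root access:
```
# Fan out a panic signal to every node (hypothetical hosts and key path).
for host in node1.lan node2.lan node3.lan; do
    ssh -i /root/.ssh/panic_key -o ConnectTimeout=3 "root@$host" 'poweroff -f' &
done
wait
```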
Implementing a panic signal to invoke a dead man's switch can involve several steps, depending on the specific requirements and the systems involved. Here is a general overview of the process, with some references that provide more detailed information:
1. Define the panic signal: The first step is to define the panic signal that will trigger the dead man's switch. This can be a button, a keyboard shortcut, a voice command, or any other type of signal that can be captured by the system.
2. Capture the panic signal: The next step is to capture the panic signal and convert it into a system event that can be handled by the dead man's switch. This can be done using various methods, such as keyboard hooks, USB device monitoring, or voice recognition.
3. Create a script or program to handle the panic signal: Once the panic signal is captured, you need to create a script or program that can handle the panic signal and invoke the dead man's switch. This script or program should be able to run on the target system and be able to interact with the system's resources.
4. Configure the dead man's switch: After the panic signal is captured and handled, you need to configure the dead man's switch to respond to the panic signal. This can involve defining the actions that should be taken when the panic signal is received, such as shutting down the system, wiping memory, encrypting data, or sending an alert.
5. Test the dead man's switch: Before deploying the dead man's switch, you should test it to ensure that it works as expected and that it does not cause any unintended consequences. You can test the dead man's switch by simulating a panic signal and observing the system's response.
After the dead man's switch, aka killswitch, is configured, we can move on to the commands to issue. If we wanted to securely wipe the random access memory before shutting down, we could issue the `sdmem -v` command to verbosely clean the RAM following activation. Any shell command that is compatible with the particular GNU/Linux system can be run based on a specified system behavior. See resources at the end of this section [^41], [^42], and [^43] for USB dead man's switches. In a nutshell, these tools are configured to watch system USB events; when a change occurs, the switch commands are invoked. Panic buttons are another form of killswitch that remain active on your display and are ready to invoke at any moment (Centry.py[^44] is a good example of a panic button).
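The following is a minimal sketch, not a replacement for the referenced tools, of a USB-watching killswitch that wipes RAM and powers off on any USB event; it assumes root privileges, `udevadm`, and the secure-delete package:
```
#!/bin/bash
# Minimal USB dead man's switch: any USB add/remove event wipes free RAM and
# forces an immediate shutdown. Run as root.
udevadm monitor --kernel --subsystem-match=usb | while read -r event; do
    case "$event" in
        *add*|*remove*)
            sdmem -v        # secure-delete: overwrite unused RAM
            poweroff -f     # immediate forced shutdown
            ;;
    esac
done
```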
## Canary in the Coalmine
The term "canary" originates from the practice of coal miners in the 19th century who would take canaries into the mines with them. Canaries are particularly sensitive to toxic gases, such as carbon monoxide, that might be present in the mines. If the canary stopped singing or died, the miners would know that the air quality was dangerous and would evacuate the mine.
In a similar manner, the concept of a "canary", otherwise known as a "canary token", in modern computing and information security refers to a warning mechanism that can detect unauthorized access or tampering of systems, data, or information. The idea is that a canary will give a warning sign if something is wrong, just as the canary in the mine would give a warning sign of toxic gas.
Canaries have been used in a variety of contexts in information security:
1. Legal Canaries: Legal canaries are statements made by a company or organization that they have not received any legal orders, such as a subpoena, to disclose information about their users or activities.
2. Service Canaries: Service canaries are statements made by a company or organization indicating the status of their services or systems. They can be used to detect unauthorized access or tampering, as well as to provide real-time information about the availability of services.
3. Technical Canaries: Technical canaries are systems or tools used to detect unauthorized access or tampering of a network, computer system, or data. Examples include intrusion detection systems, honeypots, and honey tokens.
4. Cryptographic Canaries: Cryptographic canaries are digital signatures that are used to verify the authenticity and integrity of data or information. They can be used to detect unauthorized modification of information, such as in the case of a [poisoned document](#document-poisoning).
5. Media Canaries: Media canaries are statements made by journalists or media organizations indicating the status of their media operations. They can be used to detect censorship, tampering, or other attempts to control the flow of information.
### Canary Statement
One way a journalist could use a canary is by publishing a "canary statement" on their website or social media accounts. This statement would contain information that would be unlikely to change, such as the journalist's phone number or a specific phrase that they use frequently. If the journalist is later arrested or otherwise prevented from publishing, they can have a trusted contact check to see if the canary statement is still present. If it is not, it would indicate that the journalist's website or social media accounts have been compromised, and that any information published on them should not be trusted.
There can be canaries that are cryptographically signed simply stating that no legal subpoenas have been issued. More advanced uses, such as Kicksecure's canary[^kicksecure-canary] or the QubesOS canary template[^qubes-canary], can include raw query output displaying the current block of a public ledger, say Bitcoin's, along with curl requests showing recently posted articles from various news organizations. These canaries are often cryptographically signed to ensure that they have not been tampered with. See the following example:
```
-----BEGIN PGP SIGNED MESSAGE-----
ZKgeW/TZy6xx/KQkKYLBCdu6oCzMsBH857d3P5lO+T1MJuqRe8RFDRvqAg2ZE4VT
-----END PGP SIGNATURE-----
```
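A hedged sketch of assembling and clearsigning such a canary follows; the block-height endpoint, news feed, and key ID are illustrative placeholders:
```
# Assemble a canary statement and clearsign it with GPG.
{
  echo "No warrants, subpoenas, or gag orders have been received as of $(date -u +%Y-%m-%d)."
  echo "Bitcoin block height: $(curl -s https://blockstream.info/api/blocks/tip/height)"
  echo "Recent headlines:"
  curl -s https://feeds.bbci.co.uk/news/rss.xml | grep -o '<title>[^<]*' | sed 's/<title>//' | head -n 4
} > canary.txt
gpg --clearsign --local-user "$KEY_ID" canary.txt   # publish the resulting canary.txt.asc
```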
### Cryptographic Canary with an IDS
There are many setups one could configure to use with an Intrusion Detection System (IDS). I'll provide one example with the use of an open-source tool known as Tripwire.
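At a high level, the flow with open-source Tripwire looks roughly like the following; package names, policy paths, and key prompts vary by distribution, so treat this as a sketch:
```
# Typical open-source Tripwire flow (Debian-based package name assumed).
sudo apt install tripwire                              # prompts to create site/local keys
sudo twadmin --create-polfile /etc/tripwire/twpol.txt  # compile and sign the policy
sudo tripwire --init                                   # baseline the filesystem database
sudo tripwire --check                                  # later: report anything that changed
```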
## EMF Shielding
Electro-magnetic frequency (EMF) shielding, often implemented via a Faraday cage, works by enclosing an object or area in a conductive material, such as metal mesh or conductive paint, in order to block the passage of electromagnetic fields. The principle behind this is based on the fact that electromagnetic fields create an electric current in a conductor, which in turn creates a magnetic field that opposes the original field, effectively canceling it out.
If you're on a tight budget, purchasing the material from reputable vendors and making a DIY project out of it may be the best option. DIY Faraday cages can be made using a variety of materials, such as metal mesh, conductive paint, or specialized fabrics, but it is important to ensure that the cage is properly constructed and sealed in order to effectively block electromagnetic fields. It is also important to note that a Faraday cage must be grounded in order to function properly, as this provides a path for the electromagnetic energy to travel and be dissipated. In layman's terms, electromagnetic energy wants somewhere to go; it looks for a path. When the radio waves contact the structure, it is best to provide them an easy path that leads them away from the shielded device.
Pre-made Faraday bags are also available for purchase, but it's important to ensure that you are buying from a reputable vendor, as cheaper options may not provide the same level of shielding. Vendors often recommend surrounding an enclosure multiple times with repeated testing to ensure that the device is not able to receive various signals.
If the operation is mobile (I suspect it is if you cannot remove [radio transmitters](#radio-transmitters)), best practice is to store each item in its own Faraday enclosure and then store them inside a larger shielded enclosure. When you add or transfer items, the devices don't leak signal when the outer enclosure is opened. Think two is one, and one is none.
## Noise
Generating excess noise through logging or traffic can be an excellent method to throw investigators for a whirl. Anyone who has worked with security logging mechanisms for system auditing can attest that noise is the enemy of understanding. Traffic en masse can be hard to piece together, especially if it's not all being generated by you. For the natural sadists who want to more or less troll, consider hosting services such as a TOR node. Instead of trying to find pertinent clues in a small pond, investigators are trying to search a great lake, or perhaps the rivers of Nantahala National Forest. On top of the sheer size, the clues they find may even lead them down false bends. So long as the data is not revealing information relevant to your operation(s), this will stand to make the water a little more murky.
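As a sketch, a non-exit TOR relay can be stood up with a few torrc lines; the nickname is a placeholder, and this assumes the tor package is installed and managed by systemd:
```
# Append relay settings to torrc and restart the service (non-exit relay only).
sudo tee -a /etc/tor/torrc <<'EOF'
ORPort 9001
ExitRelay 0
Nickname murkywater
ContactInfo none
EOF
sudo systemctl restart tor
```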
## Optimization
Ultimately, you may find that many of these precautions are far out of your scope or threat model. You may find them to be immensely inconvenient.
If a mobile device is deemed a necessity, leverage GrapheneOS on a Google Pixel. Encrypt all communications through trusted services or peer-to-peer (P2P) applications like Briar.[^67] Route all device traffic through TOR with the use of Orbot. Keep the cameras blacked out with electrical or gorilla tape. The concept of treating all signals as hostile should be emphasized here as the hardware wireless chipset cannot be de-soldered. Sensors and microphones can successfully be disabled, but the trend with smaller devices is that they run as a System on a Chip (SoC). In short, multiple functions necessary for the system to work are tied together in a single chip. Even if you managed not to fry the device from the de-soldering process, you would have gutted the core mechanisms of the system, resulting in the newfound possession of a paperweight.
### Market Vendor
Let's assume the vendor is selling some sort of vice found on the DEA's list of Schedule I narcotics. Fortunately, in this use case, unlike that of the anonymous activist (or the journalist in some cases), OPSEC is welcomed with open arms. In fact, vendors are even rated for their stealth (in both shipping and processing) as one of the highest criteria in consideration, along with the markets being TOR friendly, leveraging PGP, and ensuring full functionality without JavaScript. Given the ongoing nature of these operations, and that they are tailored towards privacy and security, a more persistent system will likely be the best fit.
The same recommendation for the journalist with a persistent setup using VMs for isolated processes on a hardened hypervisor, such as QubesOS or PlagueOS, is ideal. A completely amnesiac system is less necessary when you are not forced to interact with hostile sites that can arbitrarily run code via the use of JavaScript. While I would give a nod to those that take such precaution and exist solely in volatile memory, it is likely unnecessary and more of a hassle than the degraded performance is worth.
## Conclusion
As stated earlier, relevancy in the tech industry is difficult to maintain in perpetuity. The proposed concepts, applied with adequate discipline and mapping, stand to render investigations ineffective at peering into operations. Most mistakes take place in the beginning and come back later to haunt an operation. The success stories are never highlighted. For instance, there are plenty of vendors across marketplaces that have gone under the radar for years. OPSEC properly exercised would not leave a trail for the intelligence community; thus obscure and cryptographic implementations like steganography or FDE would not have to be relied on. I hope to learn that some of this material aids dissidents and journalists in combating regimes rooted in authoritarianism, along with privacy-minded individuals who have the desire to be left alone. Freedom and privacy have never been permitted by the state, nor are they achieved through legislation, protests, or petitions; they are reclaimed by blatant non-compliance, loopholes, and violence. Every man possesses the right of revolution, and every revolution is rooted in treason, non-conformity, and ultimately the escape from subservience. In a world where they proclaim that you should have nothing to hide, respond with "I have nothing to show."
Donations to support projects under https://git.arrr.cloud/WhichDoc are welcome.
[^19]: OSI Model - https://en.wikipedia.org/wiki/OSI_model
[^20]: ProtonVPN threat model - https://proton.me/blog/threat-model/
[^21]: Whonix VPN leakage - https://www.whonix.org/wiki/Tunnels/Connecting_to_Tor_before_a_VPN
[^22]: TAILS VPN article - https://gitlab.tails.boum.org/tails/blueprints/-/wikis/vpn_support
[^23]: I2P - https://geti2p.net/en/
[^24]: URLScan of anom[.]io - https://urlscan.io/result/f7b4c5ae-3864-4b3f-be0e-ad10e39276bc/#summary
[^25]: Orbot - https://guardianproject.info/apps/org.torproject.android/
[^43]: Silk Guardian - https://github.com/NateBrune/silk-guardian
[^44]: Centry Panic Button - https://github.com/AnonymousPlanet/Centry
[^45]: USBCTL - https://github.com/anthraxx/usbctl
[^kicksecure-canary]: Kicksecure Canary - https://download.whonix.org/developer-meta-files/canary/canary.txt
[^qubes-canary]: Qubes Canary Template - https://github.com/QubesOS/qubes-secpack/blob/master/canaries/canary-template.txt
[^46]: Elcomsoft Forensics - https://blog.elcomsoft.com/2020/03/breaking-veracrypt-containers/
[^47]: Jumping Airgaps - https://arxiv.org/pdf/2012.06884.pdf
[^48]: Geofence Requests - https://assets.documentcloud.org/documents/6747427/2.pdf
[^56]: DISA STIGs - https://public.cyber.mil/stigs
[^57]: KSPP - https://kernsec.org/wiki/index.php/Kernel_Self_Protection_Project/Recommended_Settings
[^58]: Whonix Host - https://www.whonix.org/wiki/Whonix-Host
[^59]: PlagueOS - https://0xacab.org/whichdoc/plagueos
[^60]: BubbleWrap Sandbox - https://github.com/containers/bubblewrap
[^61]: SalamanderSecurity's PARSEC repository - https://codeberg.org/SalamanderSecurity/PARSEC
[^62]: Linux Hardening - https://madaidans-insecurities.github.io/guides/linux-hardening.html