Friday, June 7, 2013

a small trick: automatically sign in to SRUN3000 with OpenWrt

I. Background

Tianjin University has just deployed the SRUN3000 authentication system, which blocks internet access until you have signed in.

It is very inconvenient to sign in manually every single time, so I decided to do something about it.

In this case, I have a router running OpenWrt.

(btw: most TP-Link wireless routers can be flashed with OpenWrt firmware if you are willing to try this out)


II. Solution

OpenWrt provides the hotplug2 mechanism, which triggers scripts whenever an interface's state changes.

Save this script to /etc/hotplug.d/iface/30-srun:

#!/bin/sh
[ "$ACTION" = ifup ] || exit 0
[ "$INTERFACE" = wan ] || exit 0

# delay for 2 seconds to let the WAN link settle
sleep 2

# use nc to send the prepared HTTP request
nc g.tju.edu.cn 80 < /tmp/post_login

Then make the script executable (chmod +x).

Next, save this request dump to /tmp/post_login:

POST /cgi-bin/do_login HTTP/1.1
Host: g.tju.edu.cn
Content-Type: application/x-www-form-urlencoded
Connection: close
Content-Length: 77

username=ID&password=PASSWORD&drop=0&type=1&n=100&force=true

Replace ID with your account, and PASSWORD with a substring of the md5 hash of your password: for the password 123456, the md5 hash is e10adc3949ba59abbe56e057f20f883e, and the value to use is 49ba59abbe56e057f (the 17 hex digits starting at offset 8).

And 77 should be replaced with the actual byte length of the request body (the username=...&force=true line) after substituting your own values.
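To avoid working these two values out by hand, a short script can derive them; here is a minimal sketch (Python for convenience; the function names are mine, and the hash offsets simply follow the 123456 example above):

```python
import hashlib

def srun_password(plain):
    # md5-hash the plaintext password and keep the 17 hex digits
    # starting at offset 8, matching the 123456 example above
    return hashlib.md5(plain.encode()).hexdigest()[8:25]

def srun_body(user_id, plain):
    # build the form-urlencoded body that goes into /tmp/post_login
    return "username=%s&password=%s&drop=0&type=1&n=100&force=true" % (
        user_id, srun_password(plain))

body = srun_body("ID", "123456")
print(body)
print(len(body))  # this number goes in the Content-Length header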


--- EOF ---

Thursday, June 6, 2013

a simple & stupid probabilistic correlation analysis method

I. Problem definition

assume you are in charge of a private tracker; recommendations could come in handy, especially when a visitor wants to try out something new.

so I am going to work out a way to achieve this.


II. Theory

Consider resources A and B, and let N(A) denote the number of users who downloaded resource A.
Restricting attention to the N(A ∪ B) users who downloaded A or B, P(A) = N(A) / N(A ∪ B).
Similarly, P(B) = N(B) / N(A ∪ B).

Suppose we want to present a list of recommendations based on resource A; there are two obvious ways to do this:

a) For every other resource B, use P(B | A) as the rank, where P(B | A) = P(A ∩ B) / P(A) = N(A ∩ B) / N(A).

b) For every other resource B, use P(A ∩ B) as the rank, which equals N(A ∩ B) / N(A ∪ B).

The first idea measures what fraction of the users who downloaded A also downloaded B.

The second idea measures what fraction of the users who downloaded A or B downloaded both.

These are NOT THE SAME.

Consider N(A) = 50, N(B) = 40, N(A ∩ B) = 10, so N(A ∪ B) = 80.
N(C) = 300, N(A ∩ C) = 30, so N(A ∪ C) = 320.

Do the math, idea a) will get the following rank:
B: 0.20   C: 0.60

while idea b):
B: 0.13  C: 0.09

The difference is obvious.

In fact, resource C is a hot resource that almost every user has downloaded,
so it gets a high rank under idea a). Under idea b), however, the rank also accounts for how many of C's downloaders downloaded A, so the ratio stays reasonable.

After a series of tests, it turned out that under the first method a few hot resources ranked highly against almost every other resource, while the second method performed well because it considers the linkage in both directions.


III. Coding phase

I wrote a tiny simple program to get this job done.

You can download it here: 


The idea: for every resource pair <A, B>, compute N(A ∩ B) / N(A ∪ B) * 100 as the correlation ratio of <A, B> (and, symmetrically, of <B, A>).

When building the recommendation list for a resource, sort the other resources by this ratio and pick the top N items.
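The per-pair computation and the top-N selection above can be sketched as follows (a minimal Python illustration, assuming each resource maps to the set of user IDs that downloaded it; the names are mine, not those of the original program):

```python
def correlation_ratio(users_a, users_b):
    # N(A ∩ B) / N(A ∪ B) * 100, the symmetric ratio for <A, B> and <B, A>
    union = users_a | users_b
    if not union:
        return 0.0
    return len(users_a & users_b) / len(union) * 100

def recommend(target, downloads, top_n):
    # rank every other resource by its ratio against `target`, keep the top N
    ranked = sorted(
        ((other, correlation_ratio(downloads[target], users))
         for other, users in downloads.items() if other != target),
        key=lambda pair: pair[1], reverse=True)
    return ranked[:top_n]

downloads = {
    "A": {1, 2, 3, 4, 5},
    "B": {4, 5, 6},      # shares users 4 and 5 with A
    "C": {1, 7, 8, 9},   # shares only user 1 with A
}
print(recommend("A", downloads, 2))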


IV. Next step

This is just a tiny experiment, done to satisfy my curiosity about how well and how fast these methods can perform, and what problems I might encounter next.

The time complexity of the algorithm is O(R² · N), where R is the number of resources and N is the average number of users who downloaded a resource (each pair requires a set intersection and union).

It takes a lot of time to finish the calculation.

Apparently there are much better ways to do this.

access the Internet through the blocks in CERNET

I. What is the problem

in this case, I am going to solve two problems at once:

a) as far as I know, overseas access via CERNET averages about 8kb/s of bandwidth, which drives you crazy

b) China's GFW blocks almost every popular website around the world


II. The idea

it turns out that:

a) accessing overseas resources via a proxy set up in a nearby city is usually faster

b) the GFW does not block secure (encrypted) connections

so, supposing I am in Tianjin, I am going to use two servers, A in Beijing and B in the US, plus a maintained website list, to solve the problem.

when:

a) accessing mainland websites, just go directly

b) accessing overseas non-blocked websites, use A as a proxy, which makes things faster

c) accessing overseas blocked websites, reach B through proxy A, then access the target website via B, which gets through the blocks at an impressive speed.


III. Tools

use Proxifier (http://www.proxifier.com/) to redirect connections made from localhost.

use plink (http://the.earth.li/~sgtatham/putty/latest/x86/plink.exe) to set up a secure tunnel from localhost to server A.

use autossh (http://www.harding.motd.ca/autossh/) to keep a persistent secure tunnel from server A to server B; without it, a dropped tunnel would leave us unable to get through the blocks.


IV. How to

a) configure SSH keys for A and B so that A can log in to B without typing a password

b) add the following command to the startup scripts on server A:
autossh -f -M 5678 -CfNg -D portAB serverB

c) set up a plink process on localhost:
plink.exe serverA -N -ssh -2 -D portA

d) create proxy servers in proxifier: 127.0.0.1:portA and 127.0.0.1:portAB

e) combine these two proxies into a proxy chain named Tunnel-US

f) set up rules:
in this case I redirect all connections to US & CA hosts to 127.0.0.1:portA,
and all connections to blocked websites to the chain Tunnel-US

g) enjoy twitter


V. Next step

so far I have to build these rules manually, since there is no ready-made list of which websites are overseas or which are blocked.

there is a page that provides a monitored list of blocked domains: https://en.greatfire.org/search/domains

that makes it possible to fill rules automatically.
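as a rough starting point, that list could be scraped and turned into rules; the sketch below (Python) only does a loose regex extraction of domain-looking strings, since I have not inspected the actual page structure — real code would need a proper parser:

```python
import re
import urllib.request

DOMAIN_RE = re.compile(r"\b((?:[a-z0-9-]+\.)+[a-z]{2,})\b", re.IGNORECASE)

def extract_domains(text):
    # pull anything that looks like a domain out of the page text;
    # deliberately loose, the real page layout may differ
    return sorted({m.group(1).lower() for m in DOMAIN_RE.finditer(text)})

def fetch_blocked_domains(url="https://en.greatfire.org/search/domains"):
    # download the monitored-domains page and extract candidate domains
    with urllib.request.urlopen(url) as resp:
        return extract_domains(resp.read().decode("utf-8", "replace"))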

--- EOF ---

Wednesday, June 5, 2013

generating C99 lexer & parser using antlr v4

This article describes how to generate a parser compatible with the C99 standard using antlr v4.


I. Why antlr v4

antlr v4 implements a new adaptive LL(*) parsing algorithm and provides more features, e.g. automatic parse-tree construction. Besides, it ships with test tools that can be used to make sure the lexer and parser work safe and sound.

So we are doing this with antlr v4 instead of antlr v3: http://www.antlr.org/download.html

II. Combined rules

antlr's methodology depends on two sets of rules: lexer and parser.
The lexer takes a raw input stream (the source code) and outputs a token stream.
The parser takes the token stream and outputs a parse tree as the final result (so far).
We combine these two sets of rules into a single grammar file with the extension .g4.


III. Grammar file

I found that antlr v3 officially provided a set of grammars including C, Java and CSharp. Unfortunately, these won't work in antlr v4, because several options have been obsoleted and several rules are not compatible with the new algorithm implemented in antlr v4.

To get it working, I made several modifications to the rules; the result is here: https://www.dropbox.com/s/zxl39ft2xyh1q5o/C.g4


IV. Generating java classes

use the following command (with antlr-4.0-complete.jar on the classpath) to generate java classes from the grammar:
java org.antlr.v4.Tool C.g4

If it succeeds, at least these two files will be generated: CLexer.java and CParser.java.



V. Compile and test it

use the following command line:
javac.exe C*.java
java org.antlr.v4.runtime.misc.TestRig C translation_unit -gui -trace -diagnostics < main.c 1> test.log 2>&1

If nothing goes wrong, you will see a tree graph after a short while, which means everything works so far.


VI. Integrating to your own java project

First make sure your project has antlr-4.0-complete.jar on the class path; then add the auto-generated classes to your project.

now you can create your own code to sort things out:

ANTLRInputStream input = new ANTLRInputStream(inputStream);
CLexer lexer = new CLexer(input);
CommonTokenStream tokens = new CommonTokenStream(lexer);

CParser parser = new CParser(tokens);
parser.setBuildParseTree(true);
ParserRuleContext tree = parser.translation_unit();

Now we have the parse tree.

you can use org.antlr.v4.runtime.tree.gui.TreeViewer to display the tree on your JFrame.

--- EOF ---