Currently the general rule when using SSL is that you need one IP address for each hostname you want to secure. That will change once TLS's Server Name Indication (SNI) extension is widely supported. In the meantime, if you are lucky enough to only need to secure multiple subdomains of the same domain, a wildcard SSL cert will do the job, so keep reading.

1. Ensure that your Apache config includes:

NameVirtualHost *:443

2. Your vhosts:

<VirtualHost *:443>
ServerName subdomain1.example.com
……
SSLEngine on
SSLProtocol all -SSLv2
SSLCipherSuite ALL:!ADH:!EXPORT:!SSLv2:RC4+RSA:+HIGH:+MEDIUM:+LOW
# Every vhost points at the same wildcard certificate and key
SSLCertificateFile /path/to/your/ssl.crt
SSLCertificateKeyFile /path/to/your/ssl.key
……
</VirtualHost>
<VirtualHost *:443>
ServerName subdomain2.example.com
……
SSLEngine on
SSLProtocol all -SSLv2
SSLCipherSuite ALL:!ADH:!EXPORT:!SSLv2:RC4+RSA:+HIGH:+MEDIUM:+LOW
# Same wildcard certificate and key as above
SSLCertificateFile /path/to/your/ssl.crt
SSLCertificateKeyFile /path/to/your/ssl.key
……
</VirtualHost>
If my understanding of Apache is correct, it will enter the first SSL virtual host it finds (the first one above, in this case) and use the certificate details there to decrypt the request. If the hostname then does not match that ServerName, it moves along to the next virtual host it can match and tries there. Since every vhost presents the same wildcard certificate, the handshake succeeds regardless of which subdomain the client requested.
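
As a quick sanity check that every subdomain is being served the same wildcard certificate, something like the following Python sketch works (the hostnames are placeholders for your own subdomains):

import socket
import ssl

def served_cert_subject(host, port=443):
    """Open a TLS connection and return the subject of the certificate the server presents."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert()["subject"]

# Each subdomain should report the same *.example.com wildcard subject.
for host in ("subdomain1.example.com", "subdomain2.example.com"):
    print(host, served_cert_subject(host))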

I am currently working on a new startup called ShootProof. ShootProof makes use of many Amazon Web Services. We have recently been hearing sporadic feedback from our beta testers that their uploads are sometimes slower than they think they should be. Currently we accept uploads by sending each file via XMLHttpRequest to our EC2 instances, doing some quick inspection of the file, and then storing it in an upload bucket. A few moments later a resizer batch job comes along, does the resizing/watermarking/other processing on the photo, and moves it into place.

After we started to investigate why some beta testers were sometimes getting slower-than-ideal upload speeds, we decided to test out the ability to do our uploads directly to S3. Amazon S3 supports HTTP POST uploads, which is great as it takes us out of the middle of all of that traffic. Essentially this means that users of ShootProof should never be limited upload-wise by our EC2 instances, and we will not need to constantly spool EC2 instances up and down to handle load spikes. After each upload is completely sent to S3 we will fire off a small notification call that lets us know we have a new photo to take care of. Upload traffic to our EC2 instances will drop by at least 99%. To be sure we never miss a new photo placed into our S3 upload bucket, we will also monitor the contents of the upload bucket to ensure they match what we are expecting. All photos uploaded into S3 by the user are marked with an ACL of private, so the bucket essentially acts as a dropbox.
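As an illustration of the server-side piece, here is a minimal sketch using today's boto3 (the bucket name and key prefix are hypothetical, and the original implementation would have signed its POST policy documents by hand):

import boto3

s3 = boto3.client("s3")

def make_upload_form(filename):
    """Return the URL and signed form fields a browser needs to POST a file straight to S3."""
    return s3.generate_presigned_post(
        Bucket="shootproof-uploads",                        # hypothetical upload bucket
        Key="incoming/" + filename,
        Fields={"acl": "private"},                          # uploads land private, dropbox-style
        Conditions=[
            {"acl": "private"},
            ["content-length-range", 1, 50 * 1024 * 1024],  # reject anything over 50MB
        ],
        ExpiresIn=3600,                                     # policy is valid for one hour
    )

The browser POSTs the file along with those fields to the returned URL, and S3 rejects any upload that does not match the signed policy.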

Below is a table showing the tests we ran to reach our conclusion to POST directly to S3 (a sketch for timing uploads like this yourself follows the table). The file used for the test is a 13.1MB JPEG, and all uploads were done over an internet connection with a full 10Mbit of upstream bandwidth. All times are in seconds.

XMLHttpRequest POST (EC2 -> S3) | HTTPS S3 POST | HTTP S3 POST
13.2                            | 20.6          |  9.5
14                              | 19.1          |  9.6
13.7                            | 22.7          | 10.3
14.3                            | 24.9          |  9.3
13.7                            | 15.3          |  9.4
13.9                            | 18.4          |  9.2
24.1                            | 24.2          |  9.7
13.7                            | 17.5          |  9.4
13.8                            | 17.3          |  9.9
15.2                            | 17.4          |  9.4
Average: 14.96 sec              | 20.04 sec     |  9.57 sec
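
If you want to reproduce this kind of measurement, a minimal timing harness in Python with the requests library might look like the following (the endpoint URL is a placeholder, and a real S3 POST would also need the signed policy fields shown earlier):

import time
import requests

def time_upload(url, path):
    """POST a file and return the elapsed wall-clock time in seconds."""
    with open(path, "rb") as f:
        start = time.monotonic()
        requests.post(url, files={"file": f})
    return time.monotonic() - start

print("%.1f sec" % time_upload("https://example-bucket.s3.amazonaws.com/", "photo.jpg"))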

With AT&T’s new A-List feature for accounts above a certain monthly threshold comes a new problem: which numbers to include in the list. While this might be easy for some, it wasn’t that simple for me, so I wrote a little script to compute what my optimal A-List would be. If you are an AT&T wireless customer, give this a shot; it might save you some money or add to your rollover balance! If you have any questions about this script or find a bug, you can find my contact information on the about page.
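
The script itself is not reproduced here, but the idea behind it is simple: calls to A-List numbers are unlimited, so you want the handful of numbers that account for the most billed minutes. A minimal Python sketch of that selection (the call-log format and the five-number list size are assumptions):

from collections import Counter

def optimal_alist(calls, slots=5):
    """Pick the numbers that save the most minutes if placed on the A-List.

    calls is an iterable of (phone_number, minutes) pairs taken from your bill.
    """
    minutes_by_number = Counter()
    for number, minutes in calls:
        minutes_by_number[number] += minutes
    # The numbers with the most total billed minutes save the most once unlimited.
    return minutes_by_number.most_common(slots)

# Example: entries pulled from a few months of bills.
calls = [("555-0100", 120), ("555-0101", 45), ("555-0100", 90), ("555-0102", 300)]
for number, minutes in optimal_alist(calls):
    print(number, minutes, "min")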