stev0dundun 270005274B Tags: team_red ios application qr_code qr facebook hackathon android 1,291 views
-JOSH CONSTINE, Sunday, via TechCrunch
When Mark Zuckerberg called for a “Space Hackathon” to decorate Facebook’s massive new headquarters at 1 Hacker Way, he probably didn’t expect employees to take him so literally. A few scurried up to the roof with some tar paint, and now there’s a 42-foot wide QR code on the roof that’s visible from space.
Scanning it opens the new FB QR Code Page on Facebook which may host puzzles, jokes, and other flavor to humanize the company. For now you’ll need an airplane or Facebook security badge to get a look at it first-hand, but once indexed it should appear on your favorite satellite mapping website.
When the Hackathon was announced, most employees imagined beautifying the campus with posters and spray-paint murals, but Mark Pike had something bigger in mind. “It started with a comment on Zuck’s post. I wrote, ‘Hack yeah! I’d like to paint a gigantic QR code somewhere so we can RickRoll online maps, or point people to our careers site, or send them to a “Clarissa Explains It All” GeoCities page,’” Pike says.
At 8pm the night of the Hackathon, Pike and 5 others started on the project. By midnight they had a crew of 30, but it would take until daylight to see whether they screwed up the layout. An employee hacked a Canon camera’s firmware, strapped it to a home-made remote-controlled “frankencopter”, and flew it over the roof.
When it landed with the photos, the team finally found out that the code really works. Facebook let us on the roof to check it out, and it’s pretty epic. You can try scanning the big picture above with Scan for iOS, QR Droid for Android, or most other QR apps.
What started in a dorm room has matured into one of the world’s most powerful companies. Now there are database architecture and business models to worry about. But with a silly 42-foot wide QR code on the roof, Facebook proves it’s still young at heart.
stev0dundun 270005274B Tags: social_networking like_button techcrunch zuckerberg team_red facebook 954 views
-JON ORLIN, via TechCrunch
In 2010, TechCrunch broke the news that Facebook was going to release a “Like” button for the whole darn Internet. Now, TechCrunch has learned Facebook is considering a “Hate” button as well.
According to Facebook’s S-1 filing, users are now generating 2.7 billion Likes and Comments per day. With the Hate button, Facebook expects to at least double that. The S-1 noted “popular Pages on Facebook include Lady Gaga, Disney, and Manchester United, each of which has more than 20 million Likes.” Many inside the company think the Hates could easily top that.
When the original Like button was announced, Mark Zuckerberg made a bold prediction there would be over 1 billion Likes across the web in just the first 24 hours. Sources at Facebook say Mark is estimating 2 billion Hates on the first day. Facebook studies have shown the sad fact that people hate things on the Internet more than they like things. There’s also an internal debate on whether the new button should be called “Hate” or “Dislike.”
Since the tiny Like button makes up such a huge part of Facebook’s revenue, the introduction of the Hate button could raise Facebook’s valuation further ahead of the IPO.
Facebook has already shown they are open to changing the Like button. Earlier this month, Facebook Mobile replaced the 2-Click Like button with a 1-Click Like bar.
The company has also experimented with the “Fax” button, as TechCrunch was also the first to notice.
Other buttons under consideration are the “Meh”, “Love”, “Who Cares”, and “+11”, but there is also a fear this could lead to a button explosion.
Our sources say the Hate button is not a sure thing. It’s being heavily debated inside the social networking company. This new feature would fit with Facebook’s mission to “build tools to help people connect with the people they want and share what they want” whether that’s love or hate.
While the product and sales teams favor the idea, many inside Facebook oppose it. That view is best summed up by Robert Scoble, who wrote: “I really hope we never see a hate button that gets wide adoption. The world has enough hate as it is.”
Since Facebook is in its quiet period ahead of the IPO, the company had no official comment on this report.
Piriform is a private limited company founded in London, UK, in 2004; its offices are at 78 York Street, London. CCleaner, one of the company’s main products, cleans unwanted leftover files from programs such as Internet Explorer, Google Chrome, Opera, Safari, Windows Media Player, eMule, Google Toolbar, Netscape, Microsoft Office, Nero, Adobe Acrobat, Adobe Flash Player, Sun Java, WinRAR, and other applications. CCleaner also includes a Windows Registry cleaner and an uninstall tool. It can be installed on both Windows and Mac OS X, is free, and is available in 47 languages.
stev0dundun 270005274B Tags: cloud box.net dropbox google+ team_red google google_drive cloud_storage 1,399 views
-Caleb Garling, via Wired.com/Cloudline
A purported leaked screenshot of a Google Drive download page. Image courtesy of TalkAndroid
This morning, Ars Technica’s Jon Brodkin reported on newly leaked images of Google Drive, a rumored cloud storage offering from the web storage king. Gmail already offers more than 7 gigabytes (GB) of email storage, so the question was: what will Google Drive offer? From these leaked images (the validity of which is unverified) the answer is 5GB, whereas last week’s leak suggested 2GB.
Why the screenshots would show different starting sizes is unclear. Assuming the images are real, they could have been taken from different stages of development, or may not be representative of the final version that Google will offer.
In the last few months rumors have swirled and many have speculated on when exactly Google will jump into the consumer cloud storage market. Companies like Dropbox and Box both offer gigabytes of free storage so that anyone can access their PDFs, pictures and documents from any web browser. Despite not yet having a dedicated service, Google is still the web’s storage giant, so such a product is inevitable.
Have your say: Will you take Google Drive for a ride if 5 GB is on offer? Should the folks at Dropbox be worried?
-Mike Barton, via Wired.com/Cloudline
While coverage ahead of Oracle’s fiscal third-quarter results yesterday focused on the company losing ground to younger cloud rivals, my question of “But for how long?” did not take long to be answered, sort of.
“After a long period of testing … Oracle’s cloud applications will be generally available. We’ve named our cloud the Oracle Secure Cloud,” Oracle CEO Larry Ellison said during yesterday’s analyst call about its Q3 results.
Oracle President Mark Hurd also stated in the press release before the call, “…Fusion in the Cloud is winning with great success against niche HCM cloud vendors in the US and worldwide. Our modular, integrated platform of 100 apps available in the cloud or on-premise is a key differentiator.”
Over at Forbes, Victoria Barret highlights how Ellison could not resist going after big-fish rivals Salesforce.com and SAP after he planted the Oracle Secure Cloud brand flag in the ground:
“Here he couldn’t help but take aim at Salesforce.com, suggesting that the company run by his former protege, Marc Benioff, can’t offer the same level of security. Benioff has long ridiculed Ellison for selling legacy software systems not able to keep pace with the shift to cloud computing.”
Then Ellison swiftly moved on to SAP, explaining that the German rival hasn’t yet moved its heavy-duty business software suite to the cloud. “Six years ago we made the decision to write Fusion. It will take years for SAP to catch up,” he said. SAP’s Web offering, called Business ByDesign, seems so far limited to smaller customers. SAP in December announced 1,000 customers for the product. Then again, Ellison did not mention customer names for Fusion or Secure Cloud.
Oracle Secure Cloud, which will be available in the next few weeks, is a private cloud, rented by the month and managed by Oracle, but living behind a company’s own firewall in their data center. “Salesforce.com does not offer this kind of security in their cloud. This is a key advantage for us,” Ellison said during the call yesterday. “But by far our biggest application competitor is SAP, not Salesforce.com. And SAP does not even offer CRM, HCM and financial applications in the cloud to their large customers.”
“Six years ago we made the decision to rewrite our ERP and CRM suite for the cloud. We called it Fusion. SAP called it confusion,” Ellison said. “It will take years for SAP to catch up.”
Ellison, well known for his flamboyance and fierce competitiveness, even went so far as to question SAP’s sobriety over its focus on building Oracle competitor HANA. “When SAP, and specifically Hasso Plattner, said they’re going to build this in-memory database and compete with Oracle, I said, God, get me the name of that pharmacist, they must be on drugs,” he told analysts yesterday. “That was interpreted by Hasso as Larry doesn’t believe in in-memory databases… We’ve been working on in-memory databases for 10 years. We have the world’s leading in-memory database. It’s called TimesTen.”
So that’s where Oracle has planted its flag with regard to the cloud, in contrast to Salesforce and Workday, and butting heads with SAP. Is that going to do the trick? Is Oracle keenly aware of what the marketplace wants, or is it putting its game face on given what it can offer in terms of cloud now? Do the RightNow and Taleo buys, in addition to Fusion (in the cloud) give it enough to go on? Have your say in the comments section, below.
stev0dundun 270005274B Tags: feedback profit google team_red consumer consumer_surveys 977 views
-Adario Strange, via PCMag.com
In recent years Google has taken its fair share of criticism from publishers as its Google News aggregation and AdWords micro-advertising have disrupted traditional publishing in major ways. But a new product quietly launched by Google this week might provide a powerful new business model for online publishing.
Google Consumer Surveys allows publishers to make money from running various micro-surveys on their sites. When a user visits a participating site, they will be presented with a survey before being allowed access to the content (text, video, or apps). Think of it as a soft paywall in which the user still gets the content for free, and doesn't need to register, but can't simply click the well-known "skip this ad" link to access the desired content. Once the short survey is filled out, the user gets her content for free, the publisher earns a small payment, and the company behind the survey gets the valuable market data it was looking for from a real, sometimes demographically specific person.
Large or small companies can target survey questions toward the general U.S. population for $0.10 per response (or $150.00 for 1,500 responses), or opt for demographic targeting at $0.50 per response ($750.00 for 1,500). Insights are grouped by demographics including income, location (U.S. Northeast, South, Midwest, West Coast), age (18-24, 25-34, 35-65+) and gender.
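The arithmetic above is simple enough to put in code; this is a hypothetical helper for illustration, not part of any Google API:

```javascript
// Prices quoted in the article: $0.10 per general-population response,
// $0.50 per demographically targeted response.
function surveyCost(responses, demographicTargeting = false) {
  const perResponseCents = demographicTargeting ? 50 : 10;
  return (responses * perResponseCents) / 100; // work in cents to avoid float drift
}

surveyCost(1500);       // → 150
surveyCost(1500, true); // → 750
```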
After setting up a survey, companies have the ability to view extremely detailed breakdowns of the survey answer data. Alcohol, tobacco, gambling, and pharmaceutical products are currently excluded from the program. Publishers already set up to use the survey tool with their content offerings include The Texas Tribune, the Star Tribune, and Adweek.
"The idea behind Google Consumer Surveys is to create a model that benefits everyone," said Google product manager Paul McDonald. "You get to keep enjoying your favorite online content, publishers have an additional option for making money from that content, and businesses have a new way of finding out what their customers want."
Upon further inspection, it does appear that Google may have finally discovered the Holy Grail for monetizing digital content in a way that benefits everyone. Few consumers have a problem filling out short, anonymous surveys; most online publishers have already learned that surveys are a fun way to engage visitors, particularly on niche sites; and large companies live and die on the vital data that market research provides about emerging trends and current consumer tastes.
stev0dundun 270005274B Tags: cloud ibm privacy transparency team_red information personal_data 921 views
-Jon Udell, via Wired.com/Cloudline (sponsored by IBM)
As we migrate personal data to the cloud, it seems that we trade convenience for privacy. It’s convenient, for example, to access my address book from any connected device I happen to use. But when I park my address book in the cloud in order to gain this benefit, I expose my data to the provider of that cloud service.
When the service is offered for free, supported by ads that use my personal info to profile me, this exposure is the price I pay for convenient access to my own data. The provider may promise not to use the data in ways I don’t like, but I can’t be sure that promise will be kept.
Is this a reasonable trade-off?
For many people, in many cases, it appears to be. Of course we haven’t, so far, been given other choices. And other choices can exist. Storing your data in the cloud doesn’t necessarily mean, for example, that the cloud operator can read all the data you put there. There are ways to transform it so that it’s useful only to you, or to you and designated others, or to the service provider but only in restricted ways.
Early Unix systems kept users’ passwords in an unprotected system file, /etc/passwd, that anyone could read. This seemed crazy when I first learned about it many years ago. But there was a method to the madness. The file was readable, so anyone could see the usernames. But the passwords were transformed, using a cryptographic hash function, into gibberish. The system didn’t need to remember your cleartext password. It only needed to verify that when you typed your cleartext password at logon, the operation that originally encoded its /etc/passwd equivalent would, when repeated, yield a matching result.
Everything old is new again. When it was recently discovered that some iPhone apps were uploading users’ contacts to the cloud, one proposed remedy was to modify iOS to require explicit user approval. But in one typical scenario that’s not a choice a user should have to make. A social service that uses contacts to find which of a new user’s friends are already members doesn’t need cleartext email addresses. If I upload hashes of my contacts, and you upload hashes of yours, the service can match hashes without knowing the email addresses from which they’re derived.
In the post Hashing for privacy in social apps, Matt Gemmell shows how it can be done. Why wasn’t it? Not for nefarious reasons, Gemmell says, but rather because developers simply weren’t aware of the option to use hashes as a proxy for email addresses.
The best general treatise I’ve read on this topic is Peter Wayner’s Translucent Databases. I reviewed the first edition a decade ago; the revised and expanded second edition came out in 2009. A translucent system, Peter says, “lets some light escape while still providing a layer of secrecy.”
Here’s my favorite example from Peter’s book. Consider a social app that enables parents to find available babysitters. A conventional implementation would store sensitive data — identities and addresses of parents, identities and schedules of babysitters — as cleartext. If evildoers break into the service, there will be another round of headlines and unsatisfying apologies.
A translucent solution encrypts the sensitive data so that it is hidden even from the operator of the service, while yet enabling the two parties (parents, babysitters) to rendezvous.
How many applications can benefit from translucency? We won’t know until we start looking. The translucent approach doesn’t lie along the path of least resistance, though. It takes creative thinking and hard work to craft applications that don’t unnecessarily require users to disclose, or services to store, personal data. But if you can solve a problem in a translucent way, you should. We can all live without more of those headlines and apologies.
stev0dundun 270005274B Tags: greece team_red thepiratebay low_orbit_server_stations torrent loss airspace 788 views
FOR IMMEDIATE RELEASE.
Athens, Greece - Political leaders in Athens, Greece, today signed an agreement with representatives of The Pirate Bay (TPB) for exclusive usage of the Greek airspace at 8000-9000 ft.
- This might come as a shock for many but we believe that we need to both raise money to pay our debts as well as encourage creativity in new technology. Greece wants to become a leader in LOSS, says Lucas Papadams, the new and crisply elected Prime Minister of Greece.
The LOSS he is referring to is not the state of the country’s finances but rather Low Orbit Server Stations, a new technology recently invented by TPB. Having long been a leader in other types of LOSS, TPB has been working hard on making LOSS a viable solution for achieving 100% uptime for its services.
- Greece is one of few countries that understands the value of LOSSes. We have been talking to them ever since we came up with the solution seeing that we have equal needs of being able to find financially sustainable solutions for our projects, says Win B. Stones, head of R&D at TPB.
The agreement gives TPB a 5-year license to use and re-distribute usage of the airspace at 8000-9000 ft as well as unlimited usage of the radio spectrum between 2350 and 24150 MHz. Due to the financial situation of both parties, TPB will pay the costs in digital goods, sorely needed by the citizens of Greece.
stev0dundun 270005274B Tags: wired web_standards high_def mobile ipad browsers apple web_basics team_red 1,191 views
-Scott Gilbertson, via WebMonkey.com
The high-resolution retina display iPad has one downside: normal-resolution images look worse than they do on lower-resolution displays. On the web that means that text looks just fine, as does any CSS-based art, but photographs look worse, sometimes even when they’re actually high-resolution images.
Pro photographer Duncan Davidson was experimenting with serving high-resolution images to the iPad 3 when he ran up against what seemed to be a limit to the resolution of JPG images in WebKit. Serving small high-resolution images — in the sub-2000px range — works great, but replacing 1000px wide photographs with 2000px wide photos actually looks worse due to downsampling.
The solution, it turns out, is to go back to something you probably haven’t used in quite a while: progressive JPGs. It’s a clever solution to a little quirk in Mobile Safari’s resource limitations. Read Davidson’s follow-up post for more details, and be sure to look at the example image if you’ve got a new iPad, because more than just a clever solution, this is what the future of images on the web will look like.
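Producing a progressive JPG is just a re-encode; assuming ImageMagick or libjpeg's jpegtran is installed, either of these commands works (file names are illustrative):

```shell
# ImageMagick: re-encode as a progressive (interlaced) JPG
convert photo.jpg -interlace Plane photo-progressive.jpg

# libjpeg's jpegtran: lossless transformation to progressive
jpegtran -progressive -outfile photo-progressive.jpg photo.jpg
```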
As Davidson says:
For the first time, I’m looking at a photograph I’ve made on a screen that has the same sort of visceral appeal as a print. Or maybe a transparency laying on a lightbox. Ok, maybe not quite that good, but it’s pretty incredible. In fact, I really shouldn’t be comparing it to a print or a transparency at all. Really, it’s its own very unique experience.
But how could you go about serving the higher res image to just those screens with high enough resolution and fast enough connections to warrant it?
So what’s a web developer with high-res images to show off supposed to do? Well, right now you’re going to have to decide between all or nothing. Or you can use a hack like one of the less-than-ideal responsive image solutions we’ve covered before.
Right now visitors with the new iPad are probably a minority for most websites, so not that many people will be affected by low-res or poorly rendered high-res images. But Microsoft is already prepping Windows 8 for high-res retina-style screens and Apple is getting ready to bring the same concept to laptops.
The high-res future is coming fast and the web needs to evolve just as fast.
In the long run that means the web is going to need a real responsive image solution, something that’s part of HTML itself. A new HTML element like the proposed <picture> tag is one possible solution. The picture element would work much like the video tag, with code that looks something like this:
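The sample markup appears to have been dropped from this repost; below is a sketch of the proposed syntax as it stood at the time (file names and breakpoints are illustrative, and the element was still under discussion):

```html
<picture alt="a photo">
  <source src="small.jpg">
  <source src="medium.jpg" media="(min-width: 600px)">
  <source src="large.jpg" media="(min-width: 1000px)">
  <!-- fallback for browsers without picture support -->
  <img src="small.jpg" alt="a photo">
</picture>
```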
The browser uses this code to choose which image to load based on the current screen width.
The picture element would solve one part of the larger problem, namely serving the appropriate image to the appropriate screen resolution. But screen size isn’t the only consideration; we also need a way to measure the bandwidth available.
At home on my Wi-Fi connection I’d love to get Davidson’s high-res images on my iPad. When I’m out and about using a 3G connection it would be better to skip that extra overhead in favor of faster page load times.
Ideally browsers would send more information about the user’s environment along with each HTTP request. Think screen size, pixel density and network connection speed. Developers could then use that information to make a better-informed guess about which images to serve. Unfortunately, it seems unlikely we’ll get such tools standardized and widely supported before the high-res world overtakes the web. With any server-side solution to the bandwidth problem still far off, client-side hints like navigator.connection will only become more valuable in the meantime.
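A feature-detected sketch of what that guess might look like; navigator.connection is non-standard and its fields vary by browser, and the file names here are made up:

```javascript
// Pure decision function: usable anywhere, testable without a browser.
function chooseImage(pixelRatio, connectionType) {
  const fastEnough = connectionType === 'wifi' || connectionType === '4g';
  return pixelRatio >= 2 && fastEnough ? 'photo@2x.jpg' : 'photo.jpg';
}

// In a browser, wire it to whatever hints exist (all feature-detected):
// const conn = navigator.connection || {};
// const src = chooseImage(window.devicePixelRatio || 1, conn.type || 'wifi');
```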
Further complicating the problem are two additional factors, data caps on mobile connections and technologies like Apple’s AirPlay. The former means that even if I have a fast LTE connection and a high-resolution screen I still might not want to use my limited data allotment to download high-res images.
AirPlay means I can browse to a site with my phone (which would likely trigger smaller images and videos, since it’s a smaller screen) but then project the result on a huge HD TV screen. This is not even a hypothetical problem; you can experience it today with PBS’s iPhone app and AirPlay.
Want to help figure out how the web needs to evolve and what new tools we’re going to need? Keep an eye on the W3C’s Responsive Images community group, join the mailing list and don’t be shy about contributing. Post your experiments on the web and document your findings like Davidson and countless others are already doing.
It’s not going to happen overnight, but eventually the standards bodies and the browser makers are going to start implementing solutions and the more test cases that are out there, the more experimenting web developers have done, the better those solutions will be. It’s your web after all, so make it better.
stev0dundun 270005274B Tags: http protocol internet google spdy backend web_basics team_red web_standards microsoft 1,143 views
-Scott Gilbertson, via WebMonkey.com
Microsoft wants in on the drive to speed up the web. The company plans to submit its proposal for a faster internet protocol to the standards body charged with creating HTTP 2.0.
Not coincidentally, that standards body, the Internet Engineering Task Force (IETF), is meeting this week to discuss the future of the venerable Hypertext Transfer Protocol, better known as HTTP. On the agenda is creating HTTP 2.0, a faster, modern approach to internet communication.
One candidate for HTTP 2.0 is Google’s SPDY protocol. Pronounced “speedy,” Google’s proposal would replace the HTTP protocol — the language currently used when your browser talks to a web server. When you request a webpage or a file from a server, chances are your browser sends that request using HTTP. The server answers using HTTP, too. This is why “http” appears at the beginning of most web addresses.
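For readers who have never seen it spelled out, a minimal HTTP/1.1 exchange looks like this (headers and body are illustrative, response truncated):

```
GET /index.html HTTP/1.1
Host: www.example.com

HTTP/1.1 200 OK
Content-Type: text/html
Content-Length: 1270

<html>…
```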
The SPDY protocol handles all the same tasks as HTTP, but SPDY can do it all about 50 percent faster. Chrome and Firefox both support SPDY and several large sites, including Google and Twitter, are already serving pages over SPDY where possible.
Part of the IETF’s agenda this week is to discuss the SPDY proposal, and the possibility of turning it into a standard.
But now Microsoft is submitting another proposal for the IETF to consider.
Microsoft’s new HTTP Speed+Mobility lacks a catchy name, but otherwise appears to cover much of the same territory SPDY has staked out. Though details on exactly what HTTP Speed+Mobility entails are thin, judging by the blog post announcing it, HTTP Speed+Mobility builds on SPDY but also includes improvements drawn from work on the HTML5 WebSockets API. The emphasis is on not just the web and web browsers, but mobile apps.
“We think that apps — not just browsers — should get faster,” writes Microsoft’s Jean Paoli, General Manager of Interoperability Strategy.
To do that, Microsoft’s HTTP Speed+Mobility “starts from both the Google SPDY protocol and the work the industry has done around WebSockets.” What’s unclear from the initial post is exactly where HTTP Speed+Mobility goes from that hybrid starting point.
But clearly Microsoft isn’t opposed to SPDY. “SPDY has done a great job raising awareness of web performance and taking a ‘clean slate’ approach to improving HTTP,” writes Paoli. “The main departures from SPDY are to address the needs of mobile devices and applications.”
SPDY co-inventor Mike Belshe writes on Google+ that he welcomes Microsoft’s efforts and looks forward to “real-world performance metrics and open source implementations so that we can all evaluate them.”
Belshe also notes that Microsoft’s implication that SPDY is not optimized for mobile “is not true.” Belshe says that the available evidence suggests that developers are generally happy using SPDY in mobile apps, “but it could always be better, of course.”
The process of creating a faster HTTP replacement will not mean simply picking any one vendor’s protocol and standardizing it. Hopefully the IETF will take the best ideas from all sides and combine them into a single protocol that can speed up the web. The exact details — and any potential speed gains — from Microsoft’s HTTP Speed+Mobility contribution remain to be seen, but the more input the IETF gets the better HTTP 2.0 will likely be.
-Scott Gilbertson, via WebMonkey.com
The corporate social web still sucks
Expert Labs, the non-profit organization behind ThinkUp, a web-based data-liberation and analytics application, is rebooting into a commercial entity.
No need to panic if you use ThinkUp to back up your social network life; the application will remain open source and freely available.
But Expert Labs is going away and ThinkUp is refocusing on a larger goal — liberating your online social life from the clutches of corporate web entities.
In its own words the new ThinkUp wants to build “an information network that connects to today’s social networks, but isn’t centralized and dependent on a company or investors.”
That’s not an entirely new idea. Diaspora and some other projects are trying to do the same thing, but ThinkUp is taking a different approach — it wants to build an app first and focus on the user experience rather than the underlying technology.
In fact ThinkUp already is an app that’s pretty close to what it’s aiming to do. ThinkUp is a web-based app that pulls your data out of social silos like Facebook or Twitter and stores it on your own server. You control your own data, and have a record of your conversations potentially long after Facebook, Twitter and the rest have become mere footnotes in the history of the web.
For more on how ThinkUp works and how you can use it be sure to check out our earlier coverage and then grab the code and try it for yourself.
So what of ThinkUp’s new, loftier goals? Is any attempt to replace Facebook doomed to failure? Of course not. Everything is replaceable, just ask MySpace. And ThinkUp believes its approach is different. “Prior attempts have tried to solve this problem based on the assumption that it is a technical challenge,” says ThinkUp’s Knight News Challenge application. “We believe it to be a social one.” ThinkUp’s focus going forward will be on the social and the interface:
We will draw people in through a compelling media site that encourages participation via our decentralized platform… a peer-to-peer network that powers a great media property with broad appeal — imagine if Digg or Reddit were open, decentralized and powered by a network instead of votes.
If you’re curious to know what that might look like, head on over to the ThinkUp proposal for the Knight News Challenge and click the heart icon to “like” it (incidentally, if the Knight News Challenge sounds familiar, that might be because it’s also the birthplace of EveryBlock). In the meantime, work on the ThinkUp app continues with a new release that improves the charts and graphs and paves the way for the coming Foursquare support. Check out the ThinkUp GitHub page for more details.
-Scott Gilbertson, via WebMonkey.com
Yahoo has announced it will soon support the Do Not Track privacy header across its sprawling network of websites. Supporting Do Not Track means you will soon be able to easily tell Yahoo to stop tracking your movements around the web.
Much like the Do Not Call registry, the Do Not Track system offers a way to opt out of third-party web tracking.
The Do Not Track header began life at Mozilla, but has since moved to the W3C where it was converted into a web standard by the Tracking Protection Working Group.
The Do Not Track header now works in every major desktop browser except Google Chrome, though none of them turn it on by default. Still, for privacy-concerned users savvy enough to enable Do Not Track, the header offers a quick and easy way to tell advertisers that you don’t want to be followed while you browse the web.
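The mechanism itself is tiny: a browser with the preference enabled adds a DNT: 1 header to every request, and a cooperating server checks for it before tracking. A sketch using Node-style lower-cased header names; recordVisit is hypothetical:

```javascript
// True when the visitor has asked not to be tracked.
function dntEnabled(headers) {
  return headers['dnt'] === '1';
}

// e.g. inside an http.createServer((req, res) => { ... }) handler:
// if (!dntEnabled(req.headers)) recordVisit(req);
```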
Numerous online advertising groups already respect the Do Not Track header and refrain from tracking users that enable it. Today’s announcement means that, starting this summer, you can add Yahoo to the list of companies that will stop tracking you if you’ve enabled Do Not Track in your web browser.
Of course, there are still many advertisers and websites that don’t yet support Do Not Track. If you’re concerned about your online privacy and don’t want to rely on the goodwill of advertisers, there are other, more aggressive steps you can take to limit how you’re tracked on the web.
In “The hidden risk of a meltdown in the cloud,” a Technology Review blogger reacts to a paper by Bryan Ford on “the unrecognised risks of cloud computing.” I don’t know, the risks seem familiar to me. Beyond security, they are:
Unpredictable interactions among loosely-coupled services
Inability to preserve or reproduce an application or data set
The Technology Review blogger, who is evidently known only by the nom de plume Kentucky FC, echoes Ford’s conclusion: We ought to study these risks “before our socioeconomic system becomes completely and irreversibly dependent on a computing model whose foundations may still be incompletely understood.”
OK, yes, we should study the risks. But that doesn’t mean we can’t engage with the cloud while doing so. It isn’t an all-or-none proposition.
Think about our relationship to the power grid. We are, in fact, irrevocably committed to it. And it is prone to occasional dramatic failures. I have a few friends who live off the grid, but most of us plug in, and then some hedge their bets to varying degrees. Do you own a generator? If so, how much of your demand does it power? And for how long? An hour? A day? A week?
For enterprises, a hybrid strategy that blends cloud and on-premise resources is gaining traction. That’ll make sense for individuals too. Our personal clouds encompass resources both in the sky and scattered across our own devices. As we extend into the cloud we’ll learn how to use it to complement the strengths and offset the weaknesses of our local setups. There is, as always, a continuum of risk and benefit. We’ll make personal choices to occupy points along that continuum. And those points will drift over time.
Meanwhile, let’s consider one analogy drawn by Bryan Ford and echoed by Kentucky FC.
Ford: Non-transparent layering structures … may create unexpected and potentially catastrophic failure correlations, reminiscent of financial industry crashes.
KFC: The cloud could suffer the same kind of collapses that plague the financial system….
It’s true that the unpredictability of complex interaction is a similar concern in both realms. But when things have gone wrong, cloud providers have been refreshingly open about it. Consider the post-mortems for some notable Amazon Web Services (AWS) and Azure outages. Both set a high standard, explaining what went wrong, why, how it was fixed, and what steps are being taken to prevent a recurrence.
We can only dream of a financial industry that runs as transparently, and holds itself to such a standard.
Search is the great triumph of computer science and mathematics. A multi-billion-dollar industry was built from a highly technical paper about random walks on a web that was growing exponentially and becoming ever harder to navigate.
Google’s search breakthrough ensured that the web would not be a victim of its own success.
Now, the social web faces a similar problem. It is enormous, and growing, and central to our lives. There are many successful companies in the social space, just as there were search leaders before Google emerged. Yet so far there is no Google for the social graph.
It’s a huge opportunity. But the challenges may be even more daunting than dynamically assigning relevance to any given web page — as huge an idea as that was when Google re-invented search with PageRank.
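The random-walk idea behind PageRank is simple enough to sketch in a few lines. This is an illustrative power-iteration toy, not Google’s implementation; the graph and damping factor follow the convention of the original paper (0.85):

```python
# Toy PageRank via power iteration (illustrative sketch, not Google's code).
# Graph is given as adjacency lists: node -> list of outgoing links.

def pagerank(links, damping=0.85, iterations=50):
    nodes = list(links)
    n = len(nodes)
    rank = {node: 1.0 / n for node in nodes}
    for _ in range(iterations):
        new_rank = {node: (1.0 - damping) / n for node in nodes}
        for node, outlinks in links.items():
            if not outlinks:  # dangling node: spread its rank evenly
                for other in nodes:
                    new_rank[other] += damping * rank[node] / n
            else:
                share = damping * rank[node] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
        rank = new_rank
    return rank

graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
ranks = pagerank(graph)
print(max(ranks, key=ranks.get))  # "c": it is linked by both "a" and "b"
```

A page’s rank is the probability a random surfer lands on it, which is exactly the kind of single, graph-wide score the social graph so far lacks.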
Indexing the social graph has obvious similarities to indexing the web, but it poses special problems as well. For one thing, there doesn’t appear to be anything to generalize over the entire social graph, so maybe there’s no single search-level problem to solve: perhaps it’s a collection of specific problems.
Like the larger web graph — the sum total of all web pages and the hyperlinks among them — the social graph is people and the connections among them. A non-virtual social graph has always existed: People get married, have children, have friends, are employed, and so on. More and more non-virtual relationships have an analog in the digital realm, like a “real” friend who is also your Facebook friend. But some relationships exist only in the digital realm — poking someone on Facebook or following someone on Twitter.
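One structural difference from the web graph is worth making concrete: social edges carry types, and some are symmetric (friendship) while others are directed (following). A minimal sketch, with entirely hypothetical names and edge types:

```python
# Illustrative sketch: a social graph as people (nodes) and typed,
# possibly directed relationships (edges). All names are hypothetical.

social_graph = {
    ("alice", "bob"):   {"friend_of"},             # real-world tie, mirrored below
    ("bob", "alice"):   {"friend_of", "follows"},  # bob also follows alice online
    ("alice", "carol"): {"follows"},               # digital-only, one-way tie
}

def relations(graph, source, target):
    """Return the set of relationship types from source to target."""
    return graph.get((source, target), set())

print(relations(social_graph, "alice", "carol"))  # {'follows'}
print(relations(social_graph, "carol", "alice"))  # set(): following isn't mutual
```

The web graph has exactly one edge type (the hyperlink); the social graph’s mixture of typed, directed, and mutual edges is part of why no single relevance score has emerged for it.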
To be fair, there are specialized applications within specific social networks, such as LinkedIn’s “People you may know” feature and Twitter’s (grammatically suspect) “Who to follow.” Other applications of social web data are often domain-specific: last.fm recommends music that you might like, and Netflix recommends movies that you might like by looking at the preferences of millions of people.
What’s needed is something that links up these islands, where we all live part of the time, into a single, contiguous nation.
It won’t be easy. I’d like to offer up four challenges that I find important, though undoubtedly there are more:
What problem are we trying to solve? Search solved the problem of proliferation of web pages that were no longer captured by directories. A good question to ask is: What’s the guiding central problem of the social universe?
A person is the sum of all of their profiles: Identity across social networks must be solved. Linking Facebook, Twitter, Google Reader, LinkedIn, etc. would be invaluable to researchers. Actions across social networks are similar (liking, following/friending, sharing, etc.), so to have a complete list of actions from a single individual across networks would vastly increase the amount of data available from looking at a single social network.
Every user has her own slice of the social graph: No two social graphs look alike, whereas the web graph looks the same for everyone.
Let data be free: Many types of social data are not public or are difficult to get. All Twitter data is only accessible to the select few members of the firehose club. Facebook data is available for only a select few users. Search was made possible by web crawlers and a similar accessibility of data must be in place for the social graph. Of course, accessibility of data brings up lots of privacy concerns.
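The “sum of all profiles” challenge above can be sketched as a merge of per-network action logs under a linked identity. Everything here is hypothetical data; the hard part, as the text says, is building the `identity_links` table in the first place:

```python
# Hypothetical sketch: merging one person's actions across networks,
# assuming cross-network identities have already been linked.

identity_links = {
    "@jdoe": "jane",        # Twitter handle  -> person
    "jane.doe.fb": "jane",  # Facebook handle -> person
}

actions = [
    ("@jdoe", "follow", "nytimes"),         # observed on Twitter
    ("jane.doe.fb", "like", "TechCrunch"),  # observed on Facebook
    ("@someone", "follow", "jane"),         # unlinked account, ignored
]

def actions_for(person):
    """All actions attributable to one person across linked networks."""
    return [(verb, obj) for who, verb, obj in actions
            if identity_links.get(who) == person]

print(actions_for("jane"))  # [('follow', 'nytimes'), ('like', 'TechCrunch')]
```

Even this toy shows the payoff the author describes: two networks’ worth of signals collapse into one richer profile per person.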
So, cool things are being done with subsets of the social graph, but is there going to be one company to rule them all? Put another way, web graph is to Google as social graph is to … what?
Many new players, including my company, are betting on discovery as the answer. Today, discovery is applied to specific genres — restaurants, movies, books, friends — but to give you recommendations, it needs to harness and use a lot of data from the social graph. In theory, once you’ve done one genre well you should be able to do the others.
I’m interested in your opinions. Have I missed a company that’s using the social graph in a really unique way? Perhaps I’m asking for too much? I’m still optimistic: That there will be a Google-equivalent to the social graph and that company will be the next big thing.
-Michael Clausen of Coventry, CT.
"Slacktivism" and "clicktivism" have been used as criticisms to social media activism. The so-called slacker activism theorizes that people on social media platforms only participate in feel-good clicking (they may like something, but have little care for it later), which doesn't cause real change and action in the world. I keep thinking about KONY 2012 and what may happen to the movement this year (will it be a success?).
So then, how effective is social media activism, if at all? Are people who use social media platforms more likely to cause change than those who do not? How should activism campaigns be designed? How can they be more effective in creating change?