This post has moved to my personal blog site. Please read it there.
Today I read an article on opensource.com called "Unschooling is the open source way". I'm fascinated with the subject of alternative approaches to education. I think that humans are naturally curious. We are driven to want to explore and create. Somewhere along the way that behavior is either rewarded and nurtured or discouraged and discarded.
In my youth, people who cared very much for me tried to help me focus my curiosity in specific directions. Some of the things that attracted me were considered "not productive," and I was discouraged to some degree from pursuing them. I was never a very good follower, more of a lone explorer, so I never fully grasped the so-called "important things," and I never gained a real foothold on what I was naturally wired for. Needless to say, my formal education was a bit of a struggle. Now, in the "real world," I find myself embracing my genuine curiosities and not simply doing what I am directed to do. The result? I'm having a better time than I ever have in my life and applying skills to my job that no one would have thought to suggest.
I think our future world demands that we stay curious. We have moved far beyond the Industrial Age's need for a person who will spend their entire career performing a set of menial tasks. Technology can and should step in for those things because humans are too valuable for them. Unschooling specifically cultivates the curiosity that is natural to children and helps them learn how to learn. Once you know how to teach yourself things then you can go where need takes you.
Here is a brief TED video by Gever Tulley, discussing his approach to teaching life lessons through tinkering:
There is also a transcript of some Q & A with Gever which covers some of this in greater depth.
Our future society demands people who have multiple skills and the ability to gain more when the situation demands it. They need to be driven to move forward despite the obstacles thrown at them by their forebears. Perhaps a healthy dose of unschooling is just the right way to make it happen.
Finding the right tool for big data
There are generally multiple ways to solve a complex problem. The right solution for you will depend upon your skills, your resources and your personality. (Yes, companies and development teams have a personality.) With big data there are a lot of new approaches to thinking about data, which are all very cool, but they can be overwhelming if you are used to approaching things from a certain point of view. Dr. Sherif Sakr examines some of the different ways of working with big data and helps you identify which ones might best leverage your existing skills and understanding.
It's always great to grow and learn new things, but sometimes it's nice to start from the standpoint of familiarity. Personally, I'd probably gravitate to Hive because I like the SQL-like feel. You might be different. Check out his article, "Use SQL-like languages for the MapReduce framework" and give it a rating.
The better to encrypt you, my dear
I've been fascinated with encryption since I first read about secret codes as a kid. PGP (Pretty Good Privacy) and later GPG (Gnu Privacy Guard) are interesting tools that I wish more people would use.
One of the challenges of using encryption casually is that it does require a good deal of number crunching. Would that we could harness more power to help with the encryption. Say! What about the graphics processor? "Protect your data at the speed of light with gKrypt, Part 1" begins a two-part series exploring gKrypt, a tool which employs general-purpose graphics processing units (GPGPUs) for data encryption. This could make it easier for you to secure things from identity thieves and other nosy people. Check out the article and give it a rating.
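If you've never tried GPG yourself, here is a small taste from the command line. This is just a sketch: it uses symmetric (passphrase-only) mode so it works without generating a keypair, the filenames and passphrase are illustrative only, and `--pinentry-mode loopback` is what recent GnuPG (2.1+) needs to accept a passphrase non-interactively.

```shell
# Encrypt and decrypt a file with GnuPG in symmetric mode (no keypair needed).
# Filenames and the passphrase are illustrative placeholders.
echo 'my secret note' > note.txt

# Encrypt: produces note.txt.gpg
gpg --batch --yes --pinentry-mode loopback --passphrase 'example-passphrase' \
    --symmetric --output note.txt.gpg note.txt

# Decrypt: writes the plaintext to stdout
gpg --batch --yes --pinentry-mode loopback --passphrase 'example-passphrase' \
    --decrypt note.txt.gpg
```

Public-key mode (the PGP/GPG usage I mentioned above) works the same way once you have keys, using `--encrypt --recipient someone@example.com` instead of `--symmetric`.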
My public key
If you ever want to exchange encrypted messages with me, here is my public key for email@example.com.
I embedded the search using an iframe so it would always be current. (I had to make a key change the other day.) If you can't see it with your browser, then you can always get it at this URL: http://keyserver.ubuntu.com:11371/pks/lookup?search=chris%40opensourcedude.com&op=get.
I've been getting news here and there about the parting thoughts of Microsoft's software chief, Ray Ozzie. He wrote a blog entry which appears to see a future that is not wrapped up in the PC. I'm still chewing on this one.
My general view of Open Source is that it should take on the things that are so fundamental that they should belong to everyone, or things that are ignored because they can't be sufficiently monetized. (Obviously Open Source moves beyond that scope, and shall as long as there are technical enthusiasts who want to make things better for the sake of having them be better.) As devices like smart phones, and things we haven't even imagined yet, become a part of our daily lives, the landscape changes. Trying to hold onto the old paradigms (did I just use that word?) becomes unreasonable. Things keep moving forward.
Perhaps we really are entering a stage of computing where the hardware and the specific applications become less important than the integration and accessibility. I'd like to believe that Open Source can play a huge role in this by providing a sort of neutral space where everyone can play without having to "lose." Companies could roll their expertise and service into popular and effective projects. This doesn't mean that people won't pay for anything anymore. Many of us choose to pay for service or packaging or expertise in areas where we could probably do it ourselves. Technology will be no different.
I have no grand insights here. Like I said, I'm still chewing on this. It's very interesting, though. Very interesting.
A friend of mine, Neil Gilmore, a talented developer, taught me the phrase "fully buzzword compliant". It's like something in a Dilbert cartoon, where jargon takes on a life of its own. I use it often.
Terms like REST and RESTful come up regularly, but I rarely see them in enough context to be understood by someone who doesn't already know what it means. That's why I was thrilled to see "Understand Representational State Transfer (REST) in Ruby" by M. Tim Jones. The term is even described in the title! If you are working with Ruby I think you'll find this a useful read. Even if you don't use Ruby I think there is information that will help you become more fully buzzword compliant, and be able to better consider the value of the REST architecture for your own projects.
Last week I attended the Austin Forum on Technology session "From Sous Vide to Social Search, How Technology is Changing How We Cook and Eat" at the University of Texas in Austin. The video is not up yet, but I'll post it here when it appears.
We all eat... at least I'm pretty sure that we do. In the United States we have a relationship with food. Restaurants are a big part of Austin culture, and the culture of many cities. "Armchair chefs" have begun to rival "armchair quarterbacks" as people watch the plethora of cooking shows and networks on television. So, it's no wonder that the auditorium for this forum was nearly full (there were easily two hundred people).
The speakers were Addie Broyles and Michael Chu. Addie is a food writer and blogger for the Austin American-Statesman. (Read her blog, "Relish Austin".) She talked about the ways in which technology is changing how we interact with and communicate about food. I mentioned the cooking shows. There are also hundreds of food blogs where people share restaurant experiences, dietary thoughts, nutrition discoveries and personal cooking adventures. Not so long ago there was a designated professional food critic or two who were the official voices of taste. Now, in addition to the bloggers, we have social media, Yelp and other sites where anyone and everyone can publicly share their praise and disdain for their dining experience, and restaurants can publicly respond and react. It's not just dining out that gets treatment though. People share personal recipes, techniques and nutritional ideas. People with food allergies or conditions that require special diets are able to share their discoveries for enjoying food with restrictions.
Of course, technology doesn't just bring us commentary. There are sites and apps devoted to different things that you may need to do with food. Sites like livestrong.com allow you to track your diet and learn more about the nutritional details of food you eat. (I like everything about that site except for their serious deficiency of not having an Android version of their app. They seem to be pointedly supporting everything but Android. Come on guys! Maybe you need IBM Worklight... but I digress.) There are also sites that will let you enter the food that you have and help you come up with recipes that you can make (e.g. supercook.com and myfridgefood.com). One site, eatyourbooks, will let you enter the titles of the cookbooks that you own and it will help you find recipes for available ingredients.
There is even an app which will estimate calories based on a picture of your food. It's currently only available for iPhone, but I'm sure there will be others. Very interesting stuff!
Next it was Michael Chu's turn. He is an engineer and the author of Cooking for Engineers. He was demonstrating a technique called Sous Vide, which is borrowed from a laboratory technique for accurately and evenly heating substances by using a temperature-controlled water bath. (Read Michael's introduction to the concept on his blog.) Michael was passionate about cooking and enjoys a good steak (a man after my own heart). It was clear that many of the attendees were unprepared for Michael's engineering view of cooking, but I was fascinated with how his knowledge of the process could help one predictably and consistently create the perfect 65 ºC boiled egg. Words really fail me on explaining what we got from Michael's presentation. You'll just need to wait for the video to fully appreciate what he did.
It did get me thinking about the ways in which science, engineering and food overlap. The Sous Vide setup that he demonstrated is a tub filled with water. The substance that you want to heat is packaged in some sort of container (except for eggs, which have their own container). A heating unit then gently heats the water until it reaches a designated temperature and holds it there for as long as desired. It is not possible to overheat food in this manner because the temperature cannot go over what you've specified. However, the physics of certain foods causes them to do certain things when heated for an extended period of time. (Michael seemed especially intrigued by the physics of the perfect egg.) Home use of this technique is pretty uncommon, but it is becoming more widely used by restaurants. They can, for example, prepare a container of steaks to a perfect medium-rare temperature and hold them there for hours. Then, when one is ordered, it is seared on the grill to give it the final touch. The result is consistently perfect steaks.
Of course, we already enjoy a number of scientific breakthroughs in our kitchen. Our basic stoves and ovens, refrigerators and such are obvious examples. Some of us still remember what it was like to cook without a microwave oven. Much of the science in food happens behind the scenes though, in the growth, preparation, and transportation of food before it gets to you. Some of this is controversial, and rightly so. We literally are what we eat. It's good to learn more about what is done with your food and what you can do yourself to keep it healthy and tasty.
Video coming soon.
I was poking around through some resources and came across this website called worrydream.com. You really need to take a look at it. I'll wait here.
Wasn't that awesome? What an interesting approach to dealing with information! ... extremely visual... extremely interactive...
What makes this especially cool is that this site is driven by OpenLaszlo, the open-source, Rich Internet Application platform. I'm running Firefox on Linux and it worked just fine for me. Of course, there are some issues with a site like this. OpenLaszlo uses the Flash player to operate, and so it is only available in environments where Adobe is supporting a Flash plugin. (For example, the site does not work on my Droid right now.) There are some open-source alternative flash players, but these are fairly fledgling projects and likely suffer from trying to recreate rather than to create. It would be interesting to see if standards around RIA evolve to the point that there are more solid choices in this arena.
Another issue with worrydream.com is accessibility. As a user, I am blessed to be able to ignore such issues. My eyesight is good enough and I can click around with no problem. However, as an editor I've become painfully aware of these issues and the potential impact that they can have for users. As a sort of eye-candy site, worrydream.com is not necessarily intended to be accessible, and someone with special needs may not really be missing out on anything that they would find valuable. However, if this was a commercial site the designer would have been leaving out a chunk of his audience who might provide business. He'd probably be in violation of some laws as well. It's a difficult problem where technology is both a savior and an obstacle.
Yet, I admit that I would really like some of my work to feel more like what I saw on that web site. I like the almost tactile nature of grabbing things. I know that it's probably not for the masses, but I would prefer an environment that wasn't all about shoving a little arrow around the screen all the time.
Is worrydream.com the future? Who knows. The designer has used his skills to help build other OpenLaszlo-based sites. Maybe there is a new world around the corner.
Clearly people did not get as excited as I did about the Bossies, the Open Source Software awards, that I wrote about in my last entry. Perhaps it's just not very compelling, or perhaps there is just a general lack of curiosity in such things.
I've had my world shaken and stirred a little with recent events-- in a good way. The first has been my involvement in developing a Knowledge Path for System Z (mainframes) where I have had to dive a little bit into that mysterious world. I remember when I worked at the Texas Lottery Commission and the mainframe guys were "over there". The operators were pretty decent, but the admins were scary dudes.
Picture a scene from an old Clint Eastwood spaghetti western. The sysadmin is dressed in black, with an ornate, but well-used six gun prominently displayed on his hip. I wander up as a wide-eyed kid dressed like Huckleberry Finn. "How do I learn more about the mainframe?" I would ask.
This is met with a steely-eyed stare as the sysadmin says through clenched teeth, "You don't... and pray you never have to." He then strides away, the wind whipping his long coat around him but miraculously having no effect on his hat. Later, there are gunshots.
It has been very nice to come into contact with much less scary people in the mainframe world, people who are excited about mainframes and who reward curiosity. But mainframe knowledge is still a precious and rare resource, and there are many gatekeepers. It's a shame, because there are many interesting ways in which a mainframe could take the place of a number of computing resources, consolidating them together. Imagine a Bring Your Own Device (BYOD) world where I don't have to worry so much about your device being completely secure because I'm not actually running my software there... I'm providing a central resource and using your device as a fancy terminal. How could that make a difference?
In any case, this is very exciting to me and I'm enjoying the chance to see the outstanding engineering that makes the System Z what it is. It is amazing that people were able to think things through so completely... a vast difference from today's rush to market.
The other thing I am working with is a group of high school students in a security contest called CyberPatriot. The idea is to get kids who are interested in technology to gain a greater appreciation for how computer security works. I'm a mentor in the group, drawn in because of my Linux background. (Apparently the team was hit with an Ubuntu image last year and they were very confused by being met with a console prompt and a blinking cursor.) It's been interesting, but so far all of the samples have been Windows-based... forcing me to dust off some of my brain cells, since I haven't had to administer Windows machines with any seriousness for a while now! (There are advantages to being a long-haired techno-freak.)
One of the things that has intrigued me is the difference between how young people approach technology today and how I remember approaching it in my youth. I suppose that part of working with technology in the Eighties was that you really had to know how to make things work, or they simply didn't. Windows was still a ways off, and the blinking cursor on my Commodore 64 or the school's Apple IIe (or the TRS-80s) gave you no comfort and no clues as to what to do next. You really had to know something about the moving parts. Interestingly, many of those parts are still there, but buried within all the menus and icons.
It intrigues me that some of these students, who are clearly clever and interested in technology, seem to be encountering these moving parts for the first time. Ports and processes were always a part of my computing world; how is it possible to grow up with technology and never meet them? All of them embrace the knowledge eagerly and they are doing great, but it amazes me that one could learn about technology without developing an understanding of how these things work... especially if you are more of a techie type.
Curiosity is one of our most valuable assets as humans. We have always dug deeper as a species, finding out how things work and new ways to apply what we learn. We take things apart. We invent. We misapply what we know in wonderful ways to create new discoveries. It seems to me that some of this curiosity is waning. We seem to be waiting for experts to tell us what to do. Experts are great, but how do you know if they're right unless you've tried on your own?
I encourage everyone to try to dig a little deeper into technology. Don't let anyone tell you that you don't need to understand something and that it will all be handled by "top men", especially in these BYOD days! What you don't know can be used to exploit you in so many ways. Bad guys use it to steal your information and resources. Employers use it to make you give up your Facebook information and to spy on your personal computers and phones. Governments and commercial interests use it to accumulate information about you and game you. I don't mean to be alarmist, and I think that much of this is done with good intentions... but you can't defend yourself or make your own decisions unless you engage a little.
Technology is our servant. We should all be able to take advantage of mainframes or keep our email safe from bad guys. Solutions are there for the using, but we have to be curious and we have to not take "No" for an answer. Go do a search right now for a technical topic that you don't understand but would like to. The first two or three things may be way over your head, but you will find something that introduces it at the right level. (Don't be surprised if some of the better ones are on developerWorks.) Dig, learn, play, ask questions, get answers. You will be amazed at what you can find and do.
Halloween comes only once a year, but you can carry it in your heart all year round. I've always enjoyed the spooky stuff and there are those who really go all out to celebrate that season. As it turns out, people are applying a lot of do-it-yourself technology to make their own spooky effects. Some are just front porch surprises. Others are in very professional haunted attractions. Unless you are extremely industrious and don't need sleep, it's probably too late for you to do much with these ideas this year, but it's never too early to start for next year.
I'm actually thinking about doing something with web cams and a laptop. It could be fun!
Mirror, mirror on the wall...
Mirrors have long played a part in horror tales. Here is a genius way to create your own ghostly mirror effect. With LED monitors as cheap as they are now this is actually not too difficult to manage. Here is video and then a link to the instructions.
One of my most vivid memories from Walt Disney's Haunted Mansion is the singing statues. This enterprising individual has figured out a way to recreate this effect on his porch activated through an arduino circuit!
Of course, not all of these things are incredibly high tech. You could probably accomplish this one over the weekend with a few parts from the discount store. It shows how to set up a poor-person's gobo (a light with a shape in it) using a pen light, a cheap compact from the makeup department and some clip art. How simple is that? Of course, it would be easy enough to make the light something that was not battery operated and even control it through some switching. The mirror technique is handy, though. I'd always seen this done through lensing.
Projection Mapping with 3D Tracing
Of course, if you have the means to do a more sophisticated projection, you can use techniques similar to those used by the Bates Haunt. He talks about Photoshop, but it would be perfectly simple to use the open-source GIMP instead.
Share your finds and projects
That's all I have time to share today. If you know of others, post a link in the comments. Please keep them work appropriate and do-it-yourself.
As you probably remember, I work for IBM. We're a big company and a lot of my work, while satisfying on many levels, is... well... work! I probably deal with many of the same mysteries of the corporate world that you do. From time to time I wonder if I know the people being satirized in Dilbert. After a while it can feel like it's all about the problems and the products and the process of keeping everything going from day to day to day.
Then, some days I get a reminder of how thrilling it is to be around when someone creates the future. We saw some of that when Watson cleaned house against two strong opponents on Jeopardy! (See my blog entry from that time to watch it if you missed it.)
Today I find this story: IBM creates first cheap, commercially viable, electronic-photonic integrated chip
This breakthrough will make for ever faster communication between electronic components, and the technique appears to be affordable enough to actually use. That means more computing power available for media, smartphones, tablets, televisions and everything else which might house computing power. This breakthrough will permit the pushing of much more information at unimagined speeds: terabits per second!
It's unclear as to when this will make its way into marketed products, but it is a huge game-changer. I can't wait to see what happens next!
For the past few days when I went to look at my blog I was greeted with a message telling me that the software was being updated. Actually, since I'm on the inside of IBM, I knew that was going to happen and was not surprised, though I was a little impatient.
The developerWorks community runs on IBM Connections, formerly Lotus Connections, which is an application to design your own community site with, well, all the stuff that's here. (Is it just me or does IBM seem to have a lot of things that were "formerly known as..."?) The down time was to process an upgrade to the latest version of Connections. That is a massive undertaking, much like moving Joomla from 1.x to 2.x.
I'm really intrigued to see how these updates affect things. The previous site had a good deal of customization to make up for the demands of a public-facing community and the unique needs of developerWorks. As everyone who develops and integrates knows, those kind of customizations can create a lot of pain when you move into updates. It's one of the dangers of customizing, but sometimes you need to do it anyway to get what you need.
So far the site seems a lot more efficient, which is good. The editor I'm using to write this is much better than the previous one. We'll see how it goes.
Congratulations to the team who made this transition happen. You made it look easy, even though I know it wasn't.
Favorite free video editors
I'm working on a project to help people learn about doing video blogging and such. Because it's the way I am, I'm encouraging do-it-yourself (DIY) techniques, which includes free and open software. On Linux I tend to use Cinelerra for my editing, though I've recently been playing with OpenShot and even Blender. Unfortunately, of those three only Blender is multi-platform. The others are currently Linux-only. (You can get a live CD/DVD which boots Linux with the editing software, but that's suboptimal for most people.) Avidemux is multi-platform, but I haven't really used it. It seems good for some general cleaning and trimming, but doesn't have anything I've seen in the way of multi-track editing. I've also noted that YouTube now has a video editor, which has a similar philosophy to Avidemux.
In general, I suppose I would point a complete novice who just wanted to cut out the whoopsies to the YouTube editor. But I'm curious about what others have found. Please don't bother with commercial software. It's not hard to find things to buy. It's trickier to find ways to learn.
Blender is awesome
I've been working with Blender a lot more to do some title kinds of work. I haven't done too much with it, but it's extremely powerful once you build the skills. I've been working on adding titles similar to how they did it in the series Fringe-- live 3D elements that are part of the setting. Here's one that I managed.
Some time back-- actually quite a while back-- I wrote a series of articles called the Windows to Linux roadmap. Now that I'm editor of the Linux site on developerWorks, I have to look at these things from a different perspective and it is bittersweet to watch them age. Ubuntu wasn't around at that time, which is my primary environment now. There are also tools that have come along to make management easier when, at the time, Webmin was really the only consistent tool I could find. (Webmin is still around, by the way, and I still might consider it if I was managing servers and needed to help share management with people who didn't have a strong Linux background.)
One of the articles I was looking over today was the one on doing backups. In 2003 the backup landscape was pretty dismal, at least from where I could see it. Were I to write that article today I would have more tools to discuss, my favorite being rsync. Rsync was actually around when I wrote the articles, but it was one of those resources that lurked in the shadows, like so many little tools do. Essentially, rsync is designed to do file duplication, but it tries to be as efficient as possible by only transferring the delta (the changes) in files when it can. It has a number of options and can be set up to do transfers through the network and over encrypted tunnels if desired. I wrote a little script that I run manually whenever I wish to do a backup... though I could run it automatically if I chose... and probably should.
This does a backup to my local USB drive and also does a dump to a network machine, through an encrypted tunnel. This device could be anywhere as long as I could access it over the network, and you'll notice that I am accessing it through an Internet address, so it works when I'm on the road as well. Note also that I'm doing key-based authentication in ssh.
The --exclude-from parameter lets me set up a file containing paths (with wild cards) that I do not want to back up. Things like the Trash, cache files, etc.
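For readers who want to try something similar, here is a minimal sketch of that kind of script. This is not my actual script; the paths, hostname and exclude file below are placeholders, so adjust them for your own system.

```shell
#!/bin/sh
# Sketch of a dual-destination rsync backup. All paths and the hostname
# are hypothetical placeholders, not the author's real configuration.

backup_local() {
    # $1 = source dir, $2 = destination dir, $3 = exclude-pattern file.
    # -a (archive) recurses and preserves permissions and timestamps;
    # --exclude-from skips anything matching a pattern in the file.
    rsync -a --exclude-from="$3" "$1"/ "$2"/
}

backup_remote() {
    # Same idea, pushed through an encrypted ssh tunnel to a remote machine.
    # Key-based ssh authentication is assumed, so no password prompt appears.
    # $1 = source dir, $2 = user@host:path, $3 = exclude-pattern file.
    rsync -az -e ssh --exclude-from="$3" "$1"/ "$2"
}

# Example invocations (hypothetical paths and host):
#   backup_local  "$HOME" /media/usbdrive/backup "$HOME/.backup-excludes"
#   backup_remote "$HOME" me@backup.example.com:backup "$HOME/.backup-excludes"
```

The exclude file itself is just a list of patterns, one per line, such as `.cache/`, `.local/share/Trash/` or `*.tmp`.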
The first backup is a bear because it has to transfer all of the data. After that it's easier because it only addresses changes. Of course, one problem with this is that it doesn't take into account file deletions. rsync can do that, but I found that it defeated the purpose of the backup if I was trying to recover files that I'd deleted accidentally. So, I set up another script that I call cya-purge.sh that handles that sort of clean-up. I run it periodically, when I'm pretty sure that I don't have something I need to restore.
This second script is identical, except for the --delete parameter, which tells rsync to remove files that are no longer on my system.
I agree that my solution is somewhat inelegant, and probably more hands-on than many people would prefer their backup to be. However, at the time that's really what I was looking for and I still enjoy doing it this way. I have a lot of granular control over this and don't have to mess with interfaces or anything like that. It's simple.
Of course, my hairy-man approach to backups is not going to be to most people's taste. For them/you there is duplicity, an elegant front end to rsync that handles bundling files into smaller chunks suitable for storing on remote networks. It also manages the backup, keeping files around for a period of time and then allowing them to leave gracefully... something that I would like to get my own scripts to do when I have time to wrap my brain around it. Duplicity is the default backup solution in Ubuntu, so if you have that turned on, you are using it!
My first experience with duplicity was not great. It spent a few hours doing a full backup of my user directory (gigs and gigs of data) and then deleted it when it was done. I never did figure out why it did that. However, when I recently tried it again through the Ubuntu control panel it seemed to work fine. I would need to do some tinkering to see how best to emulate my current system of dual backups to a local and a remote device, but it might be worth the trouble. I was amused to see, when I looked at the settings to refresh my memory, that the automatic backup for today had already occurred without my noticing. That's a good sign!
Of course, there are a number of backup solutions that have evolved over the last nine years or so since I penned-- or should I say keyboarded-- that article. Notable ones are Bacula, fwbackups and Amanda. At some point I may dig into them a little more, but in the meantime you will probably enjoy what you can do with rsync. I should point out that there are ways to use rsync in Windows as well. Take a look at this article if you want to explore that.
More and more people are living a portion of their lives in the virtual world of the Internet. (Some, a very large portion. Please don't forget your hygiene.) Not surprisingly, this is creating an atmosphere where people want to exert controls on the community to protect children, promote the public welfare and all of those other things that people say they want to achieve when they want to control things. Here is an example of the kinds of things that people are exploring: "China, Russia and Other Countries Submit the Document of International Code of Conduct for Information Security to the United Nations"
I suppose that my first question is "Can they do it?" I know that there will be laws and law enforcement, and some people could be hit really hard if the authorities decide to bring out the "big guns". However, so much of what happens on the Internet is driven by private providers and personal devices. Sure, there are rules about the telephone lines and the airwaves, but technical people find ways around existing limitations all the time. If we really don't want it, could they really squash things?
How much control should the government have over what is done through the Internet? And whose government? We already seem to have a lot of trouble isolating information, because what is taboo in one area of the world is accepted in another. In the past, geographic isolation shielded people from things that they weren't supposed to know... but that is no longer the case. Does the Internet go beyond any single governing body? Can it be addressed directly by the people who use it?
I really don't have answers to these questions. They are things I am pondering.
Vote now? I thought the election was over!
There might be a test of some of these questions happening right now. If you are a user of Facebook you may have heard that they are looking to make some significant changes to their rules. I won't express my opinion on the changes because that's not the point here. After a lot of discussion on the matter, Facebook appears to be opening this up to a vote. If 30% of the users participate in this week-long vote, the result will be binding. If fewer than 30% participate, then it will be advisory.
Let's look at the numbers for a moment, though. Facebook claimed to have 1.01 billion users as of September 2012; 30% of that is roughly 300 million people. In 2011 the population of the entire United States was estimated at about 311 million people, and we only get about 60% of them to vote in a presidential election! Of course, Facebook's population is global, so the pool is larger, but the numbers are still staggering. As of this writing the voting hasn't even reached 1%. (In a presidential election they describe numbers like that as "wasting your vote".)
Other thoughts come up. There seems to be some fuzziness about what is considered the real population. Are they going to count 30% of so-called active users, or of the total population? I know of a few dead people who still have Facebook accounts because they were never removed. Do dead people count as part of the voting population? How could that possibly be determined? There are also many people who have accounts that they created long ago (or that were created for them by enthusiastic friends) who just never got into Facebook. Their account is just a row in a database somewhere, with no participation and no stake in the changes. Are they counted in the voting population as well?
My prediction is that the numbers will fall far below the needed amount. I will be surprised if there is even a 1% turnout in this election. Right now the vote is overwhelmingly in favor of keeping the rules as they are. We'll see how that progresses. If the vote is advisory, I would not expect Facebook to give it much weight. They've clearly decided that these changes are beneficial (to someone) and I would expect them to say "Thanks for the advice" and then move forward. This would be disappointing if the vote was largely against it.
If you want to participate in this experiment, the link is open until the 10th of December: Facebook Governance Vote. Bring a few million of your friends!
What about the rest of it?
Obviously, there is a lot of Internet outside of Facebook (which is probably a good thing). The real question is not about voluntary online services but about things like putting up web sites, doing commerce, and using the Internet for your own things in your own ways. Do we need a list of regulations deciding what may and may not be done, or do we need better ways to self-select and self-isolate as we choose? I think that these kinds of technology are becoming fundamental to how we do things. Knowing how to search for what you want on the Internet may be right up there with being able to balance a checkbook as far as life skills are concerned. Being able to keep the bad guys away from your information is on par with knowing whether or not you might be in a dangerous parking lot. How do we promote informed usage and a little self-reliance? Am I just overly skilled here, failing to appreciate how hard it all is? I honestly feel like there are oceans of information about these technologies that I don't understand, but I don't feel lost.
Ultimately, someone will start pushing on all of our uses of the Internet because something will make them uncomfortable... and some people react to discomfort by trying to eradicate what troubles them. When people are largely alike this works. When you have global diversity it's going to be pretty tricky. I don't know where it's headed.
But... again... does it matter? They can make all the rules they want, with fancy kill switches and task forces and scrutiny, and it may never be enough to catch up with what people want to do. Does technology become the real equalizer? The real global world?
All I can say, kids, is it's going to get weirder before it gets better.
My next proper entry in this series of posts (am I doing a series now? How utterly corporate!) on what I've been doing with open source in the small non-profit office of a church will continue with some of the other tools and techniques that we've used behind the scenes. However, my last one actually prompted some questions, so I thought that I'd take a few minutes to address those before preparing for the holiday coming up. (Why do holidays always end up giving us more work?)
I was generally pleased with what I could do with Clonezilla. There is some good documentation available on the project web site, but I also found a good tutorial that gives a step-by-step demonstration of what it is like to work with Clonezilla. Within the project web site you'll find all the information that I used to create a bootable USB key that contained the full Clonezilla as well as the base image.
Later on, I'm hoping to use those same ideas to make one that is fully automated... boot from the key and it re-images the C drive, no questions asked... plus some PXE booting that I'll combine with wake-on-lan to automate updates we may do down the road. These are pretty techie projects, not designed for non-experts, but that's really the point. I'm not trying to eliminate the expert; I think that environments running without an expert available deteriorate into chaos because no one pays attention to the warnings of disaster. Rather, I'm trying to use these tools to make it easier for an expert to volunteer and help out without having to go "in house." A team of technical people can cooperate to keep things running reasonably smoothly, each doing a little here and there within their schedules.
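The wake-on-lan part, by the way, is less magic than it sounds. A machine's network card listens for a "magic packet": six 0xFF bytes followed by its MAC address repeated sixteen times, usually broadcast over UDP port 9. Here is a minimal Python sketch of that idea (the MAC address below is a placeholder, not one of our machines); the wakeonlan command-line tool does essentially the same thing:

```python
import socket

def magic_packet(mac: str) -> bytes:
    """Build a wake-on-lan magic packet: six 0xFF bytes, then the
    target's MAC address repeated sixteen times (102 bytes total)."""
    raw = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(raw) != 6:
        raise ValueError("expected a 6-byte MAC address")
    return b"\xff" * 6 + raw * 16

def wake(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Broadcast the magic packet on UDP port 9, the conventional WOL port."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(magic_packet(mac), (broadcast, port))
```

Calling something like `wake("00:11:22:33:44:55")` from any machine on the LAN should power up the workstation with that MAC, assuming wake-on-lan is enabled in its BIOS and network card.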
Other Open Source Solutions
Someone asked about open source solutions for things such as Point of Sale. This, and other areas such as CRM, are important applications that need to work well for an organization. I am absolutely not saying that open source solutions don't work well. I'm just suggesting that when you select a solution in these areas that it is a much larger commitment than a web browser or word processor. Your business data and practices will be wrapped around this choice. It's not a choice to be entered into lightly.
I'm a huge fan of open source solutions. I generally use all open source for solving my own problems. However, there are times when I hesitate to recommend that course to others. Why? Well, I'm a techno-geek. I love to play with these things, explore them, and make them work. Decades of that attitude have made me pretty flexible and adaptable; I don't let things like mysterious error messages stand between me and my work. However, I deal with users-- volunteer users who could be at home doing things with their families-- who experience a sort of panic when technical things happen. They may want someone to call to answer questions. That person may become me, and that can snowball into a lot of volunteer time rather than the time I am paid for. That's the sort of scenario that drives volunteer technical expertise away. If the open source option is not stable, or is vastly different from what people might be used to from their commercial background, then I would hesitate.
Another issue which can occur in these situations is that there may be specific business requirements-- maybe not within the organization, but in its interactions with other organizations-- that demand certain protocols or formats. This used to be a big issue with documents going to printers: they needed specific file formats, which were generally tool-driven. Much of that has changed over time with things like the openness of PDF, which provides a neutral format that many tools can produce. You will need to know what the real business needs are for your organization and make sure that there are no critical business functions which demand specific commercial tools. As much as we might like to keep things "free and easy," these requirements may trump the open ideals. The good news is that good application of open solutions where appropriate will often free up funds which can be applied to those commercial tools. Continued evolution of open source may mean that "no" really means "not right now." It may be that down the line interoperability standards put an open source solution back on the table. Keep an open eye and an open mind.
Having commercial software as part of your environment doesn't necessarily crush the work that you've done with open source. You should try to select solutions that fit the environment you want to create rather than allowing them to dictate the environment you must have. I like the stability of Linux running on servers in our environment. (More than three years without a single server issue, and 99% of the support done completely remotely.) I resist solutions that could not include a Linux server. It's always a battle, because most people who suggest solutions have a pretty narrow view of operating environments. However, in each case we've been able to find an approach which gave everyone what they needed.
I will say that one of the things that I like about IBM commercial solutions (and those of our partners) is that they tend to exist in a multi-platform world. If you can buy (or get donated) an IBM-based commercial solution then it will likely fit into an environment that has a lot of open source and play nicely. (That has been getting better and better over the last ten years and continues to improve.)
Let's say, though, that you've considered your options. You have determined that a commercial answer is not needed, or not available because of resources. You're going to go open source. There is no magic way to find the right package. Quite honestly, the way I begin such a search is to go to Google and enter "open source XXXX", where "XXXX" is the function I'm looking for. The most popular projects for that function will come up at the top of the search. Then I dig through them. I look at the features and functions. I look through the forums for complaints. Lots of discussion entries that say things like "I reported this bug six months ago! When is it going to get fixed?" are dead giveaways. I tend to favor applications that run in Linux but could run in Windows or Mac; those seem to me to be the best thought out, built on the most stable technologies. I look for things that use open data formats like XML and standard SQL servers. If something seems to have a proprietary approach to holding the information then I tend to back away-- I may need to rescue some of that information at a later time. I want a level of transparency. If you establish your criteria, test for your requirements, and are willing to walk away from applications that don't fit, then you should be able to find good matches.
So, why am I not providing a list of recommended applications? Well, I think that this is an area where there is a lot of subjectivity. My perspective of which one is better doesn't really matter because I'm not really using some of them right now. If I deploy one to solve a problem I'll write about it, but until then they are all candidates. I would rather talk about what did work for me rather than what should work for you.
One thing I will say is that at this stage in open source evolution, you take on a level of risk when you invoke an open source solution. It's less from the software itself which, as I said before, tends to be pretty good. The risk is more from the politics and societal view of open source. I think I said before that in my project there are knowledgeable people who seem to withhold their Windows expertise on the workstations because we won't run Windows servers. Non-profit environments can get pretty political, and you may need to exert a good deal of leadership to make these things work. Some of the answers are not obvious. I highly recommend that you associate yourself, either in person or virtually, with some community areas. There are two good groups right here on My developerWorks:
IBM and Open Source, Open Standards, Open Computing - people who work with IBM environments and open source.
Real world open source - a group started by Yours Truly to help gather people who are trying to make this stuff work.
I'm on vacation for a bit, but when I return we'll get back to the church office and how we're using wakeonlan, ssh and vnc to remotely support workstations.