Tony Pearson is a Master Inventor and Senior IT Architect for the IBM Storage product line at the
IBM Executive Briefing Center in Tucson, Arizona, and a featured contributor
to IBM's developerWorks. In 2016, Tony celebrates his 30th anniversary with IBM Storage. He is the
author of the Inside System Storage series of books. This blog is for the open exchange of ideas relating to storage and storage networking hardware, software, and services.
(Short URL for this blog: ibm.co/Pearson )
My books are available on Lulu.com! Order your copies today!
Safe Harbor Statement: The information on IBM products is intended to outline IBM's general product direction and it should not be relied on in making a purchasing decision. The information on the new products is for informational purposes only and may not be incorporated into any contract. The information on IBM products is not a commitment, promise, or legal obligation to deliver any material, code, or functionality. The development, release, and timing of any features or functionality described for IBM products remains at IBM's sole discretion.
Tony Pearson is an active participant in local, regional, and industry-specific interests, and does not receive any special payments to mention them on this blog.
Tony Pearson receives part of the revenue proceeds from sales of books he has authored listed in the side panel.
Tony Pearson is not a medical doctor, and this blog does not reference any IBM product or service that is intended for use in the diagnosis, treatment, cure, prevention or monitoring of a disease or medical condition, unless otherwise specified on individual posts.
I'd love to hear from you (I post letters from authors!) about how you put the blook together. Many folks have cut and pasted from the blog page into a word processor. Others have simply backed up their blogs, then cut and pasted. Some folks had the foresight to compose their posts in a word processor before posting!
Anyway, I'd like to know whatever ins and outs you'd like to share. Thanks.
Well Cheryl, I couldn't find any email address to send you a response, so I decided to post here instead and post a trackback on your blog.
Software: Office 2003 version of Microsoft Word on Windows XP system
Front matter: Title, Copyright, Dedication, Table of Contents, Foreword, Introduction
Back matter: Blog Roll, Blogging Guidelines, Glossary, Reference table, What people have written about me and my blog
According to Lulu, you could use OpenOffice instead with RTF files. I didn't try that. I did try using CutePDF to upload ready-made PDFs, but that didn't work. I also tried saving text in PDF format on my Mac Mini running OS X 10.4 Tiger, but Lulu didn't like that either. IBM now offers a free download of [Lotus Symphony] that might be an alternative for my next book.
For my blook, the "Blog Roll" serves instead of a more formal [Bibliography]. I could have also included online magazines and other web resources.
Decision 2: Chapter Configuration
I reviewed other blooks to see how they were organized. I thought I might organize the blog posts by topic or category, but all the blooks I looked at were strictly chronological, oldest post first. This, of course, is exactly the opposite of how they appear in a web browser. I decided to keep things simple, with just 12 chapters, one for each calendar month.
Each chapter was separated by a section break with unique footers, starting on odd page number. The footers have the page numbers on the outside edges, so that even pages had numbers on the left, and odd pages on the right. I also added the name of the chapter and the book, like so:
--------------------------------|    |----------------------------
40 ................December 2006|    |Inside System Storage.... 41
This was a lot of work, but makes the book look more "professional".
Decision 3: Cut-and-Paste
People have asked me why it took three months to put my blook together, and I explained that the cut-and-paste process was manually intensive. My posts are either HTML entered directly into Roller webLogger, or typed in HTML on Windows Notepad and cut-and-pasted over to Roller later. I have access to the HTML source of each post, as well as how it appears on the webpage, and tried cut-and-paste both ways. Copying the HTML source meant having to edit out all the HTML tags. I hadn't even looked into the idea of "backing up" all the entries through Roller, but they would probably have been HTML source as well.
It turned out that copying the webpage directly from the browser was better: it retains more of the formatting, and automatically eliminates all of the pesky HTML tags. I wanted the printed version to resemble the web page version.
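For anyone automating the same chore today, the tag-stripping step is easy to script. This is just an illustrative sketch (I did everything by hand in Word), using Python's standard-library html.parser; the sample post text is made up:

```python
from html.parser import HTMLParser

class TagStripper(HTMLParser):
    """Collect only the text content of a post, discarding all HTML tags."""
    def __init__(self):
        super().__init__()
        self.parts = []

    def handle_data(self, data):
        # Called only for text between tags, never for the tags themselves
        self.parts.append(data)

def strip_tags(html):
    stripper = TagStripper()
    stripper.feed(html)
    return "".join(stripper.parts)

post = '<p>IBM announced the <a href="/ds8000">DS8000</a> series.</p>'
print(strip_tags(post))  # IBM announced the DS8000 series.
```

This keeps the words but, unlike copying from the browser, it also throws away the formatting, so it matches the "text only" paste rather than the rich-paste route I actually took.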
Microsoft Word renders all hyperlinks as bright blue underlined text, which I didn't like, so I removed all hyperlinks to avoid having to pay extra for "colored pages". This can be done manually, one by one, or by pasting with the "text only" option, but that removes all the other formatting as well. (Specifying a black-and-white interior on Lulu might have converted all of these automatically to greyscale, so it might have been safe to leave them in, which I probably would have done if I wanted an online e-book version with the links active, ... oh well)
To indicate where the hyperlinks would have been, I wrapped all the linked text in [square brackets]. I have now gotten in the habit of doing this for future blog posts, so if I ever make another book, it will cut down the cut-and-paste work.
Some of the items I linked to posed a problem. I had to convert YouTube videos to flat images of the first frame to include them in the book. Older links were broken, and I had to find the original graphics. I also sent a note to Scott Adams about the use of one of his Dilbert cartoons.
I decided to also cut-and-paste my technorati tags and comments. For comments I made myself, I labeled them "Addition" or "Response". A few people did not realize that I was "az990tony" making the comments as the blog author, so I changed them all to say "az990tony (Tony Pearson)" to make this clearer, and now do this on all future blog posts to minimize the work for my next book.
Because I used a lot of technical terms and acronyms, Microsoft Word actually gave me an error message that there were so many grammatical and spelling errors that it was unable to track them all, and would no longer put wavy green or red lines underneath.
I did all the cut-and-paste work myself, but since the website is publicly accessible, I could have gotten someone else to do this for me. Had I read Timothy Ferriss' book The 4-Hour Workweek sooner, I might have taken his advice on [Outsourcing the project to someone in India]. I might consider doing this for my next book.
Decision 4: Numbering the Posts
I decided I wanted to standardize the title of each post. The date was not unique enough, as there were days that I made multiple posts. So, I decided to assign each a unique number, from 001 to 165, like so:
2006 Dec 12 - The Dilemma over future storage formats (033)
Posts that referred back to one of my earlier posts within the book had (#nnn) added so that readers could jump back to them if they were interested. This eliminated trying to keep track of page numbers.
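As a sketch of the naming convention (the date, title, and post number here are taken from the example above; the function name is my own invention), generating such standardized titles is a few lines of Python:

```python
from datetime import date

def blook_title(post_date, title, number):
    """Render a standardized blook post title: 'YYYY Mon DD - Title (nnn)'.

    The three-digit zero-padded number makes every title unique even on
    days with multiple posts, and gives (#nnn) cross-references a stable
    target that does not depend on page numbers.
    """
    return f"{post_date.strftime('%Y %b %d')} - {title} ({number:03d})"

print(blook_title(date(2006, 12, 12), "The Dilemma over future storage formats", 33))
# 2006 Dec 12 - The Dilemma over future storage formats (033)
```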
Decision 5: Adding behind-the-scenes commentary
One of the reasons I rent or buy DVDs is for the director's audio commentary and deleted scenes. These extras provide added value over what I saw in the movie theatre. Likewise, 80 percent of a blook is already out in public for reading, so I felt I needed to provide some added value. At the beginning of each month, I describe what was going on behind the scenes, and then in front of specific posts, I provided additional context. This could be context of what was going on in the blogosphere at the time, announcements or acquisitions that happened, what country I was blogging from, or what unannounced products or projects were being developed that I can now talk about since they are announced and available.
To distinguish these side comments from the rest of the blog posts, I decorated them with graphics. Searching for copyright-free/royalty-free clip-art, graphics, and photos that represented each concept was time-consuming. I shrunk each down to about 1 inch square, and changed them from color to greyscale. (Lulu's conversion to PDF probably would have automatically converted the color graphics to greyscale for me, in which case leaving them in full color might have been nice for an e-book edition, ... oh well)
I completed each chapter one at a time. So, for each month, I cut-and-pasted all the blog posts, tags, and comments, then fixed up and numbered all the post titles, then added all the behind-the-scenes commentary, and cleaned up all the font styles and sizes. I recommend you do this at least for the first chapter, so you can get a good feel for what the finished version will look like.
Decision 6: Adding a Glossary
I sent early copies of the book to five of my coworkers knowledgeable about storage, and five local friends who know nothing about storage.
Some of my early reviewers suggested having an index, so that people can find a specific post on a particular topic. Others suggested I spell out all the acronyms that appear everywhere and put that into the Reference section, rather than at each and every occurrence in the book itself. Both were good ideas, and my IBM colleague Mike Stanek suggested calling it a GOAT (Glossary of Acronyms and Terms). Acronyms are spelled out, and terms or phrases that need additional explanation have a glossary definition. For each item, I list the post or posts that use that term. Some terms are covered in dozens of posts, so I tried to pick the five or fewer most pertinent.
The glossary was far more time-consuming than I first imagined, with over 50 pages containing over 900 entries. I struggled deciding which terms and acronyms needed explanation, and which were obvious enough. On the good side, it forced me to read and re-read the entire book cover to cover, and I caught a lot of other mistakes, misspellings, and formatting errors that way. Also, I have a large international readership on my blog, so the glossary will help those for whom English is not their native language, and those readers who are not experts in the storage industry.
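For illustration only (my actual glossary was built by hand, and "most pertinent" was a judgment call, not a formula), here is a small Python sketch of the GOAT idea: map each term to the posts that mention it, capped at five. As a crude stand-in for pertinence, this version simply keeps the five earliest posts; the sample posts are made up:

```python
def build_goat(posts, terms):
    """posts: {post_number: post_text}; terms: acronyms/terms to index.

    Returns {term: list of up to five post numbers mentioning the term},
    keeping the earliest posts as a stand-in for the 'most pertinent'.
    """
    goat = {}
    for term in terms:
        hits = sorted(n for n, text in posts.items() if term in text)
        goat[term] = hits[:5]
    return goat

posts = {
    33: "The dilemma over SAN and NAS storage formats",
    41: "More on SAN fabrics",
    57: "Tape is not dead",
}
print(build_goat(posts, ["SAN", "NAS"]))  # {'SAN': [33, 41], 'NAS': [33]}
```

A real pass would also need word-boundary matching (so "NASA" doesn't count as "NAS"), which is exactly the kind of judgment that made the glossary so time-consuming.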
Decision 7: Designing the Covers
Up to this point, I had been printing early drafts with simple solid color covers. Lulu hasthree choices for covers:
Just type in the text, upload an "author's photo", and choose a background color or pattern
Upload PNG files, one for the front cover and one for the back cover, and choose the text and color of the spine.
Upload a single one-piece PDF file that wraps around the entire book.
I had no software to generate the PDF for the third option, so I decided to try the second option. My first attempt was to format the front title page in Word, capture the screen, convert it to PNG, and upload it as the front cover. I did the same for the back cover, with a small picture of me and some paragraphs about the book.
I chose a simple, straightforward title on purpose. Thousands of IBM and other IT marketing and technical people will be ordering this book and submitting their expenses for reimbursement as work-related, and I didn't want to cause problems with a cute title like "An Engineer in Marketing La-La Land".
The next step was to use [the GIMP], a GNU image manipulation program similar to Photoshop, to add a cream-colored background, a slanted green spine, and some graphics that we had developed professionally for some of our IBM presentations. I learned how to use the GIMP when making tee-shirts and coffee mugs for our [Second Life] events, so I was already familiar with it. For new blook authors, I suggest they learn how to use this for their covers, or find someone who can do it for them.
I did the paperback version first, and once done, it was easy to use the same PNG files forthe dust jacket of the hardcover edition, adding some extra words for the front and back flaps.
The adage "Don't judge a book by its cover" seems to apply to everything except books themselves. The book cover is the first impression, online and in a bookstore. I have seen people pick books up off the shelf at my local Barnes & Noble, read the front and back covers, peruse the front and back flaps, and make a purchase decision without ever flipping a single page of the contents inside. From an article on Book Catcher, [SELF-PUBLISHING BOOK PRODUCTION & MARKETING MISTAKES TO AVOID]:
According to selfpublishingresources website, three-fourths of 300 booksellers surveyed (half from independent bookstores and half from chains) identified the look and design of the book cover as the most important component of the entire book. All agreed that the jacket is the prime real estate for promoting a book.
While many struggle to find the right title and cover art, I think it is interesting that Lulu lets you post the same book with slightly different titles and covers, each as a separate project, letting market forces decide which one people like best. This is a common practice among market research firms.
Decision 8: Finding someone to write the Foreword
With the book nearly done, I thought it would be a nice touch to have an IBM executive write a Foreword at the front of the book. Several turned me down, so I am glad I found a prominent worldwide IBM executive to do it. I should have started this process sooner, as she wanted to read my book in its entirety before putting pen to paper. I had not planned for this. I was hoping to be done by the end of October, but waiting for her to finish writing the Foreword added some extra weeks. Next time, I will start this process sooner.
Decision 9: Printing Early Drafts
You need to have Lulu print at least one copy to review before making the book available to the public, and it doesn't hurt to order a few intermediate draft copies to make sure everything looks right. However, from the time I ordered it on Lulu to the time it was in my hands was over two weeks with standard shipping, so I needed a way to print drafts to look at in between.
To avoid wear-and-tear on my color ink-jet printer, I went and bought a large black-and-white [Brother HL-5250DN] laser printer. Rather than buying specialty 6x9 paper, I used standard 8.5x11 paper with the following 2-up duplex method:
Upload the DOC file to Lulu, and get it converted to PDF
Download the resulting PDF from Lulu back to your computer
View the PDF in Adobe Reader, and print it using 2-up "Booklet" mode.
For example, if you print 60 pages in booklet mode, it prints two mini-pages on the front side, and two more mini-pages on the back side, of each sheet of paper, resulting in 15 standard 8.5" x 11" sheets that can be folded, stapled, and read like a mini-booklet. My entire blook could be printed as seven of these mini-booklets, saving paper and giving me a close approximation of what the final book would look like. Each mini-page is 5.5" x 8.5", so just slightly smaller than the final 6" x 9" form factor. I found that 60 pages/15 sheets was about the maximum before it becomes hard to fold in half.
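Adobe Reader works out the booklet page ordering for you, but the arithmetic behind it can be sketched. Assuming the standard saddle-fold imposition (pad the page count to a multiple of four, then pair the outermost and innermost remaining pages on each sheet), a Python illustration:

```python
import math

def booklet_imposition(pages):
    """Return the 2-up page order for a fold-in-half booklet.

    Each sheet is (front_left, front_right, back_left, back_right).
    Four mini-pages fit per sheet, so 60 pages need 15 sheets.
    """
    n = math.ceil(pages / 4) * 4          # pad with blanks to a multiple of 4
    sheets = []
    for k in range(n // 4):
        # Outermost sheet carries the last and first pages; each inner
        # sheet moves two pages in from both ends.
        sheets.append((n - 2 * k, 2 * k + 1, 2 * k + 2, n - 1 - 2 * k))
    return sheets

# An 8-page booklet needs 2 sheets:
print(booklet_imposition(8))        # [(8, 1, 2, 7), (6, 3, 4, 5)]
print(len(booklet_imposition(60)))  # 15
```

Stack the printed sheets in order, fold the stack in half, and the pages read 1, 2, 3, ... through the booklet.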
So, if I had to do it all over again, I might have chosen 11pt Garamond (the default), or changed the default to 11pt Book Antiqua up front, so as not to have to spend so much time converting the fonts. I might have left out the glossary. I might have left in all the hyperlinks and graphics in full color for a separate e-book edition. And I definitely would have looked for an author for my Foreword much earlier in the process.
I didn't plan to write a blook when I started blogging. I have started putting [square brackets] around all my links. I have started putting "az990tony (Tony Pearson)" on all my comments. I had assumed that people were jumping to all the links I provided in context, but I learned that a blog post has to stand on its own, so now I make sure that I either paraphrase the important parts, or actually quote the text that I feel is important, so that the blog post makes sense on its own. This is perhaps good advice in general, but even more important if you plan to write a blook later.
Lastly, I decided up front to write blog posts that were 500-700 words long, about the average length of magazine or newspaper articles. In my blook, the average is 639 words per post, so I hit that goal. I have seen some blogs where each post is just a few sentences. Maybe they are posting from their cell phones, or don't have time to think out a full thought, but who wants to read a year's worth of [twitter] entries?
Well Cheryl, I hope that helps. If you need any more, click on the "email" box on the right panel.
IBM doesn't publicly report subset numbers for individual product lines, but we are growing, albeit with single-digit growth, on the high-end with our IBM System Storage DS8000 and DS6000 series products. Single-digit growth is not "booming", but it is what we expected in this space, so it is not as if we are "feeling the chill" as Robin stated. Obviously, if the U.S. market overall is doing poorly, then it must be from something else. IBM's success appears to come from organic growth in our Asia and Europe markets, and from taking market share away from the top two contenders, EMC and HDS. Here are my thoughts on why:
EMC is remodeling its kitchen
Not happy with its status as the #1 disk hardware specialty shop, EMC is admirably trying to redefine itself as an ["information infrastructure"] company, buying up software companies and introducing new storage services. [Byte and Switch] reports on EMC's recent acquisitions:
EMC is the latest vendor to pin its colors to the SaaS mast, revealing its plan to offer SaaS-based archiving services during its recent Innovation Day in Boston.
EMC gave another clear indication of its SaaS intentions last month, when it spent $76 million to acquire online backup specialist Mozy.
IBM has offered [Managed Storage Services] for years through our Global Technology Services (GTS) division. Gartner recognized IBM as the #1 leader in storage services, with three times more revenue than EMC in this space.
As with a restaurant that is remodeling its kitchen, EMC can expect a temporary drop in revenue. If it is done right, customers will come back to a bigger, brighter restaurant. If not, the restaurant re-opens as a much smaller, lesser version of itself. Recent events this year might incent EMC to get that kitchen done quickly:
A recent [class-action lawsuit] might result in EMC's "86 percent male" sales force going to sexual harassment sensitivity training, taking time away from selling high-end storage arrays in the field. Analysts consider "high-end" boxes as those costing over $300,000 US dollars. Because of the money involved, there is a lot of competition for high-end storage, so face-to-face time with prospective customers is crucial to making the sale. Anytime any vendor is mentioned in a lawsuit (and certainly IBM has had its share in the past, as Chuck Hollis correctly points out in the comment below), priorities get shifted, and there is a potential dip in revenues.
Dell acquired EMC's rival EqualLogic. Dell resold EMC midrange storage, like CLARiiON, so this should not impact EMC's high-end storage sales. While Dell will be allowed to sell EMC until 2011, this new acquisition might mean Dell leads with the EqualLogic offerings, and that could potentially reduce EMC revenues in the midrange space.
IBM went through a similar phase in the 1990s, redefining itself from an "IT Technology" company into a "Systems, Software and Services" company. These transitions can't be done in a quarter, or even a year; they take several years. IBM lost business to EMC in the 1990s, but came back with a stronger portfolio in the 2000s, so IBM's kitchen remodeling effort appears to be paying off. We will see what happens with EMC in a few years.
HDS puts on the white lab coats
Meanwhile, HDS appears interested in taking over as the #1 disk hardware specialty shop. For years, Hitachi was the stereotypical JCM (Japanese IBM-compatible manufacturer) that made well-engineered "me, too" storage arrays. They would see what innovators like IBM and EMC were doing, and copy them. Recently, however, they seem to have changed strategy, introducing new features and functions on their high-end USP-V device, like [Dynamic Provisioning].
The problem is that customers don't want to feel like [Guinea pigs] in an experimental lab, especially with mission-critical data that they trust to their most-available, most-reliable high-end disk storage systems. Like IBM, EMC, and the rest of the major storage vendors, Hitachi has top-notch engineers making quality products, but new features scare people, so there is a lag in the adoption of new technologies.
In our youth, we might have preferred beer with recent born-on dates, and tequila aged less than 90 days. But as we get older, we switch to drinks like wine and whiskey, aged years, not weeks. The same is true for the marketplace. New start-ups and other "early adopters" might be willing to try fresh new features and functions on their storage systems, but more established enterprises prefer storage with more mature and stable microcode. Storage admins want to leave at the end of the day knowing that the data will still be there the next morning. In tough financial times, many established companies want the technological equivalent of ["comfort food"]: nothing spicy or exotic, but simple, hearty fare that fills the belly and keeps you satisfied.
Recognizing this, IBM often introduces new features and functions on its midrange lines first, and positions them accordingly. Once customers are comfortable with the concepts, IBM can then consider moving them into the high-end lines. For example, dynamic volume expansion was introduced on the DS4000 and SAN Volume Controller first, and once proven safe and effective, brought over to the DS8000 series. This strategy has served us well.
Well, those are my theories. If you have a different explanation of why storage vendors are not doing well in the high-end, drop me a comment!
Continuing my business trip through Canada, an article by Richard Blackwell titled [The Double Bottom Line] in yesterday's Globe and Mail newspaper caught my attention. Here is an excerpt, citing Tim Brodhead, president of the J.W. McConnell Family Foundation in Montreal:
The bottom line for any business is making a profit, right?
But how about considering a different, or additional bottom line: helping make the world a better place to live in.
That's the radical proposition underlying the concept of "social entrepreneurship," the harnessing of business skills for the benefit of the disadvantaged.
Young investors, in particular, now want their investments to produce both financial and social returns, he noted.
Until recently, "we could either make a donation [to a charity] and get zero financial return, or we could invest and get zero social return." People now want more of both, but rules governing charities and business make that tough to accomplish.
One stumbling block is the imperative - entrenched in corporate law - that managers and directors of for-profit companies have a fiduciary duty to maximize profits. That structure is a brick wall that limits the expansion of social entrepreneurship, Mr. Brodhead said.
Some companies have embraced the new paradigm of a double bottom line, even if they are uncomfortable with the "social entrepreneur" label.
This fiduciary duty to maximize profits is discussed in the 2003 documentary [The Corporation]. However, some organizations are now trying to align their goals, finding ways to benefit their investors as well as society overall. For example, the organization [ONE.org] helped launch [Product (RED)]:
If you buy a (RED) product from GAP, Motorola, Armani, Converse or Apple, they will give up to 50% of their profit to buy AIDS drugs for mothers and children in Africa. (RED) is the consumer battalion gathering in the shopping malls. You buy the jeans, phones, iPods, shoes, sunglasses, and someone - somebody’s mother, father, daughter or son - will live instead of dying in the poorest part of the world. It’s a different kind of fashion statement.
The company, which has operated in Africa for nearly six decades, expects to increase its investment by more than $US120 million (more than R820 million) over the next two years. In the coming year, IBM expects to hire up to 100 students from Sub-Saharan universities to meet the growing demand in services, global delivery and software development.
"The Sub-Saharan African market is poised for double-digit growth flowing from the development and expansion of telecommunications networks, power grids and transport infrastructure," said Mark Harris, Managing Director, IBM South and Central Africa. "Private and public sector investment in the region is transforming the ability of the market to participate in the global economy."
A recent IBM Global Innovation Outlook (GIO) [report on Africa] indicates that the economies of dozens of African nations are growing at healthy rates, the best in the past 30 years, with a 5.5 to 5.8 percent average across the continent. This supports last month's news that [Top IBM thinkers to mentor African students]:
Hundreds of IBM scientists and researchers will mentor college students in Africa. Called Makocha Minds (after the Swahili word for "teacher"), the program will reach hundreds of computer science, engineering and mathematics students.
Makocha Minds is an off-shoot of IBM’s Global Innovation Outlook, an annual symposium of top government, business and academic leaders that uncovers new opportunities for business and societal innovation. "African students need to be trained in entrepreneurship so that they get out there and not just make jobs for themselves but create opportunities to employ others as well,” said Athman Fadhili, a graduate student at the University of Nairobi (Kenya).
Most of the mentoring will be via email and online collaboration.
Mentoring via email and online collaboration is very reasonable. I have mentored both high school and college students through a partnership between IBM Tucson and the Society of Hispanic Professional Engineers [SHPE]. While the kids were all located in Tucson, I rarely was, traveling nearly every week, but I made time for them via email and online collaboration wherever I happened to be.
To make this work, we need to get email and online collaboration into the hands of those who need them. I got my email thanking me for being a "first day donor" to the One Laptop Per Child "Give 1 Get 1" (G1G1) project, and have added this "badge" to the right panel of my blog. If you click on the badge, you will be taken to a series of YouTube videos that further describe the project.
According to the email, my donated XO laptop will soon be delivered into the hands of a child in Afghanistan, Cambodia, Haiti, Mongolia, or Rwanda.
How do these work? Instead of buying your uncle yet another $25 necktie, consider buying a $25 Kiva certificate. The $25 "micro loan" goes to someone in the developing world to improve their situation, start a business, get a job, and so on, and you give your uncle the Kiva certificate so that he can track the progress. I think that is very clever and innovative.
Registration for [IBM Pulse 2008] is now open! This is the first-ever global conference to cover not just Tivoli Storage software, but also the rest of the Tivoli portfolio, including the Maximo and Tivoli Netcool products, and disciplined service management and governance practices and procedures.
Join us on May 18-22 in Orlando, Florida. You'll learn how IBM service management solutions can give you the visibility needed to see all aspects of your business and manage it against objectives, the control to secure assets, and the automation to drive business agility for competitive advantage.
Leverage this opportunity to meet with fellow clients, IBM partners, industry analysts, and IBM experts in an environment dedicated to the latest technology, trends, and best practices in service management. Whether you are in network and service operations, IT, the executive office, a line of business, or services sales, IBM Pulse offers keynote presentations, in-depth seminar sessions, exhibitions, and hands-on labs.
But wait, there's more!
One-on-one meetings with IBM executives and industry experts
Presentations by more than 100 customers sharing their real-world experiences and lessons learned
An evening of "Speed Training" (a la [speed dating]) for technology consulting: Ask specific questions of our technical subject matter experts – and get answers instantly
I realize this conference is five months away; however, one of my pet peeves is learning about a conference, especially a first-of-its-kind conference like this one, at the last minute, and not having time to plan accordingly. Travel budgets are tight for lots of people, so as an added incentive there is a $600 US dollar discount per person if you register before February 1, 2008. So don't wait! Sign up today!
Over 4,000 issues of your favorite magazine now sit, ready for you to search and savor, on an 80GB, incredibly lightweight and travel-friendly drive. This high-performance, brushed-aluminum hard drive measures only 3x5 inches and can easily fit inside a purse or briefcase, so show it off to your tech-savvy friends and co-workers. Plus, there is plenty of extra room on the drive for future updates. Simply install The Complete New Yorker Program (installation CD provided), then connect the drive to a USB port on your computer and have instant access to every article, poem, short story, and cartoon, including every advertisement, that has appeared in the magazine since 1925.
System Requirements: Windows 2000 or XP, Mac OS X 10.3 or higher, USB 2.0 port, CD-ROM drive, 750 MB of free hard drive space, 1024 x 768 minimum screen resolution
The 750 MB of disk space required on your system probably contains the indexing/metadata search system to find articles by subject, title, or author. Linux is not listed, and if 750 MB of disk space is required to run the program, then perhaps this system won't work with Linux at all.
The system claims that there is extra room on the disk to ingest future issues of the magazine. I wonder why they didn't put the indexing/metadata search software on the drive itself, so that it would be self-contained, rather than having a separate installation CD.
I think this is a sign of our times. The New Yorker magazine has taken the archives that they keep anyways, and made them available in bulk, in a handy disk drive delivery system. I know several people who keep boxes and boxes of back issues of all kinds of magazines, and this certainly is an improvement.
[R&D Magazine] recently conducted a survey that asked readers to identify the world's most successful Research and Development (R&D) companies. The results are in: IBM was recognized as the best R&D company in the world when several different categories were evaluated, including:
R&D spending as a percentage of revenue
the number of patents
new products in development
The survey considered additional information on more than 130 companies such as data on intellectual property, community service and financial growth trends. Readers were also asked five distinct questions, including the following:
Where would you like to work based on their R&D?
What companies have the most improved R&D in the past five years?
What companies are the leaders in R&D?
Which company's R&D has the strongest influence on society?
Which company's R&D is the most proactive in high tech challenges?
Since it is often 5-15 years from when a scientist in one of our many research labs comes up with a clever idea to when it is a market success, it is good to have external recognition for the R&D efforts we are doing right now. Here is a link to a [four-page PDF] of the magazine article.
Take, for example, IBM's recent breakthrough in silicon photonics. Supercomputers that consist of thousands of individual processing nodes, typically running Linux on dual-core or quad-core processors, connected by miles of copper wires, could one day fit into a laptop PC. And while today's supercomputers can use the equivalent energy required to power hundreds of homes, these future tiny supercomputers-on-a-chip would expend the energy of a light bulb, making this solution more "green" for the environment. According to the [IBM Press Release]:
The breakthrough -- known in the industry as a silicon Mach-Zehnder electro-optic modulator -- performs the function of converting electrical signals into pulses of light. The IBM modulator is 100 to 1,000 times smaller in size compared to previously demonstrated modulators of its kind, paving the way for many such devices and eventually complete optical routing networks to be integrated onto a single chip. This could significantly reduce cost, energy and heat while increasing communications bandwidth between the cores more than a hundred times over wired chips.
“Work is underway within IBM and in the industry to pack many more computing cores on a single chip, but today’s on-chip communications technology would overheat and be far too slow to handle that increase in workload,” said Dr. T.C. Chen, vice president, Science and Technology, IBM Research. “What we have done is a significant step toward building a vastly smaller and more power-efficient way to connect those cores, in a way that nobody has done before.”
Today, one of the most advanced chips in the world -- IBM’s Cell processor which powers the Sony Playstation 3 -- contains nine cores on a single chip. The new technology aims to enable a power-efficient method to connect hundreds or thousands of cores together on a tiny chip by eliminating the wires required to connect them. Using light instead of wires to send information between the cores can be 100 times faster and use 10 times less power than wires.
The latest IBM Systems Journal has [fifteen articles about IBM Service Management], which includes the disciplines for managing storage resources as part of an overall IT data center. As with most journals, these articles are heavy academic efforts, not light summer reading.
However, since I have moved from marketing to consulting, I need to read these kinds of articles to keep up with the industry. I realize many people don't have time to read all of these, so over the next three days, I will give some quick highlights in hopefully more understandable language. Here is what I got out of the first five articles:
An Overview of IBM Service Management
This 10-page article provides a good overview of what the other articles cover in greater detail. The role of information has changed, from supporting back-office tasks like payroll and inventory, to enabling growth in the business itself, providing insight and competitive advantage. The challenges are summarized under "Four C's": Complexity, Change, Cost, and Compliance. The recommended approach is to engage with IBM, who has thousands of practitioners with years of experience in ITIL, eTOM, COBIT, CMMI and SOA.
Adding value to the IT organization with the Component Business Model
Many Service Level Agreements (SLAs) are loaded with technological jargon rather than concentrating on intended business results. CIOs must change this, and learn to run IT as a business with a service delivery focus. The IBM Process Reference Model for IT (PRM-IT) is the foundation for the Component Business Model for the Business of IT (CBMBoIT) that can assist with strategic decision making to transform IT into this new role.
An Integration model for organizing IT service management
There are so many ways to implement Information Technology Service Management (ITSM) that it is hard to tell if there are gaps or overlaps between products and offerings. A seamless solution requires common terminology and approaches. An integration model helps to bring all this together, focused on being consistent with existing practice, with clarity of expression, and practical to implement.
IBM Service Management architecture
Today's systems management tools are fragmented by resource domain--servers are managed here, networks managed there, and storage is another story altogether. IBM Service Management intends to integrate a portal-based user interface, a process runtime layer, a configuration management database (CMDB), and all the various operational management products (OMPs) for each resource. For example, IBM TotalStorage Productivity Center is an OMP for IBM and non-IBM storage resources.
A configuration management database architecture in support of IBM Service Management
IBM Tivoli Change and Configuration Management Database (CCMDB) holds all the configuration data of IT resources in the data center, including individual "configuration items" (CIs), and also tracks changes. The database is populated with data from different sources, including automatic discovery. Relationships between CIs provide a visual representation of application dependencies. The data model uses a clever combination of Unified Modeling Language (UML) with Java persistent objects.
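To illustrate the idea of CIs linked by relationships, here is a minimal sketch. This is not the actual CCMDB data model; the class, field, and CI names are all hypothetical, chosen only to show how relationship traversal exposes application dependencies.

```python
# Hypothetical sketch of configuration items (CIs) with typed relationships,
# illustrating how a CMDB can surface application dependencies.

class CI:
    def __init__(self, name, ci_type):
        self.name = name
        self.ci_type = ci_type
        self.depends_on = []  # relationships to other CIs

def dependencies(ci, seen=None):
    """Walk the relationship graph to list everything a CI depends on."""
    if seen is None:
        seen = set()
    for dep in ci.depends_on:
        if dep.name not in seen:
            seen.add(dep.name)
            dependencies(dep, seen)
    return seen

# A payroll application depends on a server, which depends on a disk volume.
volume = CI("vol01", "storage-volume")
server = CI("srv01", "server")
server.depends_on.append(volume)
app = CI("payroll", "application")
app.depends_on.append(server)

print(sorted(dependencies(app)))  # ['srv01', 'vol01']
```

A visualization layer would render this same graph as the dependency diagram the article describes.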
I'm continuing my coverage of IBM Systems Journal's [fifteen articles about IBM Service Management]. As storage hardware cost per GB declines 25 percent per year, the cost of labor has grown to nearly 70 percent of the total IT budget. This brings new focus on how we do things, rather than what things sit on the raised floor. Yesterday, my post summarized [the first five articles]. Here is what I got out of the next five articles:
Integrated change and configuration management
IT Infrastructure Library (ITIL) best practice covers a variety of disciplines, including incident management, problem management, release management, service help desk, change management, and configuration management. IBM has combined the last two into a single database, and this paper provides insights gained from implementing these in practice. A special section talks about how service providers can support multiple clients that must be kept separate from each other.
The process of building a Process Manager: Architecture and design patterns
Business processes coordinate and sequence the work done by a collection of people. Most companies define their business processes from scratch, and develop their own applications to support their implementation. Process Managers are "out of the box" applications that help customers integrate and automate more quickly than building from scratch. These Process Managers leverage and update information about configuration items (CIs) in the configuration management database (CMDB). One of the first developed by IBM was the IBM Tivoli Storage Process Manager.
Integration of domain-specific IT processes and tools in IBM Service Management
ITIL tells you what needs to get done, but it doesn't tell you exactly how to do it. Completing a simple change request to the IT environment can have a drastic impact on service level agreements (SLAs), utilization of existing storage capacity, and business operations. Sometimes it is important to use multiple Process Manager applications together. To accomplish this, it is important to launch and land in context at the appropriate points for a smooth transition.
Using a model-driven transformational approach and service-oriented architecture for service delivery management
Companies are considering outsourcing as a way to focus on core competencies. However, the trend is toward selective outsourcing, where the customer controls the IT solution architecture and retains their legacy tools. As a result, service providers inherit the business and IT processes from their clients. IBM Research has developed the model-driven business transformation (MDBT) method that choreographs workflow tools with human activities. A "balanced scorecard" allows both client and outsourcer to monitor progress towards strategic goals.
Catalog-based service request management
Service providers (outsourcers) are able to bring the latest IT technology, best practices, and skilled service delivery teams. Unfortunately, unique business processes from each client limit the ability to leverage these resources effectively. A service delivery management platform (SDMP) catalog serves as a repository of atomic services and the delivery teams that can perform them. This allows outsourcers to leverage resources across multiple clients, while still being able to tailor business compositions of these atomic services to an individual client's requirements.
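A toy sketch of the catalog idea: a shared repository of atomic services, with each client's offering composed from the same pool. The service names, teams, and effort figures are all made up for illustration; this is not the actual SDMP schema.

```python
# Hypothetical catalog of atomic services shared across clients.
catalog = {
    "provision-volume": {"team": "storage-ops", "hours": 2},
    "restore-backup":   {"team": "backup-ops",  "hours": 4},
    "patch-server":     {"team": "server-ops",  "hours": 1},
}

# Each client's offering is a composition of atomic services from the catalog.
client_compositions = {
    "client-a": ["provision-volume", "patch-server"],
    "client-b": ["provision-volume", "restore-backup"],
}

def effort(client):
    """Total delivery effort for a client's composition, in hours."""
    return sum(catalog[s]["hours"] for s in client_compositions[client])

print(effort("client-a"))  # 3
print(effort("client-b"))  # 6
```

The leverage comes from "provision-volume" appearing in both compositions: one delivery team performs the same atomic service for both clients.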
From a technology-oriented to a service-oriented approach to IT management
Companies are challenged with shifting from a technology/resource-oriented to a service-oriented approach to IT management. This involves new processes, a new reporting structure for the IT staff, new tools and technologies, and new data to be captured. A top-down approach is recommended for large organizations, but a bottom-up approach might be easier to implement for small and medium sized businesses.
IT service management architecture and autonomic computing
IBM has been promoting the concept of Autonomic Computing since 2001. A self-managed resource can have an autonomic manager with a sensor and an effector. The sensor is used to monitor status, a knowledge base can analyze and plan for appropriate modifications, and these are executed through the effector. The Autonomic Computing Reference Architecture (ACRA) aligns well with the Information Technology Service Management (ITSM) model, with the CMDB acting as the knowledge base for the autonomic managers. See my earlier post [Self tuning guitars and storage].
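The sensor/knowledge-base/effector cycle above can be sketched as a minimal Monitor-Analyze-Plan-Execute control loop. The managed resource, the 80% threshold, and the grow-by-25% policy are all hypothetical values chosen for illustration, not anything from the ACRA specification.

```python
# Minimal sketch of an autonomic control loop for a managed resource.

def monitor(resource):
    """Sensor: read the current utilization of the managed resource."""
    return resource["used"] / resource["capacity"]

def analyze(utilization, threshold=0.80):
    """Knowledge base: decide whether a change is needed."""
    return utilization > threshold

def plan(resource):
    """Plan a modification; here, grow capacity by 25%."""
    return int(resource["capacity"] * 1.25)

def execute(resource, new_capacity):
    """Effector: apply the planned change to the resource."""
    resource["capacity"] = new_capacity

disk_pool = {"capacity": 1000, "used": 900}  # GB; 90% utilized
if analyze(monitor(disk_pool)):
    execute(disk_pool, plan(disk_pool))

print(disk_pool["capacity"])  # 1250
```

In a real autonomic manager the "knowledge base" step would consult the CMDB rather than a hard-coded threshold.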
Evolving standards for IT service management
Changes to the IT infrastructure must be closely managed to avoid disruptions. IT organizations recognize that standards-based solutions enable interoperability, with less risk, to connect internal and external applications. Standards can be formally developed by standards bodies like ISO, IETF, W3C, OASIS, and DMTF; or be de facto standards that become widely used by companies, which can then later be adopted by standards bodies. SML and SDD are emerging standards that are incompatible with the current set of Web Services-based protocols, like WSDM, but work is underway to try to determine a unifying standard to support all of these under ITSM.
Prospects for simplifying ITSM-based management through self-managing resources
An ideal computing system would take over a great deal of its own management. Today's IT systems are brittle, difficult to understand, and dangerous to change. The savings from automating some tasks are dwarfed by the irreducible costs of human decision making, agreements, and approvals built into formal processes. A true self-managing, scalable IT system would consist of a number of nearly-identical boxes, with a web interface to define high-level policies and provide information on utilization and performance. As the system needs to expand, it can automatically place the order. When the new boxes arrive, they are placed and connected into the data center, and the system configures and provisions them appropriately.
IT Autopilot: A flexible IT service management and delivery platform for small and medium business
Using an airplane analogy, the pilot performs manual steps to get the plane safely off the ground, then turns it over to the autopilot for normal operations. The IT Autopilot intends to do this for IT service management in small and medium businesses (SMB) that may not have a large dedicated IT staff, using an SOA approach that is loosely coupled, stateless, and adheres to Web Services standards. The IT Autopilot employs workflow-based controls, the autonomic computing MAPE model, and customized policies to address SMB requirements. It could be deployed as an appliance, similar to IBM System Storage Productivity Center.
While Bill Gates is personally benefiting from code he wrote 30 years ago, most software engineers don't get royalties for their creative efforts. Robin Harris on StorageMojo has a great piece on [Why are the writers striking?] The writers in this case are those who write scripts for television programs. They get 4 cents for every $19.99 DVD sold today, and want this bumped up to 8 cents. More importantly, they want the same deal for content shown over the internet. Currently, they get nothing when content they wrote is shown on the internet, and they would like that fixed also.
Paying royalties to creative writers encourages them to write good stuff. The best stuff will result in more royalties, and we want to encourage this. What about software engineers? Don't we want them to write the best stuff also? Shouldn't they get royalties too, not just a flat salary and continued employment?
Last year, I covered Chris Anderson's book [The Long Tail]. This year, Chris Anderson, editor-in-chief of Wired.com, has an upcoming book titled Free: The Past and Future of a Radical Price. Chris talked about his book here at the Nokia World 2007 conference, and the [46-minute video] is worth watching. He asks the big question "What if certain resources were free?" This could be electricity, bandwidth, or storage capacity. He explores how this changes the world, and creates opportunities for new business models. However, many people are stuck in a "scarcity" model, treat nearly-free resources as expensive, and find themselves doing traditional things that don't work anymore. Chris mentions [Second Life] as an economy where many resources are free, and looks at how people respond to that. Rather than focusing on making money, new businesses are focused on gaining attention and building their reputation. Here are some example business models:
Cross-subsidy: give away the razors, sell the razor blades; or give away cell phones and sell minutes
Ad-Supported: magazines and newspapers sell for less than production costs
Freemium: 99% use the free version, but a handful pay extra for something more
Digital economics: give away digital music to promote concert tours
Free-sample marketing: give away samples to get word-of-mouth advertising
Gift economy: give people an opportunity and platform to contribute like Wikipedia
Nick Carr writes a post [Dominating the Cloud], indicating that IBM, Google, Microsoft, Yahoo and Amazon are the five computing giants to watch, as they are more efficient at converting electricity into computing than anyone else. Last month, I mentioned the IBM and Google partnership on cloud computing in my post [Innovation that matters: cell phones and cloud computing]. Nick's upcoming book titled [The Big Switch] looks into "Utility Computing", comparing companies' shift from generating their own electricity to using an electric grid with the recent developments of cloud computing and software as a service (SaaS). Amazon's latest "SimpleDB" online database is cited as an example.
Last, but not least, Seth Godin writes in his post [Meatballs and Permeability] about the bits-vs-atoms issue, what Chris Anderson above refers to as the new digital economy. The idea here is that value carried electronically as bits (digital documents, for example) has completely different economics than value carried as atoms (physical objects), and requires new marketing techniques. Methods from traditional marketing will not be effective in this new age. Here is a [review] of Seth's new book Meatball Sundae: Is Your Marketing Out of Sync?
All three of these books seem to be covering the same phenomenon, just from different viewpoints. I look forward to reading them.
This is a reasonable question. Since Invista 2.0 came out months ago in August, and Invista 2.1 is rumored to be out by the end of this month, why put out a press release now, rather than just wait a few weeks? The significant part of this announcement was that EMC finally has their first customer reference. To be fair, getting a customer to agree to be a reference is difficult for any vendor. Some non-profits and government agencies have rules against it, and some corporations just don't want to be bothered by journalists, or take phone calls from other prospective customers. I suspect EMC wanted to put the good folks from Purdue University in front of the cameras and microphones before they:
In Moore's terminology, Purdue University would be a "technology enthusiast", interested in exploring the technology of the EMC Invista. Universities by their very nature often see themselves as early adopters, willing to take big risks in hopes of reaping big rewards. The chasm happens later, when there are a lot of early adopters, all willing to be reference accounts. The mainstream market--shown here as pragmatists, conservatives, and skeptics--is unwilling to accept reference claims from early adopters, searching instead for moderate gains from minimal risks. They prefer references from customers that are similar in size and industry. Whether a vendor can get a product to cross this chasm is the focus of the book.
Why "SAN" virtualization?
Technically, Invista is "storage" virtualization, not "SAN" virtualization. Virtualization is any technology that makes one set of resources look and feel like a different set of resources, preferably with more desirable characteristics. You can virtualize servers, SANs, and storage resources.
Virtual SAN (VSAN) technology, supported by the Cisco MDS 9500 Series Multilayer Director Switch, partitions a single physical SAN into multiple VSANs, allowing different business functions and requirements to share a common physical infrastructure.
How does Invista advance Cisco's VSAN functionality? It doesn't, but that doesn't make the title a falsehood, or the press release by association full of lies. If you read the entire press release, EMC correctly states that Invista is "storage" virtualization. Some storage virtualization products, like EMC Invista and IBM System Storage SAN Volume Controller (SVC), require a SAN as a platform on which to perform their magic. Marketing people might use the term "SAN" to refer not just to the network gear that provides the plumbing, but also to the storage devices that are attached to the SAN. In that light, the use of "SAN virtualization" can be understood in the title.
More importantly, it appears that EMC no longer requires that you purchase new SAN equipment from them with Invista. When Invista first came out, it cost over a quarter-million US dollars to cover the cost of the intelligent switches, but with the price drop to $100K, I imagine this means they assume everyone has an appropriately-supported intelligent switch already deployed.
Why this architecture?
In his post [Storage Virtualization and Invista 2.0], EMC blogger ChuckH does a fair job explaining why EMC went in this direction for Invista, and how it is different from other storage virtualization products.
Most storage virtualization products are cache-based. The world's first disk storage virtualization product, the IBM 3850 Mass Storage System, introduced in 1974, and the first tape virtualization product, the IBM 3494 Virtual Tape Server, introduced in 1997, both used disk cache in front of tape storage. Later virtualization products, like IBM SVC and HDS USP-V, use DRAM memory cache in front of disk storage, but the concept is the same. People are comfortable with cache-based solutions, because the technology is mature and well proven in the marketplace, and are excited and delighted that these can offer the following features in a mixed heterogeneous disk environment:
instantaneous point-in-time copy
None of these features are provided by Invista, as there is no cache in the switch. Instead, Invista is a "packet cracker": it cracks open each FCP packet, inspects and modifies the contents, then passes the FCP packet along to the appropriate storage device. This process slows down each read and write by some amount, perhaps 20 microseconds. The disadvantage of slowing down every read and write is offset by other benefits, like non-disruptive data migration.
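To put that 20 microseconds in perspective, a quick back-of-the-envelope calculation. The 5 ms disk service time below is my own illustrative assumption for a spinning-disk read of that era, not a figure from EMC or the press release.

```python
# Rough arithmetic on the per-I/O overhead of cracking each FCP packet.
overhead_us = 20     # per read/write, as estimated in the post
disk_io_us = 5000    # assumed ~5 ms service time for one disk I/O

relative = overhead_us / disk_io_us
print(f"{relative:.1%} added to each I/O")  # 0.4% added to each I/O
```

Against a slow mechanical disk the overhead is nearly invisible; it would matter far more for cache hits measured in microseconds.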
To compensate for Invista's inability to provide these features, EMC offers a second solution called EMC RecoverPoint, an in-band cache-based appliance similar in design to SVC, but which maps all virtual disks one-to-one to physical disks. It offers remote distance asynchronous mirroring between heterogeneous devices. EMC supports RecoverPoint in front of Invista, but if you are considering buying both to get the combined set of features, you might as well buy an IBM SVC or HDS USP-V instead, getting one system rather than two, which is much less complicated. IBM SVC and HDS USP-V have both "crossed the chasm", having sold thousands of units to every type and size of customer.
Hopefully, this answers the questions you might have about EMC Invista.
As we wrap up the year, people's thoughts turn to archive anddata retention.
The [Robert Frances Group] has put out a research paper titled Optimizing Data Retention and Archiving - November 2007 that helps IT executives understand the cost differences between a disk-only archive approach and a disk/tape archive approach, and how an [IBM System Storage DR550] offering can help address long-term storage archive requirements with a world-class storage strategy that reduces cost, improves efficiency and supports compliance. Here is an excerpt:
Ongoing legal, audit, and regulatory requirements will continue to drive IT groups to improve archive policies, processes, strategy, and efficiency. The choice of which technologies to use will have a profound impact on the success of such efforts, since technologies like the DR550 embody many aspects of the strategy, processes, and policies that must be decided upon. When it comes to tape, IBM's DR550 is unique in providing that support. Competitors tout disk-only solutions as the wave of the future, but research indicates otherwise. The most basic benefits are cost and mobility, and despite the various vendor proclamations to the contrary, tape is still only a fraction of the cost of disk and will remain so in the foreseeable future.
This paper is yet another nail in the coffin of EMC Centera. In his post [Anyone Naughty on Your List…], Jon W Toigo points to an eBay fire sale of an EMC Centera Gen 4.
There has never been a better time to switch from EMC Centera to the IBM System Storage DR550.
Well, tomorrow is the Winter solstice, at least for those of us in the Northern hemisphere of the planet. As often happens, I have more vacation days left than I can physically take before they evaporate at the end of the year, so next week I will be off, going to see movies like the new ["Golden Compass"] or perhaps reading the latest book from [Richard Dawkins].
Next week, I suspect some of the kids on my block will be playing with radio-controlled cars or planes. If you are not familiar with these, here's a [video on BoingBoing] that shows Carl Rankin's flying machines that he made out of household materials.
Which brings me to the thought of scalability. For the most part, the physics involved with cars, planes, trains or sailboats applies at the toy-size level as well as the real-world level. One human operator can drive/manage/sail one vehicle. While I have seen a chess master play seven opponents on seven chess boards concurrently, it would be difficult for a single person to fly seven radio-controlled airplanes at the same time.
How can this concept be extended to IT administrators in the data center? They have to deal with hundreds of applications running on thousands of distributed servers. In a whitepaper titled [Single System Image (SSI)], the three authors write:
A single system image (SSI) is the property of a system that hides the heterogeneous and distributed nature of the available resources and presents them to users and applications as a single unified computing resource.
IBM has some offerings that can help towards this goal.
Even in the case where your vehicle is being pulled by eight horses--(or eight reindeer?)--a single operator can manage it, holding the reins in both hands. In the same manner, IBM has invested heavily in research on supercomputers, where hundreds of individual servers all work together towards a common task. The operator submits a math problem, for example, and the "single system image" takes care of the rest, dividing the work up into smaller chunks that are executed on each machine.
When done with IBM mainframes, it is called a Parallel Sysplex. The world's largest business workloads are processed by mainframes, and connecting several together, working in concert, makes this possible. In this case, the tasks are typically just single transactions; there is no need to divide them up further, just balance the workload across the various machines, with shared access to a common database and storage infrastructure so they can all do the work equally.
Last August, in my post [Fundamental Changes for Green Data Centers], I mentioned that IBM consolidated 3900 Intel-based servers onto 33 mainframes. This not only saves lots of electricity, but makes it much easier for the IT administrators to manage the environment.
Parallel Sysplex configurations often require thousands of disk volumes, which would be quite a headache to deal with individually. With DFSMS, IBM was able to create "storage groups", where a few groups hold all the data. You might have reasons to separate some data from the rest, putting it in separate groups. An IT administrator can handle a handful of storage groups much more easily than thousands of disk volumes. As businesses grow, there is more data in each storage group, but the number of storage groups remains flat, so an IT administrator can manage the growth easily.
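The storage-group idea can be sketched in a few lines: volumes of data collapse into a handful of groups chosen by policy. The toy policy and group names below are hypothetical, not actual DFSMS ACS routine logic.

```python
# Toy policy routing datasets into a small number of storage groups.

def assign_group(dataset):
    """Route a dataset to a group based on its attributes."""
    if dataset["critical"]:
        return "SGPROD"
    if dataset["type"] == "temp":
        return "SGTEMP"
    return "SGTEST"

datasets = [
    {"name": "PAYROLL.DB", "type": "db",   "critical": True},
    {"name": "SORT.WORK",  "type": "temp", "critical": False},
    {"name": "DEV.COPY",   "type": "db",   "critical": False},
]

groups = {}
for ds in datasets:
    groups.setdefault(assign_group(ds), []).append(ds["name"])

print(groups)
```

However many datasets arrive, the administrator still manages the same three groups; only their contents grow.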
IBM System Storage SAN Volume Controller (SVC) is able to accomplish this for other distributed systems. All of the physical disk space assigned to an SVC cluster is placed into a handful of "managed disk groups". As the system grows in capacity, more space is added to each managed disk group, but the number of groups stays small, so a few IT administrators can continue to manage this easily.
The new IBM System Storage Virtual File Manager (VFM) is able to aggregate file systems into one globalname space, again simplifying heterogeneous resources into a single system image. End users have a singledrive letter or mount point to deal with, rather than many to connect to all the disparate systems.
Lastly, we get to the actual management aspect of it all. Wouldn't it be nice if your entire data center could be managed by a hand-held device with two joysticks and a couple of buttons? We're not quite there yet, but last October we announced the [IBM System Storage Productivity Center (SSPC)]. This is a master console with a variety of software pre-installed to manage your IBM and non-IBM storage hardware, including SAN fabric gear, disk arrays and even tape libraries. It lets the storage admin see the entire data center as a single system image, displaying the topology in a graphical view that can be drilled down, using semantic zooming, to look at or manage a particular device or component.
Customers are growing their storage capacity on average 60 percent per year. They can do this by having more and more things to deal with, and gripe about the complexity, or they can try to grow their single system image bigger, with interfaces and technologies that allow the existing IT staff to manage the growth.
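To see how quickly 60 percent annual growth compounds, here is a quick calculation. The 100 TB starting point is an arbitrary illustration, not a figure from any customer.

```python
# Compounding 60% annual storage growth over five years.
growth = 1.60
capacity = 100.0  # TB today, an arbitrary starting point

for year in range(5):
    capacity *= growth

print(round(capacity))  # 1049 TB after five years, roughly 10x
```

Roughly a tenfold increase in five years: the case for growing the single system image rather than the headcount.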
My XO laptop arrived Friday, December 21; it came from the [Give 1 Get 1 (G1G1)] program of the One Laptop Per Child (OLPC) foundation. The program continues to the end of this month (December 31).
Here are my first impressions.
Setup was Easy
Open the box, put in the battery, and plug in the adapter. Enter your name and choose your favorite color for your stick figurine. No passwords, no parameters. Software is pre-installed and ready to use.
The four pages of instructions included how to open the unit (not intuitive), where the various connection ports are located, what the home screen and neighborhood screen look like, safety warnings, and a nice letter from Nicholas Negroponte with an 800 phone number and website in case more help is needed.
Connecting to the internet was the first thing I did. The neighborhood screen shows all the Wi-Fi access points. It recognized mine and three others. I clicked on mine, entered my WEP key, and was connected.
This is a Linux operating system running the Sugar user interface. There are four screens:
Neighborhood - shows all Wi-Fi access points
Friends - shows all other XO laptops nearby; in my case, I am all alone
Home - your stick figurine, with all the applications you can choose from represented as icons at the bottom, just like OS X on my Mac Mini, or the launchpad on my Windows XP. The left panel holds clipboard items.
Application - Applications run in full-screen mode
Four buttons across the top allow you to jump to any screen instantly. Everything else is a single left-click. No double-clicks or right-clicks.
A circle on the home screen designates which applications are running, and how much of the available 256MB RAM they are consuming. This makes it easy to see if you can run more applications or need to shut something down. You can jump to any application, or shut it down, from this view.
Shutting down the XO is done by clicking your stick figurine, and choosing shutdown.
I fired up the browser. The default 'home page' offers some help offline, as well as links to online resources and a Google search bar. The full-color 1200x900 screen is very easy to read. You can hit ctrl+plus to make the fonts bigger. In bright sunlight, the screen automatically switches to greyscale. The built-in browser is easy enough to use, with standard back, forward, re-load, and bookmark buttons. The URL entry field also shows the page's title. It doesn't have tabs to see multiple pages at the same time, but I was able to fire up a second instance of the browser, so that I could alt-tab back and forth between the two web sites.
There are so many applications that they don't all fit on the bottom of the screen. Left and right tab buttons will display the next set. I don't know if it is possible to re-order the icons, but I can certainly see some applications appealing to different ages, and perhaps re-ordering them into age-specific groups might be helpful.
Basic applications include the Abiword word processor, a PDF viewer, a simple paint program, calculator, chat, and news RSS feed reader; TamTam music to play and edit compositions; and some learn-to-program-a-computer software including Pippy, Etoys, and TurtleArt.
The 'record' program lets you take 640x480 pictures with the built-in camera, and up to 45 seconds of video and audio recording. The picture above was taken with my XO, and edited online using [snipshot.com]. Another program can be used to make video calls to another computer, similar to Skype or IBM Lotus Sametime.
The XO has a built-in microphone and speakers, but also microphone and speaker ports, as well as three USB ports and a slot for an SD memory card.
The QWERTY keyboard is designed for small children's hands; I found myself using my two index fingers in a hunt-and-peck style. People who use BlackBerrys or other hand-held devices might be able to use their two thumbs instead. Also, I am not used to a touchpad as the pointing device. My other laptops have a red knob between the G/H/B keys that acts like a joystick. So, I decided to attach my Apple keyboard/mouse to one USB port, which allows me faster typing and better resolution with my mouse.
I also inserted a 1GB SD card into the slot. Getting to the SD slot was challenging--you have to rotate the screen 90 degrees so that the lower right corner is over the laptop handle. It appears I need to purchase some tweezers to get my SD card back out, so until then, it will remain there as a permanent addition to my XO.
A terminal application provides a command line interface into Linux.
The 'vi' editor is installed, in case I need to make changes to fstab or anything else in my /etc directory.
There is no S-video or VGA port. However, a teacher could probably fold this laptop up in e-book mode and lay it flat on an [overhead projector], since the screen can handle bright sunlight in black-and-white mode.
The Journal and the Clipboard
There are no folders or subdirectories here. The journal acts as your desktop, holding all the files you have referenced, sorted in chronological order with the most recent on top. The journal application is started automatically when you boot up. My SD card is shown as a separate entry at the bottom right corner, but I have access only to files in the top-level directory on the card. The journal allows you to drag and drop between the system and the SD flash card. The list can be filtered by file type and application, so finding things is easy. You can also copy anything in the journal to the clipboard, which then appears on the left panel of the home screen. You can then launch or paste this into other applications.
Pressing Alt-1 takes a 1200x900 snapshot of the current screen and puts it into the journal. On websites that allow you to upload a file, including GMAIL, snipshot.com, etc., the browse button brings up the journal. So, for example, you could take a snapshot of the current web page or paint creation, and send it as an attachment to someone via GMAIL. Google has an XO-enabled version of GMAIL that you can download from the OLPC activities page.
This entire post, including the picture above, was done with the XO laptop itself. I am impressed with the thought that went into this design, and I see great potential here. The interface adequately hides the Linux operating system for those who just want to use the computer, but makes it readily accessible for those who want to learn more about the Linux operating system and computer programming.
Continuing my week's theme on the XO laptop from the One Laptop Per Child [OLPC] project, I have been amused watching the OLPC forum discussion on the choice of browser options available.
The built-in browser is simple but functional. It is full screen, with back, forward, and bookmark buttons, and an entry field for the URL. This browser is fully integrated with the Sugar platform: files downloaded will appear in the Journal. Download an Activity *.xo file, for example, and you can install it from the Journal. If you want to upload a file, click BROWSE on the website, and the Journal will pop up for you to choose files from.
Out of the box, the XO supports a minimal Flash that can handle some Flash-based games but not YouTube videos, and does not support Java.
The good folks of Opera have built a special edition for the XO laptop. However, some settings need to be changed to make the fonts large enough to read.
Opera can be run as a Sugar activity, but this just launches a mother task, which in turn launches a daughter task that actually runs the browser. This means that Home View will have two icons. The mother task has the Opera icon, but click on it and you get a grey screen. The daughter task appears as a grey circle; click on it and you get the browser screen. Alt-Tab rotates through the Activities, so the grey screen of the mother task is part of the rotation.
Although Opera has one foot on the Sugar platform and one foot off, the lack of integration means poor interaction with the Journal. The use of Opera is correctly registered. However, downloading files requires a working knowledge of subdirectories, and uploading anything requires knowing what the file is called and where it is located. Neither is obvious for many of the items created by Sugar applications.
The XO laptop is based on the Red Hat Fedora distribution, so I downloaded the Firefox RPM package and installed it. To run it, you need to start the Terminal activity, and then type "firefox" at the cursor. The Journal only registers that the Terminal activity was used, but nothing else.
Since I run Firefox 2.0 on Windows XP, OS X, and Linux, I am very familiar with this browser, and it works as expected. Like Opera, there are shortcut keys, tabs for multiple pages, and options to add the Java and Flash players. I was able to install add-ons for Del.icio.us and FireFTP, and they worked as expected. Having access to FTP sites will make development on the XO much easier. Again, all files are uploaded and downloaded to directories, so some working knowledge of where files are placed is required.
The fonts in Firefox did not expand and shrink as nicely as they had in Opera. Be careful not to select "View->
To close, you have to select File->Quit from the browser window, which brings you back to the Terminal activity, which you can then shut down with Ctrl-Esc.
For now, I will keep all three and continue to evaluate them. I saw a few opportunities for improvement:
The Opera and Terminal icons are not on the first screen. You have to hit the right arrow to get to the "overflow" set of icons. Re-ordering the icons is simply a matter of editing the following file with "vi" (the first few lines I use are shown below):
Put the activities in the order you want. Any activity not listed willappear after these.
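Before editing, it is worth keeping a copy of the original so a bad edit can be backed out. A minimal sketch follows; note that the file path shown is my assumption (it varies by build), so substitute wherever your build keeps its activity list:

```shell
# ASSUMPTION: the activity-order file lives here on this build;
# check your own system before editing.
FAVORITES=/home/olpc/.sugar/default/activities.defaults

# keep a backup in case the edit breaks Home View
cp "$FAVORITES" "$FAVORITES.bak"

# re-order the entries, one activity per line, most important first
vi "$FAVORITES"
```

If the edit does go wrong, restoring the `.bak` copy and rebooting puts things back as they were.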
It might be possible to create a modified Terminal activity that invokes Firefox directly, to eliminate having to type it in each time.
Several people have expressed interest in a browser that runs entirely with the XO laptop folded over in eBook/Game mode, such that the keyboard is completely covered up, exposing only the up-left-right-down arrows and the Circle/Square/X/Check buttons.
Change the "News Reader" to invoke Bloglines instead. This might be yet another modified Terminal activity, but borrowing the icon from News.
Well, if you have further thoughts on these browsers, enter a comment below.
Wrapping up this week's theme on the XO laptop, I decided to take on the challenge of printing. I managed to print from my XO laptop to my laser printer. I checked the One Laptop Per Child [OLPC] website and found there is no built-in support for printers, but several people have asked how to print from the XO, so here are the steps I took to make it happen.
(Note: I did all of these steps successfully on my Qemu-emulated system first, and then performed them on my XO laptop)
Step 1: Determine if you have an acceptable printer
The XO laptop can only connect to a printer via USB cable or over the network. Check your printer to see if it supports either of these two options. In my case, my printer is connected to my Linksys hub that offers Wi-Fi in my home.
The XO runs a modified version of Red Hat's Fedora 7, so we also need to determine if the printer is supported on Linux. Check the [Open Printing Database] for the level of support. This database uses the following ranking system: printers are categorized according to how well they work under Linux and Unix. The ratings do not pertain to whether the printer will be auto-recognized or auto-configured, but merely to the highest level of functionality achieved.
Perfectly - everything the printer can do is working also under Linux
Mostly - work almost perfectly - funny enhanced resolution modes may be missing, or the color is a bit off, but nothing that would make the printouts not useful
Partially - mostly don't work; you may be able to print only in black and white on a color printer, or the printouts look horrible
Paperweight - These printers don't work at all. They may work in the future, but don't count on it
If your printer only supports a parallel cable connection, or does not have a high enough ranking above, go buy another printer. The [Linux Foundation] website offers a list of suggested printers and tutorials.
In my case, I have a Brother HL5250-DN black-and-white laser printer connected over the network to my Windows XP, OS X and other Linux systems. It is rated as supporting Linux perfectly, so I decided to use it with my XO laptop.
Step 2: Install Common UNIX Printing System (CUPS)
Technically, Linux is not UNIX, but for our purposes it is close enough. Start the Terminal activity, use "su" to change to root, and then use "yum" to install CUPS. Yum will automatically determine what other packages are needed, in this case paps and tmpwatch. Once installed, use "/usr/sbin/cupsd" to start the CUPS daemon, and add this command to the end of rc.local so that it gets started every time you reboot.
[olpc@xo-10-CC-6F ~]$ su
bash-3.2# yum install cups
...
Total download size = 3.0 M
Is this OK [y/N]? y
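The daemon start and the rc.local addition can be done from that same root shell. A sketch, assuming rc.local lives at /etc/rc.d/rc.local as is standard for Fedora:

```shell
# start the CUPS daemon right now
/usr/sbin/cupsd

# have it start automatically on every reboot
# (ASSUMPTION: rc.local is at /etc/rc.d/rc.local, the Fedora default)
echo "/usr/sbin/cupsd" >> /etc/rc.d/rc.local
```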
Step 3: Install a browser that can download files

To download the appropriate drivers, you need a browser that can handle file downloads. I tried to do this with the built-in Browse activity (aka Gecko) but encountered problems. I have both Opera and Firefox installed, but I will focus on Opera for this effort. I also installed an older version of the Flash player (it worked better for me than the latest version) and the Java JRE. Follow the OLPC Wiki instructions for [Opera, Adobe Flash, and Sun Java] installation, then verify with the following [Java and Flash] testers.
Step 4: Download drivers and packages unique for your printer
In my case, I used Opera to get to the [Brother Linux Driver Homepage], and downloaded the RPMs for the LPR driver and the CUPS wrapper. These are the ones listed under "Drivers for Red Hat, Mandrake (Mandriva), SuSE". I saved these in the "/home/olpc" directory.
Step 5: Set a temporary root password

By default, the root user has no password. However, you will need one for later steps, so here is the process to create a root password. I set mine to "tony", which normally would be considered too simple a password, but ignore those messages and continue. We will remove it in step 8 (below) to put things back to normal.
[olpc@xo-10-CC-6F ~]$ su
bash-3.2# passwd
Changing password for user root.
New UNIX password: tony
BAD PASSWORD: it is too short
Retype new UNIX password: tony
passwd: all authentication tokens updated successfully
bash-3.2# exit
[olpc@xo-10-CC-6F ~]$
Step 6: Launch CUPS administration
Here I followed the instructions in Robert Spotswood's [Printing In Linux with CUPS] tutorial. Launch the Opera browser, and enter "http://localhost:631/admin" as the URL. The localhost refers to the laptop itself, and 631 is the special port that CUPS listens on for browsers. You can also use 127.0.0.1 interchangeably with "localhost".
In my case, it detected both of my networked printers, so I selected the HL5250DN and entered the location of the PPD file "/usr/share/cups/model/HL5250DN.ppd" that was created in Step 4. I set the URI to "lpd://192.168.0.75/binary_p1" per the instructions [Network Setting in CUPS based Linux system] on the Brother FAQ page. I changed the page size from "A4" to "Letter", and set this printer as the default printer. When it asks for a userid and password, that is where you enter "root" for the user, and "tony" or whatever you decided to set your root password to.
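For the command-line inclined, the same queue can be defined without the web interface using the CUPS lpadmin tool. A sketch, reusing the URI and PPD path from the steps above (run as root; the queue name is my choice):

```shell
# define the queue, pointing at the networked Brother via LPD,
# and enable it (-E)
/usr/sbin/lpadmin -p HL5250DN -E \
    -v lpd://192.168.0.75/binary_p1 \
    -P /usr/share/cups/model/HL5250DN.ppd \
    -o media=Letter

# make it the system default printer
/usr/sbin/lpadmin -d HL5250DN
```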
Select "Print a Test Page" to verify that everything is working.
Step 7: Printing actual files
Sadly, I don't know Opera well enough to know how to print from there, so I went over to my trusted Firefox browser. Select File->Page Setup to specify the settings, File->Print Preview to see what it will look like, and then File->Print to send it to the printer.
To print the file "out.txt" in your /home/olpc directory, for example, enter "file:///home/olpc/out.txt" as the URL in the Firefox browser. This will show the file, which you can then print to your printer. I had to specify 200% scaling, otherwise the fonts were too small to read.
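Once CUPS is working, plain text files can also be sent straight from the Terminal activity with the lpr command that CUPS provides, skipping the browser entirely:

```shell
# send the file to the default printer
lpr /home/olpc/out.txt

# or name the queue explicitly, then check its status
lpr -P HL5250DN /home/olpc/out.txt
lpq -P HL5250DN
```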
Step 8: Remove the "root" password
If you want to remove the root password, here are the steps.
[olpc@xo-10-CC-6F ~]$ suPassword: tonybash-3.2# passwd -d rootRemoving password for user root.passwd: Successbash-3.2# exit[olpc@xo-10-CC-6F ~]$
Now the problem is that there is no way to print from any of the Sugar activities. The best place to put print support would be the Journal activity. Along the bottom, where the mounted USB keys are located, there could be an icon for a printer, and dragging a file down to the printer object could cause it to be sent to the printer.
The alternative is to write some scripts, invocable from the Terminal activity, to determine what is in the journal and send it to LPR with the appropriate parameters.
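Such a script might start out like the rough sketch below. Note that the datastore path is an assumption on my part; adjust it for your build:

```shell
#!/bin/sh
# print-journal.sh -- a rough sketch, not a finished tool.
# ASSUMPTION: journal entries live as plain files under this
# datastore directory; the path varies by build.
DATASTORE="${1:-/home/olpc/.sugar/default/datastore}"

# list the candidate files found in the datastore, sorted by name
list_printable() {
    find "$1" -type f 2>/dev/null | sort
}

# show what is available; pick one and send it with: lpr "$file"
list_printable "$DATASTORE"
```

From there, the script could be extended to filter by file type and pass each chosen entry to lpr with the appropriate options.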
I did not have time to do either of these, but perhaps someone out there can take on that as a project.
Continuing my week's theme on the XO laptop from the One Laptop Per Child [OLPC] foundation, I successfully managed to emulate my XO on another system.
Part of what is attractive about the XO laptop is the hardware: the high-resolution 200dpi screen, the clever screen that rotates and folds flat into an eBook reader, and the water-tight, dust-proof keyboard. The other part is the software: how they managed to pack an entire operating system, with useful applications, into a 1GB NAND flash drive.
The drawback for developers like me is the risk of changing something that breaks the system. For example, my first attempt to create my own activity resulted in a blank space in my action bar, and my journal went into some infinite loop, blinking as if it were still loading for minutes on end. I fixed it by deleting the activity I created and rebooting.
To get around this, I successfully ran the disk image under Linux virtual machine software called Qemu. This is an open source offering, with a proprietary add-on accelerator called Kqemu. Here were the steps involved:
Base Operating System
Qemu is now available to run on Linux, Windows and OS X on Intel. I have the Ubuntu 7.04 "Feisty Fawn" version of Linux installed on my system from a project I did last year, so I decided to use that.
Normally, "apt-get install qemu" would be enough, but I wanted to get the latest release, so I downloaded the [0.9.0 version] tarball of compiled binaries. Note that trying to compile Qemu from source requires a downlevel gcc-3.x compiler, and my attempts to do this failed. The compiled binaries worked fine.
The Kqemu author hasn't packaged it for distribution, so I downloaded the source code and did my own compile. You can do the "configure-make-install" with the regular gcc 4.1 compiler, and it went smoothly.
Getting Kqemu active was a bit of a challenge. I had to make sense of Nando Florestan's [Installing Kqemu in Ubuntu] article, and the subsequent comments that followed.
There is a tiny [8MB Linux image] that can be used to verify that Kqemu is activated correctly.
The Disk Image
As with other development efforts, there are the older stable versions and the bleeding-edge development versions. I chose the 650 build from the [Ship.2 stable versions], which matches the version on my XO laptop. The image comes as a *.bz2 file, which is highly compressed. Using "bunzip2", the 221MB file expands to something like 932MB.
I renamed the resulting file to "build650.img".
Once I got all this done, I made a simple script called "launch" in my /home/tpearson/bin directory:
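For readers who want to try the same thing, a minimal "launch" script might look like this sketch. The flags shown are my assumptions based on Qemu 0.9.0 options (the sound parameter is the one mentioned later in this post); adjust memory and networking to taste:

```shell
#!/bin/sh
# launch -- boot an OLPC disk image under Qemu
# usage: launch build650.img
# -kernel-kqemu enables the Kqemu accelerator; drop it if the
# kqemu module is not loaded on your system.
qemu -m 256 \
     -kernel-kqemu \
     -soundhw es1370 \
     -net user -net nic \
     -hda "$1"
```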
Then "launch build650.img" was all I needed to run the emulation. The full-screen mode helps emulate the view on the XO laptop. I was able to change the jabber server to "xochat.org" and see other XO laptops online in my neighborhood view.
When running under Qemu, you can't just press Ctrl-Alt-something. For example, Ctrl-Alt-Erase on the XO reboots the Sugar interface; do this on a Linux system, however, and it reboots your native X interface, blowing away everything. Instead, you press Ctrl-Alt-2 to get to the Qemu console, designated by the (qemu) prompt, and enter the equivalent key-injection command there.
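At the Qemu monitor, the "sendkey" command injects a key combination into the guest without your host X session ever seeing it. A likely form for the Ctrl-Alt-Erase example (mapping the XO's Erase key to backspace is my assumption):

```shell
# at the (qemu) monitor prompt, reached via Ctrl-Alt-2:
sendkey ctrl-alt-backspace
```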
Press "Ctrl-Alt-1" followed by "Ctrl-Alt" to get back to the emulated XO screen.
With this emulation, I am more likely to try new things, change files around, edit system files,and so on, without worrying about rendering my actual XO laptop unusable. Once debugged, I canthen work on moving them over to my XO, one at a time.
Yesterday, I was able to get "Build 650" up and running under Qemu emulation on my Thinkpad laptop computer. Today, I was able to get my Thinkpad and my XO laptop talking to each other for a "chat".
The built-in "Chat" activity is one of the many kid-friendly activities included on the XO laptop for the One Laptop Per Child [OLPC] project. It is also possible for two or more people to share other activities, like editing a text document or browsing the internet.
As they say, emulation is only 95% complete, and this is true in this case as well. My Thinkpad does not have a built-in video camera, and for some reason the Qemu emulation does not let me hear any sound, despite my specifying the "-soundhw es1370" parameter. And lastly, it doesn't have the "mesh network" built-in Wi-Fi capability, just standard 54Mbps 802.11g through my Linksys router.
So, I set both the XO and the Thinkpad to use the new "xochat.org" jabber server so that the two could see each other:
$ sugar_control_panel -s jabber xochat.org
I set my XO nickname to "TonyP" and my Thinkpad to "Pearson", and chose blue-orange for the first and orange-blue for the second.
The process of starting a chat is similar to other IM systems like IBM Lotus Sametime. You have a neighborhood view that shows all the people online using the same jabber server. In my case there were about 30 or so icons on the screen. From the colors on my XO, I was able to locate my Thinkpad and invite it to a chat. You can share the chat with everyone on the network, or keep it private between two people. I tried both ways to see the difference.
In a private two-way chat, the first person starts up their Chat activity and sends an invite to the other person. The second person sees a flashing chat bubble at the bottom of the screen, to the left of all the other action bar icons. The difference is that the chat bubble is blue-orange, matching the sender, rather than the black-and-white of the rest of the icons.
If the recipient happens to be busy doing something else full-screen, like browsing the web, there doesn't seem to be any interruption. Only when he goes to "home view" will he see the colored chat bubble and decide whether to join.
The chat itself colorizes the text to match the color of each participant's icon: blue for one, and orange for the other. If two people had identical color schemes, I guess it might be hard to tell them apart. The text is white, so it is best to choose darker colors for contrast.
A nice feature is that you can save your chat session with the "keep" button in the upper right part of the screen, and your discussion will show up as an entry in the "journal".
Using this technique, it is possible for someone who has one "XO" laptop and one regular computer, or two regular computers, to develop and test applications that involve the sharing aspect of these educational opportunities. Chats can be between students, student-to-teacher, or even student-to-mentor.
Well, it's the last day of the year, and I will be celebrating the new year soon. In the meantime, I leave you with an interesting triple combo related to information.
Nick Carr, in his post [Cleaning the Slate], offers a list of articles he did not have time for in 2007. Of these, I enjoyed the 7-page keynote address [Information, Knowledge, Authority and Democracy] by Hunter R. Rawlings III. He talks about the importance of recorded knowledge, including discussions by US founding fathers Thomas Jefferson and James Madison, and how information is an essential part of democracy. Here's a brief excerpt:
Following the burning of the Capitol in 1815, President James Madison restored the Library of Congress by purchasing Thomas Jefferson's library for the nation. It was Jefferson's unique classification scheme that the first full-time Librarian of Congress, appointed by Madison, used in reorganizing the Library. The United States, embodied in the Congress, was to have the best library in the world because knowledge was necessary to its fundamental purpose, the creation and protection of liberty.
James Madison believed, in other words, that he lived in a "knowledge age." In our myopic way, we like to think that we invented the knowledge age sometime late in the 20th century. We did not. Madison and his contemporaries had complete faith and confidence in the necessity of what they called "useful knowledge," which, of course, privileged many things we no longer consider useful, such as the ability to read Latin and Greek and to understand the lessons of ancient history.
...by employing collaborative filtering, you use other people's time to weed out the things that would waste yours. In fact, Del.icio.us and StumbleUpon poll your friends and people with similar interests for the most crucial sources of information and anything else you might have accidentally skipped over. If The Wisdom of Crowds has taught us anything, it is that a large group of people is drastically more efficient than you'll ever be on your own.
Unless you enjoy grinding yourself to the bone, use this principle—whether you call it “crowdsourcing” or otherwise—to stop drinking from the information fire hose. It’s not more information, it’s better information, that distinguishes the real winners in business and life.
Finally, Galacticast presents [A Copyright Carol], a humorous 5-minute parody video on what might happen in the future as a result of laws like the Canadian version of the Digital Millennium Copyright Act [DMCA].