Lab Tour - Steward Observatory Mirror Lab
Most readers know that Tucson is home to one of the largest collections of world-renowned experts on IT storage. But what you may not know is that Tucson is also home to experts in optical sciences. This week, I was part of a delegation of IBMers invited on a tour of the Steward Observatory Mirror Lab [SOML].

SOML was built in 1990 underneath the football stadium at the University of Arizona. Why under the stadium? Their motivation was [Chicago Pile-1], the world's first nuclear reactor, built by Enrico Fermi under the football stadium at the University of Chicago. We got to see all aspects of the process used to develop the huge mirrors for large telescopes.

SOML did not always offer lab tours. Back in 1993, two dozen members of the Earth First! terrorist organization [attacked the lab with hammers and monkey wrenches to destroy and dismantle the mirror lab]. Now, security is tight to ensure no one damages these mirrors, some of which fetch as much as $30 million.

At other mirror labs, mirrors start as a large, heavy, flat piece of glass that is then ground and polished to the correct parabolic curve. SOML created a new process that works a lot better, similar to making a [Pineapple Upside Down Cake]. For those who are not familiar with this cake, you arrange sliced pineapple rings on the bottom of the baking dish, pour in the liquid cake batter so that it fills in and around the pineapple slices, then bake.

The first step is creating a base of 1,690 hexagonal tubes made of aluminum silicate. These are like the pineapple rings in the cake. The tubes are bolted to a baking dish that is 8.4 meters wide. These tubes form the base of the [parabolic shape] that focuses starlight to a small focal point. The tubes are spaced about an inch apart. The aluminum silicate feels like clay.

Once the base is built, chunks of glass are placed on the surface. Rather than pouring on the cake mix of molten glass, these chunks will be melted in place. This isn't normal glass, but a special borosilicate glass, made by the [Ohara Corporation] in Japan, that does not expand or contract much during changes in temperature.

The oven is then lowered onto the baking dish. Once the temperature reaches 700 degrees, the entire system is rotated at 7 RPM. This allows the melted glass to take its parabolic shape through [centrifugal force]. The people who run the oven are called "oven pilots", and they monitor the entire process to make sure nothing goes wrong. This particular mirror is one of the two that will go into the [Large Binocular Telescope]. The mirror will be 36 inches thick at the edges, and 18 inches in the middle. If the glass cools down too quickly, it may crack or form crystals, so instead the oven is kept in place and the temperature is lowered slowly over the course of a few months. This is called annealing.

Once a mirror has annealed, 24 suction cups are glued to the top surface to pull the mirror out of the baking dish. It is then tipped on its side so that all the bolts can be removed and the hexagonal tubes washed out, leaving behind a honeycomb effect on the bottom of the mirror. This means the mirror is 80 percent air, making it strong and lightweight.

The next step is grinding the surface with diamonds. In most cases, the process of spinning creates the correct shape, so little grinding is required.
However, for this mirror here, destined for the Large Synoptic Survey Telescope [LSST], about five tons of glass will be ground out of the center. This mirror will actually have two parabolic curves: the outer curve is shallow, and the inner curve is deep. This will allow the LSST to survey a wide area of space at a time.

Once the glass is ground to the right shape, it will be polished with cerium oxide, commonly known as jeweler's rouge. How smooth does it have to be? If this mirror were the size of the United States, there would be no bump higher than 2 inches tall!

Most mirrors are symmetrical, so the polishing can be done on a spinning platform, but this mirror is not. The Large Magellan Telescope will consist of seven mirrors: one in the middle that is symmetrical, surrounded by six other mirrors that continue the parabolic shape in each direction. This is one of the outer mirrors, which means that each part of the polishing process will be controlled by computers to get exactly the curve required.

Here is a small scaled-down model of the Magellan Telescope. Each of the seven mirrors will be 8.4 meters wide. At this point, one person asked why all the mirrors were 8.4 meters wide. I joked that this was the size of the oven! It reminded me of [the story where a newlywed had to ask her grandmother why she cut the ends off the pot roast]. The actual reason was that the openings between the posts of the football stadium are 8.5 meters wide, so any mirror made inside the lab larger than that could not be removed easily for transportation.

The LMT will be installed on [Cerro Tololo] in Chile, where my father worked earlier in his career. Why Chile? Observatories need high altitude, a dry climate and clear skies. That is why Arizona is home to many observatories, including Kitt Peak National Observatory and the Vatican Observatory on Mount Graham. Cerro Tololo in Chile is close to the equator and meets these requirements. Once operational in 2020, it will gather 6 TB of images every evening. That got all of the IBMers on the tour very excited!

To verify the polishing is complete, the mirror is placed on three red stands and measured with a laser. Once the measurements are complete, the surface will be coated with aluminum to provide the reflective surface. You can't just paint the surface with a roller! Instead, the aluminum is vaporized and allowed to land on the surface of the mirror evenly, in a layer that is only three molecules thick. There is more aluminum in a standard-size beer can than on the surface of one of these 8.4 meter mirrors!

So that was the tour. It took almost 2 hours. If you are ever in Tucson, consider contacting the SOML and arranging a tour for yourself. There is no other mirror lab like it!
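For the curious, the spin-casting and the smoothness claim both lend themselves to a back-of-the-envelope check. The short Python sketch below is my own illustration, not anything SOML provided: it uses the standard rotating-liquid paraboloid relation, focal length f = g / (2ω²), to estimate the focal length produced at 7 RPM, and it scales the "no bump taller than 2 inches on a US-sized mirror" analogy down to a real 8.4 meter blank. The 4,500 km figure for the width of the United States is my assumption.

```python
import math

# Spin casting: a liquid rotating at angular velocity w settles into a
# paraboloid whose focal length is f = g / (2 * w**2).
g = 9.81                      # gravity, m/s^2
rpm = 7.0                     # spin rate quoted on the tour
w = rpm * 2 * math.pi / 60    # convert RPM to rad/s
focal_length = g / (2 * w**2)
print(f"Focal length at {rpm} RPM: {focal_length:.1f} m")   # roughly 9 m

# Smoothness: scale "no bump taller than 2 inches if the mirror were the
# size of the United States" down to the real 8.4 m mirror.
# (Assumption: the continental US is about 4,500 km across.)
us_width_m = 4_500_000
mirror_width_m = 8.4
bump_height_m = 2 * 0.0254    # 2 inches, in meters
surface_error_m = bump_height_m * mirror_width_m / us_width_m
print(f"Implied surface tolerance: {surface_error_m * 1e9:.0f} nanometers")
```

Under these assumptions, the focal length works out to roughly nine meters, in the same ballpark as an 8.4 meter, roughly f/1 class primary, and the smoothness analogy implies a surface tolerance on the order of a hundred nanometers, well below the wavelength of visible light.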
Tags: kitt+peak lmt lbt ibm cerro+tololo lsst soml
IBM Pulse2012 in Las Vegas
This week is the IBM Pulse2012 conference in Las Vegas. I am not there, for medical reasons this time. While my colleagues will be spending this week sipping Margaritas and enjoying the music in between inspiring technical sessions, I will be flat on my back, getting all my nutrients from a tube connected to my arm, listening to the hospital equivalent of [Muzak].

I found a great write-up from fellow blogger Jason Buffington from ESG. Here are some excerpts from his post [IBM Pulse 2012 — Day One Keynote]:

"IBM Pulse 2012's opening keynote talked about the realities of cloud as a delivery model – without the 'private-', or the 'public-', or even the quotes or capitalization of "The Cloud." It was IBM's perspective on what IBM knows better than most, how to deliver enterprise IT services that map to strategic business goals."

"In contrast to talking about 'data-center/cloud' stuff and then later about 'con"

"...cloud-based delivery was 'more than just virtualization'"

"...the US Dept of Labor stating that jobs related to technology are forecast to be among the fastest growing segment thru 2018."

Hopefully, this post will hold you over until I regain consciousness.

Tags: mobility ibm esg cloud jason+buffington
IBM Pulse2012 Video Library Now Available
Did you miss IBM's Pulse 2012 conference? So did I. Last month, I told you all to [mark your calendars], but wasn't sure if I would be there myself or not. I was invited to attend Pulse this year, but instead had to go to the hospital for surgery and spend the week recovering. I thought I made it clear in my last post that I would be spending [the week on my back, with a tube in my arm], but apparently, people missed that subtlety. The tube was actually connected to the back of my left hand, and I was tempted to take pictures of the entire process, but decided not to, since my gown had no pockets to hold my camera. Perhaps it is better that it went undocumented. The less you see of the inner workings of a hospital, as a patient, the better.

The whole thing was quite a blur. Despite a few mishaps, I managed to survive the week. Many thanks to Hilda, Dina, Crystal, Marcie, Mike, Joe, Ryan, Sue, Debra, Donna, Modrechai, and the rest of the fine medical staff at St. Joseph's for their hospitality! And of course, many thanks to Mo, my parents and sisters for helping me through the recovery!

Fortunately, for those like me who were unable to go to Las Vegas last week, there is the [IBM Pulse2012 Video Library] with highlights of the keynotes and other sessions during the week. Enjoy!
Tags: pulse2012 ibm
Is this what HDS tells our mainframe clients?
Five years ago, I sprayed coffee all over my screen from something I read in a blog post from fellow blogger Hu Yoshida of HDS. You can read what caused my reaction in my now infamous post [Hu Yoshida should know better]. Subsequently, over the years, I have disagreed with Hu on a variety of topics, as documented in my 2010 blog post [Hu Yoshida Does It Again]. (Apparently, I am not alone, as the process of spraying one's coffee onto one's computer screen while reading other blog posts has been referred to as "Pulling a Tony" or "Doing a Tony" by other bloggers!)

Fortunately, my IBM colleague David Sacks doesn't drink coffee. Last month, David noticed that Hu had posted a graph in a recent blog entry titled [Additional Storage Performance Efficiencies for Mainframes], comparing the performance of HDS's Virtual Storage Platform (VSP) to IBM's DS8000.

For those not familiar with disk performance graphs, flatter is better: lower response time and higher IOPS are always desired. This graph implies that the HDS disk system is astonishingly faster than IBM's DS8000 series disk system. Certainly, the HDS VSP qualifies as a member of the elite [Super High-End club] with impressive SPC benchmark numbers, and it is generally recognized as a device that works in IBM mainframe environments. But this new comparison graph is just ridiculous!

(Note: While SPC benchmarks are useful for making purchase decisions, different disk systems respond differently to different workloads. As the former lead architect of DFSMS for z/OS, I am often brought in to consult on mainframe performance issues in complex situations. Several times, we have fixed performance problems for our mainframe clients by replacing their HDS systems with IBM DS8000 series!)

Since Hu's blog entry contained very little information about the performance test used to generate the graph, David submitted a comment directly to Hu's blog asking a few simple questions to help IBM and Hu's readers determine whether the test was fair. His comment opened simply: "Hello, Hu,"

Unlike my blog on IBM, HDS bloggers like Hu are allowed to reject or hold back comments before they appear on a post. We were disappointed that HDS never posted David's comment nor responded to it. That certainly raises questions about the quality of the comparison. So, perhaps this is yet another case of [Hitachi Math], a phrase coined by fellow blogger Barry Burke from EMC back in 2007 in reference to outlandish HDS claims. My earliest mention of it was in my blog post [Not letting the Wookie Win].

By the way, since the test was about z/OS Extended Address Volumes (EAV), it is worth mentioning that IBM's DS8700 and DS8800 support 3390 volume capacities up to 1 TB each, while the HDS VSP is limited to only 223 GB per volume. Larger volume capacities help support ease of growth and reduce the number of volumes storage administrators need to manage; that's just one example of how the DS8000 series continues to provide the best storage system support for z/OS environments.

Personally, I am all for running both IBM and HDS boxes side-by-side and publishing the methodology, the workload characteristics, the configuration details, and the results. Sunshine is always the best disinfectant!
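For readers who want some intuition for why response-time-versus-IOPS curves stay flat and then shoot upward as a box runs out of headroom, a simple open queueing model shows the general shape. The Python sketch below is my own toy illustration using an M/M/1 approximation; it is not the benchmark behind Hu's graph, and the 0.5 ms service time is an arbitrary assumption, not a measured value for either the VSP or the DS8000.

```python
# Toy illustration of why disk response-time curves hockey-stick upward
# as a system approaches its maximum throughput (simple M/M/1 queue).
# The 0.5 ms service time is an arbitrary assumption for this example.

def response_time_ms(iops, service_time_ms=0.5):
    """Average response time for an M/M/1 queue: R = S / (1 - U)."""
    max_iops = 1000.0 / service_time_ms          # capacity at 100% utilization
    utilization = iops / max_iops
    if utilization >= 1.0:
        return float("inf")                      # past saturation, the queue grows without bound
    return service_time_ms / (1.0 - utilization)

for iops in (200, 500, 1000, 1500, 1800, 1900, 1950):
    print(f"{iops:>5} IOPS -> {response_time_ms(iops):6.2f} ms")
```

The flatter a real system's curve stays, the more headroom it has at that workload, which is exactly why comparing two systems fairly requires knowing the workload characteristics and configuration details, the very information missing from Hu's post.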
Tags: eav mainframe ds8800 ds8000 vsp hds usp ibm hu+yoshida
Cover-Up is Worse than the Original Crime
In my last blog post [Is this what HDS tells our mainframe clients?], I poked fun at Hu Yoshida's blog post that contained a graphic with questionable results. Suddenly, the blog post disappeared altogether. Poof! Gone! Just so that I am not accused of taking a graph out of context, here is Hu's original post, in its entirety:
At this point, you might be wondering: "If Hu Yoshida deleted his blog post, how did Tony get a copy of it? Did Tony save a copy of the HTML source before Hu deleted it?" No. I should have, in retrospect, in case lawyers got involved. It turns out that deleting a blog post does not clear the various copies held in RSS feed reader caches. I was able to dig out the previous version from the vast Google repository. (Many thanks to my friends at Google!!!) The graph itself, which was hosted separately, has been deleted, but it was just taken from slide 10 of the HDS presentation [How to Apply the Latest Advances in Hitachi Mainframe Storage], so it was easy to recreate.

(Lesson to all bloggers: If you write a blog post and later decide to remove it for whatever legal, ethical or moral reasons, it is better to edit the post to remove the offending content and add a note that the post was edited, and why. Shrinking a 700-word article down to 'Sorry folks - I decided to remove this blog post because...' would do the trick. This new edited version will then slowly propagate across all of the RSS feed reader caches, eliminating most traces of the original. Of course, the original may have been saved by any number of your readers, but at least if you have an edited version, it can serve as the official or canonical version.)

Perhaps there was a reason why HDS did not want to make public the FUD its sales team uses in private meetings with IBM mainframe clients. Whatever it was, this appears to be another case where the cover-up is worse than the original crime!
Tags: hu+yoshida vsp eav hds