All that matters is getting things done.

Back in 2008, I authored a set of cloud- and SaaS-focused articles for the higher-education online magazine, The Greentree Gazette. In the four-part series, I explored the potential shift of a broad range of campus services from being locally provisioned to being offered in the cloud. A decade later, the prediction came true: cloud services are rolling across higher education in many forms, from learning platforms (Canvas, Blackboard, etc.) to enterprise applications (Workday, Salesforce, etc.), and transforming the campus digital ecosystem.

At that time, I hypothesized that “all that matters is getting things done,” which I still believe is fundamentally true. But since then, we’ve learned that what also matters is an increasing degree of organizational agility: the ability to anticipate, respond to, and adapt to the shifting consumer needs of students, faculty, customers and employees. That agility requires different skills, roles, collaborations and tools than it did a decade ago. The shift to the cloud is a transformative event.

Anyhow, I’ve republished here the series of articles, which had otherwise been lost to time (The Greentree Gazette is no longer online, but does partially surface within the Internet Archive). I hope you enjoy a look back at one person’s view, albeit a little dated, of what today might look like.


Clouds Settle on Campus

The following material was originally published on August 11, 2008, as a series of four articles for the online edition of “The Greentree Gazette”.

Part 1

How often do you think about the path your data takes from your computer to, say, Amazon or Wikipedia? Outside of those involved in networking and security, most people don’t give the route data takes a second thought; one types something into a browser and within a few seconds content is displayed on the screen. Behind the scenes, however, a huge number of transactions occur between the computer and network routers, domain name servers, firewalls, switches, and so on, until the request is received by the service at the other end, which in turn responds in a similar manner back to the requesting computer with an answer moments after one clicked the mouse.

Network architects and engineers often refer to the infrastructure that enables connections between individual computers and the systems and servers on the Internet as being part of the “network cloud.” Once you are linked to the cloud, how your information gets from point A to point B is largely irrelevant as long as it is transmitted and received safely. As with electricity and water, today we don’t think about the network and how it works until something goes wrong.
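To make those hidden steps a little more concrete, here is a minimal Python sketch of the two things a browser quietly does before any content appears: resolving a hostname through the domain name system, then fetching the page over HTTP. The hostname is purely illustrative, and the sketch assumes nothing beyond the Python standard library.

# Minimal sketch of the work hidden behind "type a URL, see a page".
# The hostname is illustrative; only the standard library is used.
import socket
import urllib.request

host = "www.example.com"

# Step 1: DNS resolution - ask the resolver which addresses serve this name.
addresses = {info[4][0] for info in socket.getaddrinfo(host, 443)}
print(host, "resolves to:", ", ".join(sorted(addresses)))

# Step 2: the HTTP request itself, carried over whatever path the network
# cloud chooses; the caller never sees the hops in between.
with urllib.request.urlopen("https://" + host + "/") as response:
    body = response.read()
    print("Status:", response.status, "- bytes received:", len(body))

Everything between the two print statements, the routers, name servers, firewalls and switches, stays out of sight, which is exactly the point of the cloud metaphor.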

In the 1990s, Ian Foster (University of Chicago) and Carl Kesselman (University of Southern California) published “The Grid: Blueprint for a New Computing Infrastructure,” in which they described a model where computation is shared over the Internet in a manner similar to power distribution: computational cycles, like electricity, should be available on the grid as a commodity for all to use. Like electrical distribution and its fabric of regional power grids, there are multiple computational grids that adhere to a core set of standards but vary greatly in access rules, scale, scope and size. Despite the variety, one thing has remained the same: grid-based computation is built upon a notion of shared and commoditized resources.

As one might expect, the scientific community and those involved in high-performance computing naturally gravitated toward the grid as a model for a new generation of problems. From high-energy physics to genomics, grid computing has played an important role in large-scale shared research. However, what if one could harness what the global scientific community has learned about grid computation and, rather than scale up, scale down, or for that matter, scale in any manner one wants? What if one didn’t care where servers were housed or who operated them, and just wanted to run the service, not the infrastructure? What if servers and data storage, like the network, operated in “the cloud”: shared, commoditized, on-demand, and available whenever one needed them? What if one could focus on service definition and delivery by pushing the technology infrastructure into the background?

Such is the promise of cloud computing, and it is happening today. If networking was the wispy cirrus of a foretold future and the grid the altocumulus, then many more clouds are headed our way, and my forecast is that they won’t stop at technology. Clouds are going to reshape the way we deliver services on campus.

Part 2

On a typical network diagram, the Internet is often represented as a big puffy cloud with many lines connecting to it. The idea behind the diagram is rather straightforward: a connection from one point to another will be made in a way such that the details of that connection are unimportant. As long as things run smoothly, data moves around, and work gets done, the cloud is all that one needs to know.

Not that long ago, application service providers (ASPs) were all the rage. Campuses and companies alike started questioning the need for the array of enterprise software running in their environments. Financial issues, limited staffing, expertise challenges, and other reasons drove some to shed applications that simply didn’t make sense to run locally but still had business reasons to exist. Many campuses elected to turn to ASPs to take care of tasks such as handling job applications or processing payroll. What makes the ASP model work is that it decouples how one handles a set of tasks from the technology required to process them. Posting jobs and processing checks are pretty much the same for everyone; how each campus handles the workflow around these steps is unique.

Although the use of ASPs has not gone without a few storms, many campuses have learned that it is sometimes less expensive to change local business processes than it is to change enterprise software. So what if one fully decoupled business process and workflow from the software? Could a campus begin to think of software like the network, namely something that runs in a cloud? Quite possibly, and one concept to consider is “software as a service” (SaaS).

The basic idea behind SaaS is simple: take a well-known and established business process, create an easy-to-use web application for it, host it somewhere on the Internet, and leverage economies of scale to make it available at an attractive price. The assumption is that organizations will adapt their business processes to the software because it would be too expensive to build something custom themselves. Basecamp, a project management service by 37 Signals (www.basecamphq.com), is a prime example: for a low monthly fee, one can have sophisticated web-based project management tools available in moments. No setup, no servers, and no local software to run. All one has to do is sign up, adjust to Basecamp’s approach to project management, and start using the software. If the team requires more from Basecamp, resources are available on demand.

Before one writes off changing a local business process as impossible, consider this: over 1 million people have signed up for Basecamp, from individuals in major corporations to those within my own institution. That represents a million adaptations to 37 Signals’ software, not a million customizations of a locally installed package.

What is interesting about Basecamp and so many applications like it is how software as a service seems to respond to the expanding need for tools that enable teams, groups, and communities to work together more efficiently with minimal investments in technology and time. The web browser is the delivery platform, the network cloud provides global connectivity, and SaaS enables collaboration anytime, anywhere over the Internet. Provided the tools match the need, technology gets out of the way so that work can get done. When things run smoothly, how the technology works is, well, unimportant.

Sounds a lot like another puffy cloud.

Part 3

Some have said that “clouds are nothing more than condensed vapor,” implying that cloud computing is more hype than reality. The silver lining in this analogy is that condensed vapor produces rain – something very tangible and important out of something quite ethereal.

Clouds can be thought of as providing general services that can be used in ways that are defined by users at the time of need. It is when these services are combined – or some might say, condensed – into an individual’s or group’s activities that they gain purpose and meaning. The meaning is derived by users, not software developers.

Imagine a project where an editorial team from across the country needs a simple database to keep track of articles submitted by authors. Recent tradition dictates that one would need to set up a web server and a database server, with custom programming, to deliver the application to the team. In the process, the team would have to select, procure and configure the server hardware, operating system, database, and web application environment. Depending on resources, it may take a couple of months to get the system up and running. The problem is that today people expect services to be available when they need them. Waiting weeks or months to get something operational is becoming less and less acceptable. Why? I blame it on that “condensed vapor.”

Alternatively, the editorial team could turn to cloud services from a provider such as Amazon. Amazon Web Services, or AWS (www.amazon.com/aws), is a pay-as-you-go program that provides resources within Amazon’s datacenters for others to use. Services include computation and server capacity, storage, and limited databases. Instead of spending time dealing with low-level technology decisions, the team could focus on creating its tool for its own use. Assuming a web programmer is available to the team, the application could be running within a few weeks, with only a minimal ongoing charge for the use of AWS. The base technology layer would live in the cloud, handled by Amazon.
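As a rough illustration of the pay-as-you-go idea, the sketch below stores the team’s article list as an object in Amazon’s S3 storage service using the boto3 library; the bucket and key names are hypothetical, and credentials are assumed to be configured in the environment.

# Rough sketch: keeping the editorial team's article list in Amazon S3.
# Assumes boto3 is installed and AWS credentials are already configured;
# the bucket and key names are hypothetical.
import json
import boto3

articles = [
    {"title": "Clouds Settle on Campus", "author": "J. Smith", "status": "submitted"},
]

s3 = boto3.client("s3")
s3.put_object(
    Bucket="editorial-team-articles",        # hypothetical bucket name
    Key="submissions/2008-08.json",
    Body=json.dumps(articles).encode("utf-8"),
    ContentType="application/json",
)

# Any team member with credentials and a network connection can read it back.
obj = s3.get_object(Bucket="editorial-team-articles", Key="submissions/2008-08.json")
print(json.loads(obj["Body"].read()))

The team pays only for what it stores and transfers; the servers behind the bucket are Amazon’s concern, which is the commodity framing described above.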

Let’s take this example one step further and say that the editorial team wants something now. Given that it is a simple database that would most likely live in a spreadsheet on someone’s desktop computer, the team could turn to Google Docs as an alternative. The spreadsheet service allows individuals from around the world to work on a common document and includes a number of collaboration features that would be difficult to develop from scratch. Best of all, it is available right away and at no cost.
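For the spreadsheet route, a sketch along these lines shows how the shared sheet can be treated as the team’s article tracker from code as well as from the browser; it assumes the gspread library, a service-account credentials file, and a hypothetical spreadsheet name with a header row.

# Sketch: treating a shared Google spreadsheet as the team's article tracker.
# Assumes the gspread library, a service-account credentials file, and a
# hypothetical spreadsheet whose first row is a header.
import gspread

client = gspread.service_account(filename="service_account.json")  # hypothetical credentials file
worksheet = client.open("Article Submissions").sheet1              # hypothetical spreadsheet name

# Each submission becomes a row that every editor sees immediately.
worksheet.append_row(["Clouds Settle on Campus", "J. Smith", "submitted", "2008-08-11"])

for record in worksheet.get_all_records():
    print(record)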

Are real people using these services, or is this just hype? Well, Amazon claims that over 300,000 developers use AWS, and I know of a couple of students at the University of Chicago who are using it to launch their Internet startup. As far as Google Docs is concerned, a project I’m co-leading, Project Bamboo (projectbamboo.org), is using both the spreadsheet and word processing capabilities to manage aspects of the project between Chicago and Berkeley. More broadly, we are exploring how services in the cloud can have a positive impact on arts and humanities research and teaching. Over 90 institutions and 350 faculty, librarians, administrators, technologists and researchers thought this was a good idea and joined the conversation in the last four months. Finally, I also know that a major humanities research system uses Google Docs for managing editorial changes in its environment. The editors are located around the globe, and the custom software depends on services in the cloud for collaboration.

The cloud isn’t vapor anymore; it is condensing and starting to rain. The question is, where will it go next?

Part 4

From networks and applications to computation and storage, technology clouds are condensing all around us. A key characteristic across all of these clouds is an emphasis on accomplishing work without concentrating on the technological and process details that underpin the software and infrastructure. This results in individuals selecting tools and using services that are most meaningful and important at the moment without much thought as to who might be providing them or where they might be located.

As clouds expand toward being the common technological paradigm that students, faculty and staff experience in their digital lives each day, pressure will inevitably be exerted on the face-to-face services a campus delivers to its community. The virtual life of on-demand and just-in-time services will collide with the physical world as people care less about who provides services and more about obtaining the right services at the moments when they need them. Delivering services based on the administrative structures that define the business activities of an institution will seem archaic to those who simply want to get things done.

In recent years, there has been a push on campuses toward “student-centric services” and “one-stop shopping” for academic and administrative support. Service arcades, learning commons, and research hubs are all examples of the physical manifestations of this trend. One notion behind this is that by bringing these services together into a single space – much like a shopping mall – access will be simplified and users, whether students, faculty or staff, will have an easier time discovering and using the resources already available to them.

Co-locating functions for a particular community within a single environment is often seen as a single, terminal step, but really it is just the first part of an ongoing transformation. The next part is to rethink the delivery of services and condense them into what people are coming to expect: something akin to a cloud, where they don’t have to concern themselves with the inner workings of a set of activities to get things done.

Transformation on this level is not and will not be easy, nor will it occur quickly. However, as campuses further augment face-to-face services with self-service and online counterparts, the pace of change will quicken. Student-centric online experiences are lowering, if not eliminating, the administrative barriers that exist in the physical world. Learning management systems, for example, trivialize the complex interactions among registrar and enrollment management services, identification and authorization infrastructures, content access and delivery systems, and managed collaboration services. From an online perspective, where these services are blended together, the administrative distinctions that exist in the real world seem out of step with the online user experience.

Condensing services behind common experiences and cloud-like models, both physical and virtual, will raise tough questions about the structures that define and support those services. Moving beyond the notion of co-location and toward blended delivery requires rethinking responsibilities and resources, and may involve new partnerships, transferring control, or even letting go of service or brand identities. Blended delivery, like technology clouds, should push layered processes and structural complexity into the background and provide a collection of easy-to-understand services that the campus community can mold, mix and mash to meet their needs when most appropriate to them.

Regardless of the definition or domain, to the faculty, students, and staff who live in this cloud-filled world, all that matters is getting things done.
