Data Science, Machine Learning & API / SOA: Insights and Best Practices
Ali_Arsanjani | Tags: virtual_agent intent-based_architecture watson_conversation_servi... chatbot | 2,021 views
Chatbots, or virtual agents, are rapidly ramping up to augment human-computer interaction, starting with self-help and gradually moving up the knowledge-management chain to scenarios where an agent (such as a technical-support call-center agent) uses a specialized in-house chatbot for agent assist.
Some of these technologies are nascent, others more mature, such as Watson Virtual Agent.
The first thing a virtual assistant does is detect your intent; this is accomplished using Natural Language Understanding and Natural Language Classification. The process starts with recognizing the goals of the interacting personas and training on the most commonly recurring calls, requests, chat transcripts, and so on.
What we are talking about here is more than search or text analytics: it is about intent-based understanding of content.
Search is largely about federation of search indices, content-level security, content display facets, filtering, community and social extensions, and connectors to ERP systems, databases, and so on.
Content analytics, on the other hand, is about finding relevant words in text and counting occurrences, extracting entities and associations, and integrating with analytics and prediction systems.
Intent-based architectures let you understand a query down to the user's intent, execute interactive query refinement so results are actionable, generate recommendations, interactively access data with implied meaning and relationships, and establish word/phrase proximity and document relationships.
All these capabilities are predicated upon the discovery of intent: intents that can be reused across personas in the interaction dialog.
An intent represents the purpose of a user's input. Each capability uses a natural language classifier that can evaluate a user utterance and find a predefined intent if it is present.
For example, a recommended practice is to review the intents that are identified most often just before users request to speak to a human agent. Investigating the causes of escalations can help you prioritize where to focus future training efforts. You can determine whether user inquiries are being misinterpreted, whether your service is missing a common intent altogether, whether the responses that are associated with an intent need improving, and so on.
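The idea of matching an utterance to a predefined intent can be sketched in a few lines. The intent names and keywords below are invented for illustration; a production system such as Watson Virtual Agent uses a trained natural language classifier, not keyword overlap.

```python
# Minimal illustration of intent detection: score each predefined intent by
# keyword overlap with the user's utterance. Intents/keywords are made up.
INTENTS = {
    "reset_password": {"reset", "password", "forgot", "login"},
    "billing_inquiry": {"bill", "charge", "invoice", "payment"},
    "speak_to_agent": {"human", "agent", "representative", "person"},
}

def detect_intent(utterance, threshold=1):
    """Return the best-matching intent name, or None if nothing clears the threshold."""
    tokens = set(utterance.lower().split())
    scores = {name: len(tokens & keywords) for name, keywords in INTENTS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] >= threshold else None

print(detect_intent("I forgot my password and cannot login"))  # reset_password
print(detect_intent("please let me talk to a human agent"))    # speak_to_agent
```

Returning None when no intent clears the threshold is exactly the signal the review practice above relies on: those utterances are candidates for new intents or additional training.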
Ali_Arsanjani | Tags: cognitive_systems traceable_ml governance traceability cognitive_system_governan... | 2,882 views
In the age of voluminous unstructured data, which can be curated manually, semi-automatically, or eventually with a fuller degree of automation plus approval processes and governance in place, it is more necessary than ever to secure not only the data but also the process around it: how the data is curated and prepared, how the ML algorithm(s) are selected, and how the resulting data sets are used to train those algorithms. Curation here spans data collection and aggregation sources (lineage) as well as data preparation, including redundancy removal, null-value replacement, range consolidation, homogeneity of content, and so on.
Thus the age of Traceable Machine Learning Governance is born. Standards bodies should embark on consolidating views across vendors, consumers, services organizations, data providers, curators, and other stakeholders in the ML training life-cycle.
The problem of ML governance is not only one of data governance as data passes through traceable steps until it is ingested into an ML system, but also of the process around collection, curation, algorithm selection, and training-set selection and partitioning.
So, here is our call to action:
1. Standards bodies need to consolidate a standard around Traceable ML Governance to reduce the risk of fake training: bogus data used to train a "paper tiger" ML that has no substance.
2. Corporations should give serious consideration, during the ML and cognitive computing training life-cycle, to securing a traceable ML and cognitive governance process. This process can begin in a lightweight fashion, but it should address the legal implications arising from the use and administration of cognitive systems that use machine learning (ML) to make recommendations, provide insights, and generate summaries, reports, news, reviews, and so on.
3. Furthermore, as the global impact of well-trained cognitive systems (ML systems, AIs) becomes more and more tangible, we will want demonstrable traceability of how these systems were trained and retrained, where the human in the loop influenced the AI and how that influence was brought to bear (SME input, curation, etc.) on the source data, where data was sourced, and how it was curated (whether by humans or by another ML or cognitive system). Traceability in the cognitive-system life-cycle, or the ML-training life-cycle, will therefore play a cardinal role in adoption, trust, and the veracity of recommendations.
Claims that deep learning is beyond governance or governability are bogus, founded only in the illusion that since the hidden layers of a neural network and their interactions are "dark matter," we cannot govern their inputs and outputs. In fact we can and should secure the training process, as well as the input and output configurations of ML systems, in logs that are themselves inspected by ML systems with a human-in-the-loop approval process in place.
4. Organizations and standards bodies should consider using blockchain technologies to secure and govern the process chain within the ML-training or cognitive-system life-cycle in a demonstrably traceable manner, one that with little doubt will gradually find its way into local and global legislation.
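A minimal sketch of what such traceable, blockchain-style governance could look like is a hash-chained audit log over the training life-cycle steps; the step names and payload fields below are hypothetical.

```python
import hashlib
import json
import time

def record_lineage(step, payload, log):
    """Append a tamper-evident entry: each record embeds the hash of the
    previous entry, forming a simple hash chain over the ML life-cycle."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    entry = {"step": step, "payload": payload, "prev": prev_hash, "ts": time.time()}
    entry["hash"] = hashlib.sha256(
        json.dumps({k: entry[k] for k in ("step", "payload", "prev")},
                   sort_keys=True).encode()
    ).hexdigest()
    log.append(entry)
    return entry

log = []
record_lineage("collect", {"source": "crm_export.csv", "rows": 120000}, log)
record_lineage("prepare", {"nulls_replaced": True, "dedup": True}, log)
record_lineage("train", {"algorithm": "gradient_boosting", "split": 0.8}, log)

# Verify the chain: each entry must point at the hash of its predecessor.
ok = all(log[i]["prev"] == log[i - 1]["hash"] for i in range(1, len(log)))
print(ok)  # True
```

A real governance system would anchor these hashes in an external ledger and add human-in-the-loop approvals, but the principle of a traceable process chain is the same.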
Ali_Arsanjani | Tags: longitudinal time-sequenced machinelearning rnn lstm ml | 2,562 views
Traditional artificial neural networks (ANNs) have a memory problem: they cannot recall their previous reasoning about events to inform new ones.
If you have time-sequenced, i.e., longitudinal, data, you might want to consider a Recurrent Neural Network (RNN), which allows information to flow from the past to the present through a feedback loop, much as our senses constantly provide feedback so we can reevaluate an unfolding situation.
Events are a connected series of vectors (or tensors) that bring past experience to bear on the next step of the deliberation.
In recent years, RNNs have enjoyed an "unreasonable effectiveness," to borrow a phrase from Andrej Karpathy of Stanford University. Applications include speech recognition and categorization, language modeling and translation, and image captioning and recognition at multiple points in a sequence of images.
So RNNs add a feedback loop to the neurons, allowing information to flow without resetting or restarting everything each time they are executed.
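That feedback loop can be shown with a scalar toy example: the new hidden state mixes the current input with the previous hidden state. The weights below are made up; real RNNs operate on vectors and matrices and learn their weights from data.

```python
import math

def rnn_step(x_t, h_prev, w_x, w_h, b):
    """One recurrent step: the previous hidden state h_prev feeds back into
    the computation of the new hidden state (the RNN's 'memory')."""
    return math.tanh(w_x * x_t + w_h * h_prev + b)

# Process a time-sequenced input; each step carries forward what came before.
h = 0.0
for x in [1.0, 0.5, -0.3, 0.8]:
    h = rnn_step(x, h, w_x=0.7, w_h=0.4, b=0.0)
    print(round(h, 3))
```

Because `h` is never reset inside the loop, information from earlier inputs influences every later step, which is exactly what feed-forward ANNs lack.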
Long Short-Term Memory (LSTM) networks are a variation of RNNs that have been most effective at processing this kind of longitudinal data. We will discuss them further in a later entry.
Ali_Arsanjani | Tags: bigdata mlinfused combine_structured_unstru... machinelearning datascience | 2,735 views
Context and problem. You have terabytes of structured data and petabytes of unstructured data that are not quite visible; many areas are dark and yield little valuable information. You have lots of dark data but not a whole lot of insight. How can you go beyond summaries, means, and trend graphs to gain insight you can count on, associate it with specific value-adding business activities, and make strategic business decisions based on it? How can you shine light on all the dark data sitting in those object stores, relational databases, data warehouses, content stores, voice, images, text, and so on?
Considerations. Leveraging data through analytical processing is not just about processing the structured data that often resides in the organization's many systems of record, transaction-processing systems, or even online analytical processing databases. It is rather about the ability to combine unstructured data coming either from IoT devices, which produce semi-structured data, or from content-based systems (think Enterprise Content Management, ECM) that contain images, attachments, documents, and free-format text.
Solution path. Extracting data from unstructured content or text from images, transforming semi- and unstructured content into structured data, and storing it in a data lake (and possibly in other case-management systems) will let you start gathering the raw data you need to begin curation and data wrangling.
Then you can apply data science and machine learning to the curated data lake to gain insights, and incorporate the conditional actions you wish to take into a BPM or case-management solution. Alternatively, you can wire together a set of microservices via APIs exposed by software-as-a-service vendors to evaluate the insights and, based on certain thresholds, invoke a workflow or display a result for human knowledge workers to act on.
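As a rough sketch of that solution path, the following invented example extracts structured fields from free text and combines them with a structured record to drive a threshold-based routing decision. The field names, regex, and threshold are assumptions for illustration, not a real product API.

```python
import re

def extract_fields(document_text):
    """Turn free text into structured fields: a tiny stand-in for real
    text-extraction / ECM tooling."""
    amount = re.search(r"\$([\d,]+)", document_text)
    return {
        "claim_amount": int(amount.group(1).replace(",", "")) if amount else None,
        "mentions_urgent": "urgent" in document_text.lower(),
    }

def route(structured_row, extracted, threshold=10_000):
    """Combine structured and extracted data; crossing a threshold invokes
    a workflow, otherwise fall through to human review or auto-approval."""
    if extracted["claim_amount"] and extracted["claim_amount"] > threshold:
        return "escalate_to_case_management"
    if extracted["mentions_urgent"] or structured_row["prior_claims"] > 3:
        return "human_review"
    return "auto_approve"

row = {"customer_id": 42, "prior_claims": 1}
doc = "URGENT: water damage claim for $12,500 at the insured property."
print(route(row, extract_fields(doc)))  # escalate_to_case_management
```

The same shape scales up: replace the regex with real extraction services, the dict with a data-lake row, and the return value with a BPM workflow invocation.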
Ali_Arsanjani | Tags: cognitive_systems ai ethics ai_ethics machine_learning data_curation augmented_intelligence | 4,234 views
Most major businesses are embarking on augmenting their analytics capabilities with machine learning. They are either demonstrating or planning projects that showcase how machine learning can be applied to their applications (this is called #MLInfused). Enterprise software development firms are attempting to bring impactful predictive capabilities into their suites of products; IBM Watson, through either Bluemix APIs or IBM ML on premises, offers such capabilities.
When a mortgage application is submitted, ultimately human underwriters make the decision based on a set of rules. Although we cannot claim that the process is completely understandable, we can probably hold a human plus the regulations accountable in an audit, court case, or compliance situation to demonstrate lack of unjust discrimination or bias. Enter machine learning. The data we use to train such a system, the humans who curate and annotate that data, and the process they went through (biased or unbiased) will have a significant if not cardinal impact on the trained ML algorithm and thus its recommendations and output.
To prepare ourselves as a society of scientists, engineers, businesspersons, regulators, and others for a world where such processes and data will have major impact on individuals and society, we need a set of rules and regulations. They will come in due time, forced by errors, omissions, and uproar. But to preempt that, we suggest for consideration some "laws," or best practices, around machine learning for cognitive systems.
1. A cognitive system will not be trained on dark curated data. Yes, the hidden layers of neural networks may be a black box, but the data itself should not be "dark": its sourcing, inputs, annotations, training workflow, and outcomes should be transparent. Training, testing, and validation data sets should be traceable, white-boxed, and accessible for enterprise governance and compliance.
2. The data curation process will be transparent. Ends do not justify means: transparency of the data curation process for machine learning is paramount. This means traceability and governance around where data is sourced, who curated and annotated it, and who verified the annotations.
3. Cognitive-system recommendations must provide traceable justification. Outcomes must be coupled with references explaining why the system made a given decision or recommendation.
4. Where human health is at stake, cognitive systems with relevant but different training backgrounds will cross-check each other before making a recommendation to a human expert.
These practices, or initial variations of them, should be considered part of an overall governance process for training and curating datasets for the machine learning of cognitive systems.
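One way to make the traceable-justification practice concrete is to require every recommendation object to carry its own evidence references. The shape below is purely illustrative; the rule names, dataset ids, and model version are invented.

```python
from dataclasses import dataclass, field

@dataclass
class Recommendation:
    """A recommendation that carries its justification trail: references to
    the rules, data rows, and model version behind the decision."""
    decision: str
    confidence: float
    evidence: list = field(default_factory=list)

rec = Recommendation(
    decision="approve_mortgage",
    confidence=0.87,
    evidence=[
        "rule:debt_to_income<0.36",      # hypothetical underwriting rule
        "dataset:apps_2017Q1#row9912",   # hypothetical training-data reference
        "model:gbm-v3.2",                # hypothetical model version
    ],
)

# Governance hook: a recommendation without evidence should never ship.
assert rec.evidence, "a recommendation without justification must be rejected"
print(rec.decision, len(rec.evidence))  # approve_mortgage 3
```

In an audit or compliance situation, those references are what let a human walk the chain back from an outcome to its inputs.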
Ali_Arsanjani | Tags: agile governance api microservices soa | 1 comment | 3,463 views
Fred Brooks taught us in The Mythical Man-Month that if a project is going off track and running behind schedule, adding people will only make things worse: the ramp-up time to orient and on-board developers reaches diminishing returns. We have also seen this countless times on projects we have been involved in.
But where is the root cause? Not what, but where. Say you have a multi-tier architecture with a business-logic tier or service-component layer, the tier that implements the server-side functionality. The backend logic can often be represented as some form of object model, or class diagram. This object model is where the problem is. Class diagrams, or object models, represent the designs that implement services in the services layer, which indeed do decrease complexity and risk, decouple systems, and make life easier for everyone.
When developers are onboarded they have to learn the complex, convoluted backend dependencies of the business-logic tier; essentially they need to digest the object model, the domain object model, the class diagram (or whatever other representation or name you choose). Most of the time these dependencies are unnecessary, and ramp-up time increases because developers feel they have to digest so much of what everyone else is doing.
So partition the object model, the domain model, into a set of smaller models (each with perhaps as close as you can get to 5-9 main classes). These partitions of the domain model provide a natural separation of concerns and allow developers to ramp up much faster.
This has nothing to do with building microservices or small-grained services. In fact, if you focus only on smaller-grained services you will fall into the service-proliferation trap.
It's about building tiny applications, not tiny services. Tiny applications can have medium-grained or even one or more larger-grained services.
The success of microapplications lies not in the granularity of the services they are composed of but in the size and complexity of their underlying domain model, which focuses on only one functional area.
Remember: tiny domain models result in small code bases. Large code bases overwhelm developers, slow their development environments, increase dependencies and complexity, and make integration even harder.
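A trivial check of the "tiny domain model" heuristic might look like this; the partition and class names are invented for illustration.

```python
# Sketch: verify that each partition of a domain model stays 'tiny'
# (roughly 5-9 main classes, per the heuristic above).
domain_partitions = {
    "ordering": ["Order", "OrderLine", "Discount", "Shipment", "Invoice"],
    "catalog": ["Product", "Category", "Price", "Inventory"],
    "customer": ["Customer", "Address", "LoyaltyAccount"],
}

MAX_CLASSES = 9  # upper bound of the 5-9 guideline

oversized = {
    name: len(classes)
    for name, classes in domain_partitions.items()
    if len(classes) > MAX_CLASSES
}
print("all partitions tiny:", not oversized)  # all partitions tiny: True
```

Each partition above maps to one functional area and can be handed to one small team, which is the separation of concerns the post argues for.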
Ali_Arsanjani | 3,398 views
SOA (service-oriented architecture) evolved to reflect our deepening understanding of separating interface from implementation from deployment, not just programmatically but architecturally, and in a business-impactful fashion. It introduced a new layer in the software application architecture focused on, well, just services, or really service descriptions, service contracts. Interfaces were already separating interface from implementation in the object-oriented world, which we take for granted today, but this was limited to mostly a programming paradigm.
SOA elevated this separation of concerns between interfaces/contracts, implementations, and deployments higher in the food chain: to the level of an architectural construct.
As we were compelled to move implementations and deployment locations off premises, in view of the savings, flexibility (elasticity), and consolidating power of the cloud, we moved to creating or using software as a service. Software as a service, or SaaS, is primarily a software licensing and delivery model that has grown out of the SOA world; alternatively, organizations consolidate such services in a private cloud. Software is licensed on a subscription or "pay as you go" basis and is hosted, often as a managed service, in a public, private, or, most often, hosted scenario. Platform, Infrastructure, and other XaaS kinds of software creatures started evolving out of the cosmic goo of the ubiquitous and elastic cloud model.
SOA has morphed into RESTful APIs as the implementation or realization mechanism of choice. When people decide to use whatever underlying technology, programming language, or framework they wish and deploy little applications that are pretty much standalone, they tend to use the term microservices. This is often a misnomer, since the granularity may historically have started with fine-grained services but has since gained stability and a more balanced, medium-grained service stature.
So how do we apply SOA principles to build cloud-ready applications? We build APIs. Here are some tried and tested recommended practices for successful API development.
API best practice 1: Use tiny object models for design. SOA implemented not with one huge underlying object model but with a partitioned set of smaller object models in its design phases, smaller models you can divvy up and feed to smaller, independently functioning teams, has proven particularly useful and is an adopted best practice. What is tiny? 7 ± 2 main classes.
API best practice 2: Each team manages its own service portfolio. Plan the services you need in your functional area. Maintain a categorized list of services that may break down into smaller-grained services. Try to keep them as stateless as possible.
API best practice 3: Each team owns a functional area. When you break up the design object model into tiny object models with minimal dependencies, each part better aligns to an area of the business: a functional area. These can be different departments or finer-grained divisions such as a shopping cart, a product catalog, an order, and so on.
API best practice 4: Each team deploys independently. This keeps teams off each other's critical paths and lets them test and run relatively independently. There will still be necessary semantic integration, dependencies, and connections, for example a single sign-on security or session token that may need to be passed.
API best practice 5: Each team builds the finest-grained APIs practical. Build RESTful APIs for the leaf nodes and medium-grained services in your service portfolio.
API best practice 6: Have an integration team that governs the overall service portfolio and all things integration. Allow API access to the database each team requested (fields, tables and all, or go NoSQL if you wish). The integration team monitors and manages the dependencies between the teams and the functional areas.
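To illustrate the stateless, functional-area-scoped APIs these practices describe, here is a toy router for a hypothetical product-catalog area. No real framework is used; every request carries everything it needs, and no session state lives server-side.

```python
# Stand-in for a RESTful API owned by the 'product catalog' functional area.
# SKUs, routes, and payload shapes are invented for illustration.
CATALOG = {"sku-1": {"name": "widget", "price": 9.99}}

def handle(method, path, body=None):
    """Tiny stateless router: (status_code, payload) per request, no sessions."""
    parts = path.strip("/").split("/")
    if method == "GET" and parts[0] == "products" and len(parts) == 2:
        item = CATALOG.get(parts[1])
        return (200, item) if item else (404, None)
    if method == "POST" and parts == ["products"]:
        CATALOG[body["sku"]] = {"name": body["name"], "price": body["price"]}
        return (201, body["sku"])
    return (405, None)

print(handle("GET", "/products/sku-1"))  # (200, {'name': 'widget', 'price': 9.99})
```

Because the handler keeps no per-client state, any instance of this service can answer any request, which is what makes independent deployment and scaling straightforward.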
Ali_Arsanjani | Tags: bpm best-practices design_guide patterns | 5,411 views
Hot off the press. Hot from the BPM Oven. http://www.redbooks.ibm.com/redbooks/pdfs/sg248282.pdf
As a Business Process Management (BPM) architect, you always seek to design business process management solutions that deliver on the promise of business agility and business-centric visibility, among other things, while also ensuring that your design meets the preferred practices for IT architecture, security, and solution design. Often, this task can be daunting because your BPM solution must bridge the gap between business teams and information technology teams. In this post, we introduce five basic concepts for designing an effective business process management solution using IBM Business Process Manager (IBM BPM).
1. Architecture considerations
It is important to understand the practical considerations involved in high-level solution analysis and architecture of an IBM BPM solution; the Redbook linked above walks through these in detail.
2. Security considerations
BPM processes contain the very essence of a business, even including things such as competitive advantages and trade secrets. Despite this importance, and despite the fact that security breaches make embarrassing headlines, IBM repeatedly finds customers who consider their BPM projects secured simply because the BPM servers are hidden behind firewalls. There are a number of specific items an architect must consider when planning how BPM will be integrated into an enterprise's IT environment, and the Redbook enumerates them.
3. Solution design considerations
Implementing business processes with business process management software can be challenging and, at times, overwhelming. As a BPM architect, you should build an understanding of the solution-design items covered in the Redbook before implementing your business processes in IBM BPM.
4. Business-centric visibility
Business-centric visibility is one of the most important considerations for organizations looking to adopt business process management, and in the context of a business process it spans several factors.
In order to achieve any meaningful business-centric visibility, it is important to have access to both the business data and the process data. IBM BPM provides specific capabilities for achieving this, which the Redbook describes.
5. Performance tuning and IT centric visibility
Performance tuning and IT-centric visibility are also key aspects of an IBM BPM design; the Redbook collects a number of important tips on both.
Ali_Arsanjani | Tags: bpm rest soa analytics microservices mobile cloud big_data api patterns | 11,153 views
Microservices reminds us of very fine grained, tiny, micro-scopic services. Perhaps services that grow on a Petri dish.
But actually, they refer to a deployment pattern for services developed in a service-oriented architecture along the constraints of a functional area that owns its own data and does not grant access to its information willy-nilly. You start with a domain, break it down into functional areas, and get down to the subsystems and components living there, deep in the bowels of the legacy systems, the packages inhabiting the digital ooze in the sewers of legacy code, interspersed with fresh sprinklings of new components implementing the newer services the business requires.
Stratified along a functional area, immune to the curious and invasive glances of strange components and outside systems, the service rests peacefully in isolated, independent deployment. Choosing to brew this SOA flavor with a twist of independent deployment, along the lines of a functional area that owns its information, is detailed in the implementation of finer-grained services in SOMA (Service-Oriented Modeling and Architecture), IBM's service-oriented method.
Don't mistake this for the only way: choosing to constrain SOA by design along functional-area boundaries and deploy along those lines is only one brew of SOA.
One consideration is the need to communicate between deployed services that are otherwise somewhat isolated, heads-down servicing requests for their functional area of the business. But businesses have cross-cutting concerns and dimensions, often exemplified by BPM, business process management. Cutting across silos or functional areas, the process needs access to various data sources and systems of record, albeit some prehistoric by enterprise standards.
So the cross-cutting flavor of SOA is best served by BPM, and smaller-granularity, focused functional-area services by microservices. What about when mobile needs to access something? For that we have the API brew. RESTful (Representational State Transfer) APIs provide an HTTP/S level of simplicity of access (yes, both reads and writes) that allows access to underlying functionality from mobile devices and, yes, software-as-a-service, or "cloud" as it is known in slang.
So choose your brew of SOA according to your mood, according to what you would, contingent on what you should be doing with your processes, data, or business functions. Choose wisely, for each has pluses and minuses, opportunities and consequences, like any pattern. Or brew.
Ali_Arsanjani | 3,992 views
Why are APIs critical to enterprise business agility?
Today we see a confluence of mobile, cloud-based, service-oriented applications that were hitherto locked within the enterprise break free and start interacting with an ecosystem of partners opportunistically, creating value on emerging opportunities, especially those provided by mobile devices, social business, big data and analytics, and cloud-based services. It is important to understand the evolution of APIs in order to better capitalize on the opportunities presented by the economic dynamics of an ecosystem connecting and communicating through APIs.
The evolution of APIs
APIs, or Application Programming Interfaces, were initially thought of as libraries of reusable code. They focused very specifically on providing functionality written once, often to support a complicated set of granular operations: a graphics library, a telecommunications library, a security library, and so on.
In the first decade of the 21st century, application programming interfaces morphed because of service-oriented architecture. The protocols used to implement Web services in a service-oriented architecture were numerous. The Simple Object Access Protocol, or SOAP, was one means of access; it more or less replaced RMI over IIOP in the distributed computing world.
Service-oriented architecture taught us not only that interface has to be separated from implementation, but also that implementation can be separated into several layers. One layer is a more abstract specification of where the endpoint for the service implementation may reside. Often an enterprise service bus was used to virtualize the endpoint so that the optimal endpoint could be selected based on configurations, input parameters, or more pragmatic considerations pertaining to security, scalability, or performance.
Implementation was separated into a realization decision and a set of deployment options. The realization decision was primarily governed by the questions: how am I going to implement the service, and which component is going to implement the service functionality? The component could be anything from a .NET component or an Enterprise JavaBean to a legacy application interface or even a packaged application. The deployment options included not only the protocol by which the implementation would be exposed but also the configuration options for the infrastructure or platform used to ultimately operationalize the solution.
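Endpoint virtualization, as an ESB might perform it, can be sketched as a lookup that selects a concrete endpoint for a logical service name. The service name, URLs, and selection criteria below are made up for illustration.

```python
# Logical service -> candidate concrete endpoints, as an ESB registry might
# hold them. All names, URLs, and attributes here are invented.
ENDPOINTS = {
    "QuoteService": [
        {"url": "https://internal/quotes", "secure": True, "latency_ms": 40},
        {"url": "http://partner/quotes", "secure": False, "latency_ms": 15},
    ]
}

def resolve(service, require_secure=False):
    """Pick the 'optimal' endpoint for a logical service: filter on the
    security requirement, then choose the lowest-latency candidate."""
    candidates = ENDPOINTS[service]
    if require_secure:
        candidates = [e for e in candidates if e["secure"]]
    return min(candidates, key=lambda e: e["latency_ms"])["url"]

print(resolve("QuoteService", require_secure=True))  # https://internal/quotes
```

The consumer only ever names `QuoteService`; which concrete endpoint answers is a configuration decision the bus can change without touching any caller.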
Representational State Transfer, or REST, is an architectural style created to support a very lightweight mechanism that could replace the more complicated SOAP protocol by using plain HTTP or HTTPS. The verbs are therefore the familiar GET and PUT actions known to web programmers.
RESTful APIs extended enterprise capabilities, hitherto reserved for webpages, into the ecosystem. This ecosystem of partners, now able to interact using RESTful APIs, created the opportunity for an API economy.
The characteristics of API-based services
At the core of the enterprise, the concept of service-oriented architecture suggested the creation of a service model. Within it is a service portfolio that is created, governed, and managed as an enterprise asset. This asset, the service portfolio, represents the implementation of business capabilities that the enterprise would like to expose, with appropriate partnership and security arrangements, within the enterprise, extending toward the edge of the enterprise, and, more selectively, beyond it to business partners.
In summary, API-based services give the enterprise the ability to:
· Externalize business capabilities for ecosystem consumption
· Capitalize on and monetize enterprise assets outside the organization
· Extend the value of enterprise capabilities beyond the enterprise
· Secure access to internal enterprise resources and capabilities
· Enable channel-agnostic connectivity with business partners and consumers
Internally, APIs become part of the enterprise portfolio, eliminating redundancy through reuse of capabilities and providing governance through consistency of created assets.
Externally, exposure of internal capabilities to the partner ecosystem will allow value-added services to enhance existing services provided, bringing in a new set of clients.
Pitfalls and Best-practices
Lack of alignment of an API to a clearly defined set of business goals can be detrimental to the adoption of the technology and capabilities behind the API.
APIs should not focus on time-sensitive technologies or merely on fulfilling user-interface capabilities that access backend systems. An example is limiting an API to CRUD-based operations: create, read, update, and delete. This severely limits the API's room to grow into higher levels of maturity across the application life-cycle.
Another pitfall is exposure of architectural decisions that stem from intra-organizational considerations that have accreted over time in the attempt to maintain the big ball of mud that constitutes the legacy systems developed over generations within the organization. One best practice is to mask the convoluted interdependencies on technologies, architectural decisions made for expediency, legacy platforms, and the scars of partially successful mergers. The API should not expose such backend architectural decisions and implementation considerations, just as any good service in a service-oriented architecture would mask backend assumptions that have no rationale beyond the walls of the enterprise.
Secure a path of growth so your applications can evolve from merely accessing backend legacy systems to performing transactions, initiating business processes, and engaging in decision management.
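That growth path, from raw record access to transaction-level capabilities, can be illustrated by contrasting a CRUD-style update with a business-capability operation. All names and data shapes are invented.

```python
def crud_update_account(accounts, account_id, fields):
    """CRUD-level call: the caller must know and manipulate the raw data
    model directly, with no business semantics enforced."""
    accounts[account_id].update(fields)

def transfer_funds(accounts, src, dst, amount):
    """Business-capability call: encapsulates validation and the whole
    transaction behind one meaningful operation."""
    if accounts[src]["balance"] < amount:
        raise ValueError("insufficient funds")
    accounts[src]["balance"] -= amount
    accounts[dst]["balance"] += amount

accounts = {"a": {"balance": 100}, "b": {"balance": 20}}
transfer_funds(accounts, "a", "b", 30)
print(accounts["a"]["balance"], accounts["b"]["balance"])  # 70 50
```

An API that exposes `transfer_funds`-style capabilities can later absorb process initiation and decision management behind the same interface, which a CRUD surface cannot.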
Designing for the glass. Designing mobile applications is a current fad driven by the strong adoption of mobile devices. This can be a severely limiting factor if the application is designed from the glass inwards; in other words, the backend business processes must be taken into consideration along with usability at the screen level for the mobile device of choice.
The ability to orchestrate and compose APIs in order to provide a mediated or orchestrated experience of the business process is a critical success factor with the maturation and increased adoption of APIs within your ecosystem.
Ali_Arsanjani | 3,841 views
Context: Industry adoption of development, life-cycle, and architectural styles such as SOA tends to be imported into organizations and then move from an evangelist, grass-roots kind of effort into a next phase of organizational maturity that tends to impose rigid constraints on the life-cycle, standards, governance, and development of the architectural style. SOA is an example.
Problem(s): High-level, holistic design and architectural work is needed to provide contractual visibility to clients. Architectural styles (OO, CBD, SOA, etc.) are either co-mingled with the rigor of the top-down governance that is seen to be required, or sacrificed for a rapid, quick-and-dirty, "let's not look around the bend until we get to it" approach.
Forces: Developers tend to push back against the imposition of increasing accountability and governance in general, including standardized development practices. The organization, in an effort to be accountable to the business, requires increased tracking, metrics, and accountability. Developers will continue to resist, perceiving the imposition on their time as less valuable than actual development of the product. Set against this are accountability and project visibility, which allow projections in order to fulfill "promises" (e.g., contracts, SLAs) with clients, whether internal (the business as a client) or external (other organizations to which services or products are provided).
(Re)solution: Combine a light top-down governance and a standardized software development process/method that looks at things holistically but does not require detail to the nth degree before engaging in a project or program, coupled with agile iterative sprints.
Consequences: High-level visibility and accountability are attained with some compromise toward the "let's just do it" approach. Development is not impeded by heavyweight constraints. Promises to the business or external clients carry a reduced risk of non-completion.
Ali_Arsanjani | 4,124 views
SOA, as an architectural style and a discipline of software engineering, brings business and IT to the same table by providing a portfolio of business-intended services/capabilities as the common terminology that the business requires and IT will design, implement, and maintain. SOA centers design on a portfolio of services, equating to a set of business capabilities, that the business looks to deliver. Integration has become a strong focus for SOA as well; there are various types of services, integration and business services to name just two categories.
Traditionally, Application Programming Interfaces, or APIs, were merely libraries of interfaces through which components could access and communicate with each other. In an evolved definition that is becoming more and more prevalent, a RESTful API exposed outside an organization may front a set of backend services; this notion is being promoted more widely, especially with the proliferation of interest in simple HTTP-based interfaces that can mobile-enable the subset of backend functionality an enterprise chooses to expose.
Cloud services are services exposed externally that can be consumed by all, often implicitly assumed to live outside an organization's boundaries, just like an API or, more precisely, a RESTful API. Cloud services span the gamut from Infrastructure as a Service (IaaS) to Software as a Service (SaaS), which may expose very large-grained applications or smaller granularities akin to the RESTful APIs discussed above.
More detail on this later.