Below the buzz and underlying the hype will be some concrete, real-world efforts to actually use #RealAI to solve real-world problems, versus the #ToyAI phenomenon.
1. #AugmentedIntelligence will appear more prominent than "artificial intelligence". The emphasis will primarily be on how to augment human-in-the-loop activities rather than replacing humans. Augmentation will come in the form of automation of rote and repetitive activities, such as those prescribed in intelligent automation.
2. #AIEthics of training AI systems will come to the forefront of controversies and begin to resolve themselves. Training on bogus data will produce a bogus artificial neural network. Therefore the veracity, traceability and governance around where we obtain the training sets, validation sets and testing sets for machine learning and deep learning will come to the forefront as key issues for artificial intelligence governance.
3. #AIStandards for interoperability between AI vendors, intermediate models of knowledge and training representation, and output will dominate the issues clients face in using AI systems vs AI in the abstract. It's the wild West right now, with most companies having unique formats and representations for inputs, outputs and models. Just as they did in the era of middleware, databases and application programming interfaces, clients will need to remain vendor neutral. Most AI systems or application programming interfaces are somewhat vendor dependent even if they are open sourced. Transitional stages in the machine learning lifecycle, as well as subsets of knowledge sets used as handoff points between stages in the machine learning and artificial intelligence lifecycle, will be increasingly in demand. Vendors will need to provide persistent representation of knowledge set outputs among their own tools and APIs as well as between vendors.
4. #InfuseIntelligence. Augmented intelligent systems will not appear as an 'n'-th state of maturity, but will be infused into the spectrum from legacy to more modern (read "immediate") AI and ML applications. Many representations show artificial intelligence as an end state or a more mature state. In fact artificial intelligence needs to be infused at every stage of the software engineering lifecycle. Machine learning and augmented intelligence manifest as the ability to infuse intelligence into existing phases, states and processes.
5. #Robotics in its virtual (read RPA, #IntelligentAutomation) and physical forms (spectrum from individual robot helpers to #EmbodiedCognition in a room or building) will become mainstream after undergoing the above four churns.
6. Data and Content #AutomatedCuration will become more practical: tools will be developed, and data sets will become cleaner, better curated and more readily available.
7. #AITrainedHealthcare will be challenged by the Arsanjani Laws for Cognitive Systems and will require cross-checking between AIs trained on different IntelligenceSets and KnowledgeSets before recommendations start to take firmer hold.
8. Research and funding for #BiasIndicatingAlgorithms will expand as pressures on #AIEthics mount, and more published content will include a #confidenceFactorForNonValidatedClaims to decrease #fakenews #alternativefacts #rumors. This will be a new #AIBattleground.
9. #AI will increasingly be used for political clout, in both covert and overt fashion, in traditional fields of engagement. #AIEthics
10. #AIEducation and #DataScienceEducation will continue their exponential expansion in popularity; they will be infused into traditional software engineering curricula as academic institutions see the value of integrating BigData, Data Science, ML and AI with traditional Software Engineering curricula.
#AI #DeepLearning #ML #MachineLearning #ArtificialIntelligence #RPA #IntelligentAutomation #Chatbots #VirtualAssistants #Robotics #EmbodiedCognition #DataScience #BigData #Cognitive #CognitiveComputing
|
Chatbots or Virtual Agents are rapidly ramping up to augment human-computer interaction, starting from self-help and gradually moving up the knowledge management chain to how an agent (such as a technical support call center agent) uses a specialized in-house chatbot for Agent Assist.
Some of these technologies are nascent, others more mature, such as Watson Virtual Agent.
The first thing a virtual assistant does is detect your intent: this is accomplished using Natural Language Understanding and Natural Language Classification. The process starts with recognizing the goals of the personas interacting, as well as training on the most commonly recurring calls, requests, chat transcripts, etc.
What we are talking about here is more than Search or Text Analytics: it is about intent-based understanding of content.
Search is more about federation of search indices, content-level security, content display facets, filtering, community & social extensions, and connectors to ERP systems, databases, etc.
Content Analytics, on the other hand, is about finding relevant words in text and counting occurrences, analytics for entity & association extraction, and integration with analytics & prediction systems.
Intent-based architectures allow you to understand a query down to the user's intent, execute interactive query refinement so it becomes actionable, generate a recommendation, interactively access data with implied meaning & relationships, and establish word/phrase proximity and document relationships.
All these capabilities are predicated upon the discovery of intent: intents that can be reused across personas in the interaction dialog.
An intent represents the purpose of a user's input. Each capability uses a natural language classifier that can evaluate a user utterance and find a predefined intent if it is present.
For example, a recommended practice is to review the intents that are identified most often just before users request to speak to a human agent. Investigating the causes of escalations can help you prioritize where to focus future training efforts. You can determine whether user inquiries are being misinterpreted, whether your service is missing a common intent altogether, whether the responses that are associated with an intent need improving, and so on.
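To make the intent-detection step concrete, here is a minimal sketch that stands in for a hosted natural language classifier, using scikit-learn (TF-IDF features plus logistic regression). The utterances, intent names and confidence threshold are illustrative assumptions, not part of any specific product.

```python
# Minimal intent-classification sketch: TF-IDF + logistic regression as a
# stand-in for a Natural Language Classifier. Training utterances and
# intent labels are illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

training_utterances = [
    "I forgot my password",            # intent: reset_password
    "how do I reset my login",         # intent: reset_password
    "where is my order",               # intent: order_status
    "has my package shipped yet",      # intent: order_status
    "I want to talk to a person",      # intent: escalate_to_agent
    "connect me with a human agent",   # intent: escalate_to_agent
]
intent_labels = [
    "reset_password", "reset_password",
    "order_status", "order_status",
    "escalate_to_agent", "escalate_to_agent",
]

# The classifier maps a user utterance to the most likely predefined intent.
classifier = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
classifier.fit(training_utterances, intent_labels)

utterance = "my login stopped working, reset it please"
print(classifier.predict([utterance])[0])           # most likely intent
print(classifier.predict_proba([utterance]).max())  # confidence; useful for escalation thresholds
```

A low top confidence is exactly the signal the recommended practice above looks for: it marks utterances worth reviewing for missing intents or weak training data.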
|
In the age of voluminous unstructured data that can be curated manually, semi-automatically or, eventually, with a fuller degree of automation and added approval processes and governance in place, it is more necessary than ever to secure not only the data, but also the process around which data is curated and prepared, ML algorithm(s) selected, and the resulting data trained and ingested into those ML algorithms. The curation of the data covers data collection and aggregation sources (lineage) and data preparation, including redundancy removal, null value replacement, range consolidation, homogeneity of content, etc.
Thus the age of Traceable Machine Learning Governance is born. Standards bodies should embark on this consolidation of views, across vendors, consumers, services organizations, data providers, curators and other stakeholders in the ML Training Life-cycle.
The problem of governing ML is not only one of data governance as data passes through traceable steps until it is ingested into an ML model, but also of the process around collection, curation, selection of the ML algorithm, and training set selection and partitioning.
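To make the preparation steps named above concrete, here is a minimal sketch in pandas that performs redundancy removal and null value replacement and records simple lineage facts alongside the prepared set. The file name, column handling and lineage fields are hypothetical assumptions for illustration.

```python
# Illustrative data-preparation pass over a raw training extract;
# the file name and lineage fields are hypothetical.
import pandas as pd

raw = pd.read_csv("raw_training_extract.csv")

# Redundancy removal: drop exact duplicate records.
prepared = raw.drop_duplicates()

# Null value replacement: fill missing numeric values with the column median,
# and remaining missing values with an explicit "unknown" marker.
numeric_cols = prepared.select_dtypes(include="number").columns
prepared[numeric_cols] = prepared[numeric_cols].fillna(prepared[numeric_cols].median())
prepared = prepared.fillna("unknown")

# Record simple lineage facts so the curation step stays traceable.
lineage = {
    "source_file": "raw_training_extract.csv",
    "rows_in": len(raw),
    "rows_out": len(prepared),
    "steps": ["drop_duplicates", "median_fill_numeric", "unknown_fill_remaining"],
}
prepared.to_csv("prepared_training_set.csv", index=False)
print(lineage)
```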
So, here is our call to action:
1. Standards bodies need to consolidate a standard around Traceable ML Governance to reduce the risk of fake training: bogus data used to train a "paper tiger ML" that has no substance.
2. Corporations should give serious consideration, during the ML and Cognitive Computing training life-cycle, to securing a Traceable ML and Cognitive Governance process. This process can begin in a lightweight fashion, but it should address the legal implications arising from the use and administration of Cognitive Systems that use Machine Learning (ML) to make recommendations, provide insights, and generate summaries, reports, news, reviews, etc.
3. Furthermore, as global impact of well-trained Cognitive Systems (ML systems, AIs) becomes more and more tangible, we will want to have demonstrable traceability on how these systems were trained, retrained and where the human in the loop influenced the AI and how it was brought to bear (SME, curation, etc) on the source data, where data was sourced, how it was curated (whether human, or other ML or Cognitive System). To do so, traceability in the Cognitive System life-cycle or the ML-training life-cycle will play a cardinal role in its adoption, trust and veracity of recommendations.
Many claims that Deep Learning is beyond governance or governability are bogus, founded only in the illusion that 'since the hidden layers of a Neural Network and their interactions are "dark matter", we cannot govern their inputs and outputs.' In fact we can and should secure the training process, as well as the input and output configurations of ML systems, in logs that are themselves inspected by ML systems with a human-in-the-loop approval process in place.
4. Organizations and standards bodies should consider the use of Blockchain technologies to secure and govern the process chain within the ML-training or Cognitive system life-cycle in a demonstrably traceable manner, which with little doubt will gradually find its way into local and global legislation.
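As one way to picture what such a traceable process chain could look like, here is a small sketch of a hash-chained ledger of training life-cycle events; a lightweight, blockchain-like structure rather than any particular product. The event fields, dataset names and roles are illustrative assumptions.

```python
# Sketch of a hash-chained ledger for ML training life-cycle events.
# Each entry commits to the previous one, so tampering with any recorded
# step (data sourcing, curation, training run) breaks the chain.
import hashlib
import json
import time

def append_event(chain, event):
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"timestamp": time.time(), "event": event, "prev_hash": prev_hash}
    body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append(body)
    return chain

ledger = []
append_event(ledger, {"step": "data_sourced", "dataset": "claims_2016.csv",
                      "sha256": "<hash-of-file>", "provided_by": "data-provider-x"})
append_event(ledger, {"step": "curation", "curator": "sme-team-a",
                      "actions": ["dedup", "null_fill"]})
append_event(ledger, {"step": "training_run", "algorithm": "gradient_boosting",
                      "train_split": 0.8, "approved_by": "governance-board"})

def verify(chain):
    # Recompute each hash and check the back-links; any edit is detected.
    for i, entry in enumerate(chain):
        body = {k: v for k, v in entry.items() if k != "hash"}
        if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != entry["hash"]:
            return False
        if i > 0 and entry["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

print(verify(ledger))  # True while the ledger is untouched
```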
|
Traditional artificial neural networks (ANNs) have a memory problem: they cannot recall their previous reasoning about events to inform new ones.
If you have time-sequenced data, i.e., longitudinal data, you might want to consider using a Recurrent Neural Network (RNN), which allows information to flow from the past to the present using a feedback loop, just as our senses constantly provide feedback as we experience the world, letting us reevaluate an unfolding situation.
Events are a connected series of vectors (or tensors) that have a way of bringing past experience to bear on the next step of the deliberation.
In recent years, RNNs have enjoyed an "unreasonable effectiveness", to quote Andrej Karpathy of Stanford University. Applications include speech recognition and categorization, language modeling and translation, image captioning, and recognition at multiple points in a sequence of images, etc.
So RNNs add a feedback loop to the neurons and allow information to flow without resetting or restarting everything each time they are executed.
Long Short-Term Memory (LSTM) is a variation of the RNN that has been most effective in processing these kinds of longitudinal data. We will discuss LSTMs further in a future entry.
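As a small taste before that entry, here is a minimal sketch of an LSTM over a time-sequenced signal, written with PyTorch. The shapes, sizes and the random data are illustrative assumptions, not a tuned model.

```python
# Minimal sketch: an LSTM over a time-sequenced (longitudinal) signal,
# predicting the next value from the previous steps.
import torch
import torch.nn as nn

class SequenceModel(nn.Module):
    def __init__(self, n_features=1, hidden=32):
        super().__init__()
        # The LSTM carries state forward across time steps: the feedback
        # loop that lets past observations inform the current prediction.
        self.lstm = nn.LSTM(input_size=n_features, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                 # x: (batch, time, features)
        out, _ = self.lstm(x)             # out: (batch, time, hidden)
        return self.head(out[:, -1, :])   # predict from the last time step

model = SequenceModel()
batch = torch.randn(8, 20, 1)             # 8 sequences, 20 time steps, 1 feature
target = torch.randn(8, 1)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
for _ in range(5):                         # a few illustrative training steps
    optimizer.zero_grad()
    loss = loss_fn(model(batch), target)
    loss.backward()
    optimizer.step()
print(loss.item())
```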
|
Context & Solution. In case you were wondering what ML algorithms should be used when you have particularly sparse structured data, that is, longitudinal transactional data, you might want to consider Gradient Boosting.
Tradeoffs. You may need to go beyond traditional Gradient Boosting and venture into ways of managing and reducing overfitting, such as ensembling. To do this we can use parameter tuning techniques. Some vendors have open-sourced their algorithmic content; others provide proprietary algorithms to do so.
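A minimal sketch of that tuning idea, using scikit-learn's gradient boosting with a small grid search over the parameters that usually control overfitting; the synthetic data stands in for real longitudinal transactional features, and the grid values are illustrative assumptions.

```python
# Sketch: gradient boosting with a small parameter search aimed at
# controlling overfitting; synthetic data stands in for real features.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = make_classification(n_samples=2000, n_features=40, n_informative=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# Shrinkage (learning_rate), tree depth and row subsampling are the usual
# levers for trading bias against variance in boosted ensembles.
param_grid = {
    "learning_rate": [0.05, 0.1],
    "max_depth": [2, 3],
    "subsample": [0.7, 1.0],
    "n_estimators": [200],
}
search = GridSearchCV(GradientBoostingClassifier(random_state=0), param_grid, cv=3)
search.fit(X_train, y_train)

print(search.best_params_)
print(search.score(X_test, y_test))  # held-out accuracy as an overfitting check
```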
<these series are designed to be concise snippets>
|
Context and problem. You have terabytes of structured data and petabytes of unstructured data that are not quite visible; many areas are dark and provide less valuable information. You have lots of dark data, but not a whole lot of insight. How can you go beyond summaries, means and trend graphs to gain insight you can count on, associate it with specific value-adding business activities, and make strategic business decisions based on it? How can you shine light on all the dark data that is sitting in those ObjectStores, relational databases, data warehouses, content stores, voice, images, text, ...?
Considerations. Leveraging data through analytical processing is not just about processing structured data that often resides in the organization's many systems of record, transaction-processing databases, or even online analytical processing databases. It is rather about the ability to combine unstructured data coming either from IoT devices, which produce semi-structured data, or from content-based systems (think Enterprise Content Management (ECM)) that contain images, attachments, documents and free-format text.
Solution Path. Extracting data from unstructured content or text from images, transforming semi- and unstructured content into structured data, and storing it in a DataLake (and possibly in other Case Management systems) will allow you to start gathering the raw data you need to begin your curation and data wrangling.
Then you can apply Data Science and Machine Learning to the curated Datalake to gain insights, and incorporate the conditional actions you wish to take as part of a BPM or Case Management solution. Alternatively you can wire a set of micro-services via APIs exposed from Software as a service vendors, to evaluate the insights and based on certain thresholds, invoke a workflow or display a result for human knowledge workers to take action.
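An end-to-end toy sketch of that solution path follows: free-text records are turned into structured rows, landed in a data-lake folder, scored, and flagged for a human knowledge worker when a threshold is crossed. The paths, field names, extraction rules and the rule-based "model" are all hypothetical stand-ins for a real pipeline.

```python
# Sketch of the solution path: unstructured text -> structured rows ->
# data-lake folder -> score -> escalate above a threshold.
import json
import pathlib
import re

DATA_LAKE = pathlib.Path("datalake/curated")
DATA_LAKE.mkdir(parents=True, exist_ok=True)

raw_documents = [
    {"id": "doc-1", "text": "Customer reports water damage, urgent claim, amount 12000 USD"},
    {"id": "doc-2", "text": "Routine policy renewal inquiry, no claim"},
]

def extract_features(doc):
    """Very small 'unstructured to structured' step: pull an amount and flags."""
    amount = re.search(r"(\d[\d,]*)\s*USD", doc["text"])
    return {
        "id": doc["id"],
        "amount_usd": int(amount.group(1).replace(",", "")) if amount else 0,
        "mentions_urgent": "urgent" in doc["text"].lower(),
        "mentions_claim": "claim" in doc["text"].lower(),
    }

def score(row):
    """Stand-in for an ML model: a simple rule producing a priority score."""
    return 0.9 if (row["mentions_urgent"] and row["amount_usd"] > 10000) else 0.2

for doc in raw_documents:
    row = extract_features(doc)
    row["priority"] = score(row)
    (DATA_LAKE / f"{row['id']}.json").write_text(json.dumps(row))
    if row["priority"] > 0.8:
        # In a real solution this would start a BPM / case-management workflow.
        print(f"Escalating {row['id']} for human review")
```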
|
Content, or unstructured Data, has a lifecycle. These are the typical states that content goes through. I will elaborate each one in another entry.

|
Most major businesses are embarking on augmenting their analytics capabilities with Machine Learning. They are either demonstrating or planning projects that showcase how machine learning can be applied to their applications. (This is called #MLInfused.) Enterprise software development firms are attempting to bring impactful predictive capabilities into their suites of products. IBM Watson, through either Bluemix APIs or IBM ML on-prem, offers such capabilities.
When a mortgage application is submitted, ultimately human underwriters make the decision based on a set of rules. Although we cannot claim that the process is completely understandable, we can probably hold a human plus the regulations accountable in an audit, court case or compliance situation to demonstrate lack of unjust discrimination or bias. Enter Machine Learning. The data we use to train such a system, the humans who curate and annotate the data, and the process they went through (biased or unbiased) will have a significant if not cardinal impact on the trained ML algorithm and thus its recommendations and output.
To prepare ourselves as a society of scientists, engineers, businesspersons, regulators, etc., for a world where such processes and data will have major impact on individuals and society, we need a set of rules and regulations. They will come in due time, forced by errors, omissions and uproar. But to preempt that, we suggest for consideration some "laws" or best practices around machine learning for cognitive systems.
1. A cognitive system will not be trained on darkly curated data. Yes, maybe the hidden layers of neural networks are a black box, but the data itself should not be "dark": its sourcing, inputs, annotations, training workflow and outcomes should be transparent. Training, testing and validation data sets should be traceable, white-boxed and accessible for enterprise governance and compliance.
2. The data curation process will be transparent. Ends do not justify means: transparency of the data curation process for machine learning is paramount. This means traceability and governance around where data is sourced, who curated and annotated it, and who verified the annotations.
3. Cognitive system recommendations must provide traceable justification. Outcomes must be coupled with references explaining why the system made the decision or recommendation.
4. Where human health is at stake, cognitive systems with relevant but different training backgrounds will cross-check each other before making a recommendation to a human expert.
These practices, or initial variations of them, should be considered as part of an overall governance process for the training and curation of datasets for machine learning in cognitive systems.
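To illustrate the cross-checking practice in law 4, here is a minimal sketch in which two models trained on different slices of synthetic data must agree before a recommendation is surfaced, and disagreement is routed to a human expert. The data, models and routing policy are illustrative assumptions.

```python
# Sketch of cross-checking: two models trained on different data slices
# must agree before a recommendation is surfaced; otherwise escalate.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, n_features=20, random_state=7)

# Deliberately disjoint training sets stand in for different
# IntelligenceSets / KnowledgeSets.
model_a = RandomForestClassifier(random_state=0).fit(X[:500], y[:500])
model_b = LogisticRegression(max_iter=1000).fit(X[500:], y[500:])

def cross_checked_recommendation(sample):
    a = model_a.predict([sample])[0]
    b = model_b.predict([sample])[0]
    if a == b:
        return {"recommendation": int(a), "status": "agreed"}
    return {"recommendation": None, "status": "escalate_to_human_expert"}

print(cross_checked_recommendation(X[0]))
```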
|
Fred Brooks taught us in his Mythical Man-Month that if a project is going off track and running behind schedule, adding people to the project will only make things worse: the ramp-up time to orient and onboard developers reaches diminishing returns. We have also seen this countless times on projects we have been involved in.
But where is the root cause? Not what, but where. Say you have a multi-tier architecture, with a business logic tier or service component layer: the layer or tier that implements the server-side functionality. The backend logic is, or can often be, represented as some form of object model or class diagram. This object model is where the problem is. Class diagrams or object models represent the design that implements services in the services layer, which indeed does decrease complexity and risk, decouple systems, and make life easier for everyone.
When developers are onboarded they have to learn the complex, convoluted backend dependencies of the business logic tier; essentially they need to digest the object model, the domain object model, the class diagram (or whichever other representation or name you choose to give it). Most of the time these dependencies are unnecessary, and the ramp-up time increases because they feel they have to digest so much of what everyone else is doing.
So, partition the object model, the domain model, into a set of smaller models (perhaps as close as you can get to 5-9 main classes each); see the sketch at the end of this entry. These partitions of the domain model allow a natural separation of concerns and allow developers to ramp up much faster.
- Break the monolithic domain model into tiny domain models -- manage the complexity of the underlying design model.
- Partition your domain model by functional area; each functional area should have its own small, manageable, agile domain model.
- Tiny domain models result in small code bases
- Create loose coupling between the sub-domains (functional areas).
- Assign teams to these functional areas.
- Compile a list of Service APIs they will offer.
- Make them as stateless as possible but no statelesser.
- Focus one team on integration
This has nothing to do with building microservices, or small-grained services. In fact, if you focus only on smaller-grained services you will fall into the Service Proliferation trap.
It's about building tiny applications, not tiny services. Tiny applications can have medium-grained or even one or more larger-grained services.
The success of microapplications lies not in the granularity of the services they are composed of but in the size & complexity of their underlying domain model, which focuses on only one functional area.
Remember, tiny domain models result in small code bases. Large code bases overwhelm developers, slow their development environment and increase dependencies and complexity, and make integration even harder.
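Here is a small sketch of what such a partition can look like: two functional areas, each with its own tiny domain model and a small service API, coupled only through that API. The class and method names are illustrative, not a prescription.

```python
# Illustrative partitioning: each functional area owns a tiny domain model
# (a handful of classes) and exposes a small service API; the catalog team
# never reaches into the cart team's model, and vice versa.
from dataclasses import dataclass, field
from typing import Dict, List

# --- Functional area: product catalog (one tiny domain model) ---
@dataclass
class Product:
    sku: str
    name: str
    price: float

class CatalogService:
    def __init__(self):
        self._products: Dict[str, Product] = {}

    def add(self, product: Product) -> None:
        self._products[product.sku] = product

    def price_of(self, sku: str) -> float:  # the only thing other areas may ask
        return self._products[sku].price

# --- Functional area: shopping cart (a separate tiny domain model) ---
@dataclass
class CartLine:
    sku: str
    quantity: int

@dataclass
class Cart:
    lines: List[CartLine] = field(default_factory=list)

class CartService:
    def __init__(self, catalog: CatalogService):
        self._catalog = catalog  # coupling only through the service API

    def total(self, cart: Cart) -> float:
        return sum(self._catalog.price_of(l.sku) * l.quantity for l in cart.lines)

catalog = CatalogService()
catalog.add(Product("A1", "Widget", 9.5))
print(CartService(catalog).total(Cart([CartLine("A1", 3)])))  # 28.5
```

Each of these tiny models is small enough for a new developer to digest in an afternoon, which is the whole point.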
|
SOA (service-oriented architecture) evolved to reflect our deepening understanding of separating interface from implementation from deployment, not just programmatically, but architecturally and in a business-impactful fashion. It introduced a new layer in the software application architecture focused on, well, just services, or really, service descriptions or service contracts. Interfaces were merely separating interface from implementation in the object-oriented world, which we take for granted today. But this was limited mostly to a programming paradigm.
SOA elevated this separation of concerns between interfaces/contracts, implementations and deployments higher in the foodchain: to the level of an architectural construct.
See the SOA Reference Architecture from The Open Group as an example.
As we were compelled to move implementations and deployment locations off premise, in view of the savings, flexibility (elasticity) and consolidating power of the cloud, we moved to creating or using software as a service, or to consolidating deployments in a private cloud. Software as a service, or SaaS, is primarily a software licensing and delivery model that has grown out of the SOA world. Software is licensed on a subscription or "pay as you go" basis and is often run as a managed service, in a public, private or, most often, hosted scenario. Platform, Infrastructure and other XaaS kinds of software creatures started evolving out of the cosmic goo of the ubiquitous and elastic cloud model.
SOA has morphed into RESTful APIs as the implementation or realization mechanism of choice. When people decide to use whatever underlying technology they wish, choose whatever programming language or framework they want, and deploy little applications that are pretty much standalone, they tend to use the term microservices. This is often a misnomer, since the granularity may have historically started with fine-grained services, but has gained stability and a more balanced, medium-grained service stature.
So how do we apply SOA principles to build cloud-ready applications? We build APIs. Here are some tried and tested recommended practices for successful API development.
API Best Practice 1: Use tiny object models for design. SOA has been particularly useful, and is an adopted best practice, when implemented not with one huge underlying object model, but rather with a partitioned set of smaller object models in its design phases: smaller models that you can divvy up and feed to smaller, independently functioning teams. What is tiny? 7 +/- 2.
API Best Practice 2: Each team manages its own Service Portfolio. Plan the services that you need in your functional area. Maintain a categorized list of services that may break down into smaller-grained services. Try to keep them as stateless as possible.
API Best Practice 3: Each team owns a functional area. When you start breaking up the design object model into tiny object models with minimal dependencies, each of those parts should align to an area of the business: a functional area. These can be different departments or finer-grained divisions such as a shopping cart, a product catalog, an order, etc.
API Best Practice 4: Each team uses its programming language and development framework of choice. Skills should be concentrated in teams that work well together; developers of like feather often flock together: Python, Java, JavaScript, Ruby on Rails, etc. They each come with their own frameworks: Django, Node.js, and so on.
API Best Practice 5: Each team deploys independently. This is so they are not on the critical path of another team and can test and run relatively independently. There will be semantic integration, dependencies and connections that are necessary, for example a single sign-on security or session token that may need to be passed.
API Best Practice 6: Each team builds the finest-grained APIs possible. Build RESTful APIs for the leaf nodes and medium-grained services in your Service Portfolio.
API Best Practice 7: Have an integration team that governs the overall Service Portfolio and all things integration. Allow API access to the database that each team requested (fields, tables and all, or go NoSQL if you wish). The integration team monitors and manages the dependencies between the teams and the functional areas.
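To tie the practices together, here is a minimal sketch of one team's RESTful API for its functional area, written with Flask. The routes, payload fields and the in-memory store are illustrative assumptions; a real service would sit on the team's own database.

```python
# Sketch of one team's RESTful API for its functional area (a product
# catalog), kept independent of other teams; routes and payloads are
# illustrative.
from flask import Flask, jsonify, request

app = Flask(__name__)

# The functional area owns its data; other teams go through the API only.
_products = {"A1": {"sku": "A1", "name": "Widget", "price": 9.5}}

@app.route("/catalog/products/<sku>", methods=["GET"])
def get_product(sku):
    product = _products.get(sku)
    return (jsonify(product), 200) if product else (jsonify({"error": "not found"}), 404)

@app.route("/catalog/products", methods=["POST"])
def add_product():
    product = request.get_json()
    _products[product["sku"]] = product
    return jsonify(product), 201

if __name__ == "__main__":
    app.run(port=5000)
```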
|
Hot off the press. Hot from the BPM Oven. http://www.redbooks.ibm.com/redbooks/pdfs/sg248282.pdf
As a Business Process Management (BPM) architect, you always seek to design business process management solutions that deliver on the promise of business agility and business-centric visibility, among other things, while also ensuring that your design meets the preferred practices for IT architecture, security, and solution design. Often, this task can be daunting because your BPM solution must bridge the gap between business teams and information technology teams. In this post, we introduce five basic concepts for designing an effective business process management solution using IBM Business Process Manager (IBM BPM).
1. Architecture considerations
It is important to understand the practical considerations involved in high-level solution analysis and architecture of an IBM BPM solution. Considerations such as these:
- In the context of the application architecture:
  - Choosing between IBM BPM Standard and IBM BPM Advanced
  - Choosing between IBM BPM Advanced (ESB) and IBM Integration Bus
  - Deciding between using IBM BPM Coaches or building custom user interfaces that will be hosted outside of IBM BPM
  - Knowing the difference between Top Down and Bottom Up design approaches when integrating services (using IBM BPM Advanced) into your human centric business processes
  - Deciding between hosting your business process rules in IBM BPM or in IBM Operational Decision Manager
- In the context of an external system of record:
  - Important tips to keep in mind when architecting your process application persistence layer with IBM BPM
- In the context of an integration architecture:
  - The impact of the desired service integration maturity level on your integration design and on the IBM product(s) that you select to meet your integration requirements
- In the context of an infrastructure architecture:
  - Key items that impact your infrastructure architecture
2. Security considerations
BPM processes contain the very essence of a business, even including things such as competitive advantages and trade secrets. Despite this importance, and despite the fact that security breaches make embarrassing headlines, IBM repeatedly finds customers who consider their BPM projects secured simply because the BPM servers are hidden behind firewalls. So there are a number of specific items an architect must consider when planning how BPM will be integrated into an enterprise’s IT environment, including:
- The mechanism by which users will be authenticated to BPM
- The strategy BPM will employ to decide which tasks a given user is authorized to execute
- The specific security hardening tasks that must be undertaken in each environment
3. Solution design considerations
Implementation of business processes using business process management software can be challenging and, at times, overwhelming. As a BPM architect, you should seek an understanding of the following items as you embark on the solution design phase, prior to implementing your business processes in IBM BPM:
- Tips to keep in mind before you undertake the product installation initiative
- Concepts of structured and non-structured processes
- Different process flow patterns and anti-patterns
- Different activity routing patterns
- Different patterns for the management of data that flows through the business process activities
- Different service orchestration and choreography implementation types
- Differences between mediation flow component and short-running Business Process Execution Language
- Use cases for Advanced Integration Service
- Different considerations when building toolkits
- Error handling concepts and strategies
4. Business-centric visibility
Business-centric visibility is one of the most important considerations for organizations looking to adopt business process management. Business-centric visibility, in the context of a business process, includes these factors:
- Ability to view the business data to make informed business decisions.
- Ability to view the process data to get a high-level performance overview of a business process, to make improvements to a business process, and to audit business processes in order to identify any violations to stated business policies.
In order to achieve any meaningful business-centric visibility, it is important to have access to both the business data and the process data. Business-centric visibility is achieved in IBM BPM using these capabilities:
- Business and process data reports
- IBM BPM Process Optimizer
- IBM Business Monitor
5. Performance tuning and IT-centric visibility
Performance tuning and IT-centric visibility are also key aspects of the IBM BPM design. In the context of these aspects, you need to keep in mind the following important tips:
- Performance tuning is an ongoing activity in IBM BPM. Hence, it is important to document the performance tuning process and use it as a reference for ongoing tuning activities.
- Know the different performance diagnostic tools that can be used with IBM BPM and understand how these tools can be used to identify and resolve performance bottlenecks.
- IT-centric visibility is a capability that provides an overall view of what is happening in a runtime solution. It provides the ability to:
  - Audit systems for compliance
  - Ensure that services meet the defined service level agreements
  - Pro-actively monitor the health of systems, subsystems, and applications
  - Diagnose and resolve application issues as soon as possible
|
Microservices remind us of very fine-grained, tiny, microscopic services. Perhaps services that grow on a Petri dish.
But actually, they refer to a deployment pattern for services developed in a Service-oriented Architecture along the constraints of a Functional Area that owns its own data and does not allow access to its information willy-nilly. You start with a domain, break it down into Functional Areas, and get down to the subsystems and components living there, deep in the bowels of the legacy systems: the packages inhabiting the digital ooze in the sewers of legacy code, interspersed with some fresh sprinklings of new components implementing newer services required by the business.
Stratified along a Functional Area, immune to the curious and invasive glances of strange components and outside systems, the service rests peacefully in isolated, independent deployment. Choosing to brew this SOA flavor with a twist of independent deployment, along the lines of a functional area owning its information, is detailed in the implementation of finer-grained services in SOMA (Service-Oriented Modeling and Architecture), many companies' service-oriented method.
Don't mistake this for the only way to constrain SOA by design along functional area boundaries and deploy along those lines. This is only one brew of SOA.
Some considerations are the need to communicate between deployed services that are otherwise feeling a little isolated, engaged heads-down in servicing requests for their functional area of the business. But businesses have cross-cutting concerns and dimensions, often exemplified by BPM, or business process management. Cutting across silos or functional areas, the process needs access to various data sources and systems of record, albeit some are prehistoric gauging by enterprise standards.
So these cross-cuts of the SOA flavor are best served by BPM, and smaller-granularity, focused functional-area services by microservices. What about when mobile needs to access something? Well, for that we have the API brew. RESTful (Representational State Transfer) APIs provide an HTTP/S level of simplicity of access (yes, both get and set) that allows access to underlying functionality from mobile devices and, yes, Software-as-a-Service, or Cloud as it is known in slang.
So choose your brew of SOA according to your mood, according to what you would, contingent on what you should
be doing with your processes, data or business functions. Choose wisely for each have pluses and minuses; opportunities and consequences, like any pattern. Or brew.
|
Why are APIs critical to enterprise business agility?
Today we see the importance of a confluence of mobile, cloud-based, service-oriented applications that were hitherto locked within the enterprise breaking free and starting to interact with an ecosystem of partners opportunistically, in order to create value from emerging opportunities, especially those provided by mobile devices, social business, big data and analytics, and cloud-based services. It is important to understand the evolution of APIs in order to better capitalize on the opportunities presented by the economic dynamics of an ecosystem connecting and communicating through APIs.
The evolution of APIs
APIs, or Application Programming Interfaces, were initially thought of as libraries of reusable code. They were focused very specifically on providing functionality that was written once, often to support a complicated set of granular operations such as a graphics library, a telecommunications library, security, etc.
In the first decade of the 21st century, application programming interfaces morphed because of service-oriented architecture. The protocols used to implement Web services in a service-oriented architecture were numerous. The Simple Object Access Protocol, or SOAP, was one means of access; it more or less replaced RMI over IIOP in the distributed computing world.
Service-oriented architecture taught us not only that interface has to be separated from implementation, but also that implementation can be separated out into several layers. One layer is a more abstract specification of where the endpoint for the service implementation may reside. Often an enterprise service bus was used to virtualize the endpoint so that the optimal endpoint could be selected based upon configurations, input parameters, or more pragmatic considerations pertaining to security, scalability or performance.
Implementation was separated into a realization decision and a set of deployment options. The realization decision was primarily governed by the question: "How am I going to implement the service? Which component is going to be used to implement the service functionality?" The component could be anything from a .NET component to an Enterprise JavaBean, a legacy application interface, or even a packaged application. The deployment options included not only the protocol by which the implementation would be realized but also the configuration options pertaining to the infrastructure or platform used to ultimately operationalize the solution.
Representational State Transfer, or REST, is an architectural style created to support a very lightweight mechanism that would replace the more complicated SOAP protocol and could therefore use HTTP or HTTPS. The verbs that could be used were the GET and PUT actions familiar to web programmers.
RESTful APIs were APIs that extended enterprise capabilities, hitherto reserved for webpages, into the ecosystem. This ecosystem of partners, now able to interact using RESTful APIs, created the opportunity for an API economy.
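As a small illustration of that lightweight style of access, here is a sketch of consuming a RESTful API over HTTPS with plain GET and PUT verbs, using the Python requests library. The URL, resource and fields are hypothetical, not a real service.

```python
# Illustrative REST calls over HTTPS using plain HTTP verbs; the URL and
# fields are hypothetical.
import requests

BASE = "https://api.example.com/v1"

# GET: read a resource representation
order = requests.get(f"{BASE}/orders/1234", timeout=10).json()

# PUT: replace (update) that resource's representation
order["status"] = "shipped"
response = requests.put(f"{BASE}/orders/1234", json=order, timeout=10)
print(response.status_code)
```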
The characteristics of API-based services
At the core of the enterprise, the concept of service-oriented architecture suggested the creation of the service model. Within it is a service portfolio that is created, governed and managed as an enterprise asset. This asset, the Service Portfolio, represents the implementation of business capabilities that the enterprise would like to expose, with varying degrees of partnership and security, not only within the enterprise but also extending toward the edge of the enterprise and, more selectively, beyond it, to business partners.
In summary, the characteristics of API-based services provide the enterprise the ability to:
· Externalize Business Capabilities for ecosystem consumption
· Capitalize on and monetize Enterprise Assets outside the organization
· Extend value of Enterprise capabilities beyond the enterprise
· Secure access to internal enterprise resources and capabilities
· Enable channel-agnostic connectivity with business partners and consumers
Internally, the APIs will be part of the enterprise portfolio that can provide elimination of redundancy through reuse of capabilities as well as providing governance through consistency of created assets.
Externally, exposure of internal capabilities to the partner ecosystem will allow value-added services to enhance existing services provided, bringing in a new set of clients.
Pitfalls and Best-practices
Lack of alignment of an API to a clearly defined set of business goals can be detrimental to the adoption of the technology and capabilities behind the API.
APIs should not focus on time-sensitive technologies or merely on fulfilling user interface capabilities that access backend systems. An example of this is limiting the capabilities of an API to CRUD-based operations: create, read, update and delete. This severely limits the ability of the API to expand into higher levels of maturity across the application lifecycle.
Beware of exposing architectural decisions that stem from intra-organizational considerations that have, in turn, sprung up over time as a result of attempts to maintain the big ball of mud that constitutes the legacy systems developed over generations within the organization. One best practice is to mask the convoluted interdependencies on technologies, the architectural decisions made for expediency, legacy platforms, and the scars of partially successful mergers. The API should not expose such backend architectural decisions and implementation considerations, just as any good service in a service-oriented architecture would mask backend assumptions that have no rationale beyond the walls of the enterprise.
Secure a path of growth so your applications can start evolving from merely accessing backend legacy systems to performing transactions, initiating business processes and engaging in decision management.
Designing for the glass. Designing mobile applications is a current fad driven by the strong adoption of mobile devices. This can be a severely limiting factor if the application is designed from the glass inwards; in other words, consideration of the backend business processes must be taken into account along with considerations of usability at the screen level for the mobile device of choice.
The ability to orchestrate and compose APIs in order to provide a mediated or orchestrated experience of the business process is a critical success factor with the maturation and increased adoption of APIs within your ecosystem.
|
Context: Industry adoption of development, life-cycle and architectural styles such as SOA tend to be imported into organizations and then move from an evangelist, grass roots kind of effort into a next phase of organizational maturity which tends to impose rigid constraints on the life-cycle, standards, governance and development of the architectural style. SOA is an example.
Problem(s): High-level, holistic design and architectural work is needed to provide contractual visibility to clients. Architectural styles (OO, CBD, SOA, etc.) are co-mingled with the rigor of the top-down governance that is seen to be required, or they are sacrificed for a rapid, quick-and-dirty, "let's not look around the bend until we get to it" approach.
Forces: Developers tend to push back against the imposition of increasing accountability and governance in general, including standardized development practices. The organization, in an effort to be accountable to the business, requires increased tracking, metrics and accountability. Developers will continue to resist increased accountability due to the perceived imposition on their time, which they deem less valuable than the actual development of the product. Against this stand accountability and project visibility, allowing projections in order to fulfill "promises" (e.g., contracts, SLAs, etc.) with clients (whether internal, i.e., the business as a client, or external, as in other organizations to which services/products are provided).
(Re)solution: Combine light top-down governance and a standardized software development process/method that looks at the whole but does not require detail to the n-th degree prior to engaging in a project or program, coupled with agile, iterative sprints.
Consequences: High level visibility and accountability are attained with some compromise with the "let's just do it" approach. Development is not impeded with heavyweight constraints. Promises to the business or external clients have a reduced risk of non-completion associated with them.
|
SOA, as an architectural style and a discipline of software engineering, brings business and IT to the same table by providing a portfolio of business-intended services/capabilities as the common terminology that the business requires and IT will design, implement and maintain. SOA is focused on establishing a center of operations for design around a portfolio of services, equating to a set of business capabilities, that the business looks to deliver. Integration has become a strong focus for SOA as well. There are various types of services; integration and business services, to name just two categories.
Traditionally, Application Programming Interfaces, or APIs, were merely a library of interfaces through which components could access and communicate with each other. In an evolved definition that is becoming more and more prevalent, the notion of a RESTful API that is exposed outside an organization, and that may expose a set of services in the backend, is being promoted more widely, especially with the proliferation of interest in HTTP-based simple interfaces that can mobile-enable the subset of back-end functionality an enterprise chooses to expose.
Cloud services are services that are exposed externally and can be consumed by all; they are often implicitly assumed to be outside an organization's boundaries, just like an API or, more precisely, a RESTful API. Cloud services may span the gamut from Infrastructure as a Service (IaaS) all the way to Software as a Service (SaaS), which may expose very large-grained applications or smaller levels of granularity akin to the RESTful APIs we discussed above.
More detail on this later.
|