The architect’s dilemma: high, low, or no-code?

Photo: Craiyon

For decades, IT architects and developers have dreamed of building fully configurable software that does not require coding. As far back as the 1980s, we dreamt about fifth-generation programming languages and model-based approaches to programming.

The recent arrival of concepts like Composable Business (see bit.ly/3GoJ0O8), which emphasizes implementing business support functions from modular, configurable building blocks, has only accelerated these trends. At the same time, new practices and tools – automation, cloud, DevOps, agile development, advanced IDEs (Integrated Development Environments), and last but not least generative AI (ChatGPT and similar) – have made coding much faster and more efficient than it was just a decade ago.

Although the development of no-code and low-code platforms has only accelerated since then, the high-code approach has been optimized simultaneously, and low-code and no-code still have a way to go before they can replace high-code completely. The question remains, though: what should we choose, the classical high-code approach, low-code, or no-code? Which is better? As usual, the answer is not trivial. It all depends.

Let’s start by clarifying what we mean by the low-code/no-code and high-code approaches:

  • Low-code (no-code) refers to software development platforms that allow users to create applications with minimal or no coding. These platforms often have a visual interface that enables users to build applications by dragging and dropping pre-built components or using predefined templates. Low-code platforms are designed to make it easy for non-technical users or business analysts to create applications without specialized programming knowledge. Examples include ServiceNow, Microsoft Dynamics 365, Salesforce, Microsoft Power Platform, Appian, and similar solutions.
  • High-code, on the other hand, refers to traditional software development approaches that require extensive coding in a programming language. High-code development typically involves writing code from scratch and using libraries and frameworks to build applications. It requires a strong understanding of programming concepts and greater technical expertise. Examples include all major programming languages, development stacks, and tools: Python, Java, C#, C++, IntelliJ, Eclipse, and thousands of associated tools for development and testing.
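To make the contrast concrete, here is a hypothetical approval flow written "high-code" style in plain Python; on a low-code platform, the same logic would typically be assembled visually from pre-built components. The workflow and thresholds are purely illustrative.

```python
# A hypothetical purchase-approval workflow, coded by hand ("high-code").
# On a low-code platform this routing would be configured visually instead.

def approval_workflow(request: dict) -> str:
    """Route a purchase request based on its amount."""
    if request["amount"] <= 1000:
        return "auto-approved"      # small purchases pass straight through
    if request["amount"] <= 10000:
        return "manager-approval"   # mid-sized purchases go to a manager
    return "finance-approval"       # large purchases go to finance

print(approval_workflow({"amount": 500}))    # → auto-approved
print(approval_workflow({"amount": 50000}))  # → finance-approval
```

The point is not the ten lines of code, but everything around them: with high-code you also own the testing, deployment, and maintenance that a platform would otherwise provide.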

As with every aspect of human activity, development in IT is a cyclical process. Every now and then a reevaluation happens: techniques and practices considered bleeding edge and modern must give way, while procedures once regarded as outdated return. Configurable off-the-shelf solutions, whether best-of-suite or best-of-breed, were the obvious no-code/low-code choice just a decade ago. However, a lot has changed around building and implementing IT solutions in the last decade, and the high-code approach has improved as well.

These improvements – such as Agile, CI/CD (continuous integration and continuous deployment), containerization, cloud, IaC (Infrastructure as Code), and DevOps – reduce the unit costs of IT systems development and increase fault tolerance. We can now create high-quality software in small increments, each often providing measurable business value, and we can roll back a faulty change at virtually no cost. Modern, ever-shortening business cycles favor fast point solutions, and all of modern IT engineering, from the cloud and delivery processes to architecture, is suited to such activities.

For us architects, this is a world of increasing challenges, but it is often more profitable to build custom software than to implement ready-made all-in-one suites. It is often cheaper to change software than to buy a configurable solution and change its configuration.

The economics of no-code/low-code platforms are unforgiving. Each configuration option adds an extra dimension of complexity to the solution that must be paid for today, whether or not we ever use it. Compared to tailored solutions built from actual needs and requirements, a configurable system contains several economically unnecessary and meaningless functions. Maintaining it also means an additional dimension of complexity in every analysis, test, and implementation.

Is there, therefore, any point in using no-code/low-code with all these drawbacks?

Low-code and no-code cannot replace coding completely. However, there are still several cases where no-code/low-code platforms are the best fit. That includes:

  • Non-technical users or business analysts can use low-code platforms to build applications that automate business processes or improve workflows. This concept is known as citizen development, and it can help organizations quickly respond to changing business needs without relying on IT teams.
  • Low-code platforms can be used to build simple applications quickly, which is helpful where there is a need for rapid application development, such as in startup environments or for small businesses.
  • Low-code platforms allow users to quickly build and test prototypes of an application without the need for extensive coding. Such low-code prototyping can help evaluate an idea’s feasibility or gather feedback from stakeholders.

Low-code and no-code are particularly well-suited for implementing standard functionality that is not specific to a particular business domain. Implementing systems and platforms for support functions like sales/CRM (Customer Relationship Management), HR (Human Resources), logistics, IT support, and IT infrastructure makes sense, as these areas often require a low level of customization. All kinds of enterprise systems, such as CRM, ERP (Enterprise Resource Planning), ITSM (IT Service Management), integration platforms, and basic infrastructure, are often where low-code and no-code are optimal. The same applies to systems and functions that are well standardized and used by several actors in the same industry, e.g.:

  • OSS (Operation Support Systems) 
  • BSS (Business Support Systems) solutions
  • OT (Operation Technology) systems for energy actors, travel planner systems for mobility actors
  • and so on.

On the other hand, functionality that is specific to your business and for which no standard solutions exist is often a target for tailored high-code implementation. Depending on the complexity and scale of the project, low-code and no-code platforms may not be suitable for large-scale or performance-critical applications, as they may not be able to handle the volume of data or the processing requirements. High-code approaches may be ideal for these projects, as they allow for more flexibility and control in the development process.

Low-code and no-code platforms may be suitable for organizations that do not have access to or cannot afford specialized programming resources, as they allow non-technical users to create applications without coding knowledge. On the other hand, high-code approaches may require a more significant investment in training and development resources, as they require a strong understanding of programming concepts.

Low-code and no-code platforms may also offer limited customization options and may not be able to integrate with other systems or technologies as seamlessly as high-code approaches. This can be a significant drawback for organizations that need to integrate their applications with other systems or have specific customization requirements.

Finally, low-code and no-code platforms may struggle to keep up with the rapid technological change and may become outdated or unsupported. High-code approaches may offer more flexibility and adaptability, allowing developers to customize and update their solutions precisely as required.

To summarize: the choice between low-code/no-code and high-code approaches depends on the specific needs and resources of the organization, as well as the complexity and scale of the project. While low-code and no-code platforms may be suitable for prototyping and testing ideas, creating simple applications quickly, or for non-technical users, they may be less useful for more complex or customized projects that require specialized programming skills. High-code approaches may be ideal for these projects, as they offer more flexibility and control in the development process. However, high-code methods may require a more significant investment in training and development resources, and may not suit organizations without access to specialized programming resources. There is no single answer to the question of what to choose; as usual, it all depends. In most organizations, we should expect to see low-code/no-code and high-code solutions side by side. Many organizations can reduce the use of high-code, tailored solutions to a minimum, but reducing high-code to zero, tempting as that dream may be, should never become a goal in itself.

Views expressed are my own.

The rise of agile integration architecture – from centralized SOA/ESB to distributed autonomous polyglot integration architecture

For over a decade, the omnipresent SOA architecture and ESBs were considered the state of the art in integration architecture, and there are still lots of organizations where an ESB is in use. If you still have an ESB as the main hub of your integration stack, it is probably time to start considering some newer options. The world has moved on, and “agile” has also reached integration architecture.

But before we look at what agile integration is, we need to take a broader look at integration architecture. An example is the reference architecture model shown below (based on an IBM Think 2018 presentation: http://ibm.biz/HybridIntRefArch).

Reference architecture for hybrid integrations

Integration architecture patterns are often divided into three main categories: synchronous, asynchronous, and batch integrations. Synchronous integrations are often implemented as HTTP/HTTPS or REST interfaces; asynchronous integrations are mostly different kinds of pub-sub or streaming integrations; and batch integrations, often referred to as ETL (Extract Transform Load) or, more recently, ELT (Extract Load Transform), are very commonly used in connection with data warehouses, various data platforms, and data lakes.
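As a rough illustration of the three categories, here is a minimal in-process Python sketch. The data and names are invented; real implementations would of course use an HTTP/REST client, a message broker, and an ETL tool respectively.

```python
import queue

# 1. Synchronous: the caller blocks and gets an answer immediately.
def get_customer(customer_id):
    return {"id": customer_id, "name": "Alice"}   # stand-in for a REST call

# 2. Asynchronous: the producer publishes and moves on; a consumer reads later.
events = queue.Queue()
events.put({"event": "order_created", "order_id": 42})

# 3. Batch (ETL): extract a whole dataset, transform it, load it elsewhere.
source_rows = [{"amount": "10"}, {"amount": "32"}]               # extract
warehouse = [{"amount": int(r["amount"])} for r in source_rows]  # transform + load

print(get_customer(7)["name"])              # synchronous answer
print(events.get()["event"])                # asynchronous, consumed later
print(sum(r["amount"] for r in warehouse))  # batch aggregate: 42
```

The key difference is where time and coupling sit: the synchronous caller waits, the asynchronous producer does not, and the batch job runs on its own schedule.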

With the advance of cloud technologies, integration architecture has also increasingly adopted the cloud as its execution environment, and two main streams seem to be emerging in how integrations are implemented in the cloud: either as native PaaS or as a “best of suite” iPaaS/iSaaS type of platform.

Native PaaS basically uses the basic components of one or more of the major PaaS platforms (AWS, Azure, Google Cloud). Here we talk about components like AWS API Gateway, AWS Kinesis, AWS SNS/SQS, AWS Step Functions, Azure API Management, Azure Service Bus, Azure Logic Apps, and so on. The “best of suite” iPaaS/iSaaS is basically a complete integration suite implemented as a SaaS service, e.g. Dell Boomi, Informatica, or MuleSoft, which often provide a set of adapters for different protocols.

Integration architecture has also evolved over the last decade from the infamous centralized SOA architecture and ESB to a more distributed architecture. This evolution has happened along, and affected, three different axes: people, architecture, and technology.

On the architecture and technology axes, as development becomes more and more autonomous, with cloud services, big data, and microservice-oriented architecture, as well as new ways of running software natively in the cloud or in containers, integration architecture has developed into a more distributed variant. The centralized ESB-like platforms are disappearing; integrations have become either point-to-point for synchronous integrations, or pub-sub and high-performance streaming for asynchronous integrations. The integration software itself has become more distributed and in some cases also runs in containers or natively in the cloud.
Finally, as integration is more distributed and often developed by separate autonomous teams, it is natural that different integrations are implemented using different technologies and programming languages, becoming what we call polyglot integrations.

Another consequence of this evolution in integration architecture is the set of changes affecting the people axis. With autonomous teams and distributed integrations, there is no longer a need for centralized integration teams, and integration resources are now spread over different teams. This also means that integration architecture becomes more of an abstract aspect that has to be taken care of in the organization, often without resources explicitly allocated for the task and often without clear ownership. This trend basically follows the same pattern as the other dimensions of enterprise architecture, including security and information architecture.

Integration architecture also follows another important trend: domain-driven design (DDD). DDD is another force that pushes integration architecture away from a centralized, layer-oriented architecture towards a more distributed one, with tighter integrations inside each domain and more loosely coupled integrations with other domains and external services. This makes it possible to reduce the complexity of long technical value chains with unnecessary transformations, increases the ownership of integration artifacts, and reduces the amount of overlapping data that pops up everywhere. Here is an example of Domain Centric Integration Architecture at DNB (presented at IBM Think Summit Oslo 2019).

Domain Centric Integration Architecture (DNB – IBM Think Summit Oslo 2019)

Process orientation is another important aspect, in particular when looking at digitalization, as process improvements and optimization are possibly the most important areas for driving any business to become more digital. Integrations therefore also need to become more process driven instead of only technology driven. However, the traditional, centralized integration platforms give little room for the adjustments and adaptations needed to facilitate changes in processes, making it difficult to tailor integrations to fit process improvements. As the choice of platform is often purely technology driven, once the platform is selected and implemented it is usually hard to adapt it to the actual process. If you are lucky, you may have a wide enough range of adapters and tools to fit your needs, but there is no guarantee of that.
Cloud-based “à la carte” integration platforms, where one can pick the most suitable integration components and only pay for the components in use and for the time they are in use, are therefore better suited for a process-driven integration approach.

Critics would, however, point out that with the rise of modern, distributed, autonomous, and polyglot integration platforms, we have lost some of the important capabilities that SOA and ESBs provided. Integrations are becoming more point-to-point, adding more complexity and increasing the “spaghetti factor”. There is no longer one place, one system, which hides the complexity and where you can look to see how your portfolio is integrated, with all its dependencies. In practice, this is not such a big issue and can be solved by documentation, reverse engineering, or self-discovery mechanisms, and there are several tools that make this task easier. The point-to-point challenge can also be alleviated, e.g. by using data lakes and data streaming mechanisms that reduce the need for direct point-to-point integrations, to mention just Sesam (https://sesam.io/) or Kafka (https://www.confluent.io/).
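To see why topic-based pub-sub reduces the “spaghetti factor”, consider this toy in-memory broker in Python (an illustrative sketch, not Kafka’s actual API): producers and consumers only know the topic, never each other, so adding a consumer does not add a new point-to-point link.

```python
from collections import defaultdict

class Broker:
    """A toy in-memory topic broker; real systems would use Kafka or similar."""

    def __init__(self):
        self.subscribers = defaultdict(list)  # topic -> list of handlers

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, message):
        # The producer addresses only the topic, never a specific consumer.
        for handler in self.subscribers[topic]:
            handler(message)

broker = Broker()
received = []
broker.subscribe("orders", received.append)   # e.g. CRM consumes orders
broker.subscribe("orders", lambda m: None)    # e.g. warehouse consumes too
broker.publish("orders", {"order_id": 1})     # one publish reaches both
print(received)  # → [{'order_id': 1}]
```

With N producers and M consumers, the topic gives N + M connections instead of the N × M links of a fully point-to-point design.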

On the other hand, one could point out that the new platforms no longer support several aspects of the traditional ESB VETRO pattern, which stands for Validate, Enrich, Transform, Route, and Operate (https://www.oreilly.com/library/view/enterprise-service-bus/0596006756/ch11.html).
This is somewhat correct; however, with distributed, containerized, and polyglot integrations it is relatively easy to implement all necessary validations, enrichments, and transformations. When it comes to routing, several components provide similar functionality in Azure (API Management) or AWS (API Gateway), and the Operate aspect is more a task for the autonomous DevOps team that operates the service with its integrations.
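As a sketch of how the first four VETRO steps can live in plain code in a distributed integration, rather than in ESB configuration, consider the following Python example; the message shape and routing rules are illustrative assumptions.

```python
# Validate, Enrich, Transform, Route as plain functions in an integration
# service. The message fields, segment lookup, and queue names are invented.

def validate(msg):
    assert "customer_id" in msg, "missing customer_id"
    return msg

def enrich(msg):
    return {**msg, "segment": "retail"}   # e.g. look up data from a registry

def transform(msg):
    # Map the canonical message to the target system's format.
    return {"custId": msg["customer_id"], "seg": msg["segment"]}

def route(msg):
    return "crm-queue" if msg["seg"] == "retail" else "erp-queue"

msg = transform(enrich(validate({"customer_id": 7})))
print(route(msg))  # → crm-queue
```

The fifth step, Operate, has no code here by design: in this model it is the responsibility of the DevOps team running the service, not of a middleware product.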

To summarize, integration architecture has undergone massive changes in several dimensions and evolved from the centralized SOA/ESB platform into a more distributed, autonomous, and polyglot architecture. This development has been catalyzed by underlying trends in IT development and architecture, in particular DevOps and autonomous teams, digitalization and process orientation, cloud, microservices, and containerization. The result is an integration architecture that is more flexible and adaptable, both to business needs and to the needs of the development organization itself: the rise of what we call agile integration architecture.

This work excluding photos and pictures is licensed under a Creative Commons Attribution 4.0 International License.

Capitalizing on the value of your data: get your basics in place first!

It has been a while since The Economist proclaimed that “data is the new oil” following the tremendous surge in profits of FAMGA – Facebook, Apple, Microsoft, Google, and Amazon. Businesses in all kinds of industries, from utilities to retail, followed and embarked on this new trend: hoarding vast amounts of data, strengthening their analytical teams, and looking for use cases that make it possible to extract value from data. As it turns out, however, this isn’t an easy task, especially for companies outside the typical IT sphere.

Photo: Shutterstock.com

It does not take long to realize that insights are never better than the underlying data, and it is slowly becoming obvious how crucial it is to have sufficient control over data quality and information governance in place.

But first things first: before you can improve data quality, you need to understand what data quality means. Data quality isn’t a single-dimensional feature. It is a broad term, often described by a number of dimensions, see e.g. the six dimensions of data quality worksheet:

  • completeness – data must be as complete as possible (close to 100%)
  • consistency/integrity – there should be no differences in the dataset when comparing two different representations of the same object
  • uniqueness – avoiding duplication of data
  • timeliness – whether information is available when it is expected and needed
  • validity/conformity – data is valid if it conforms to the syntax (format, type, range) of its definition
  • accuracy – how well the dataset represents the real world
  • traceability – whether it is possible to track the data’s origin and its changes

You will need to work with all of these dimensions. It isn’t enough to improve the completeness of the data if the data does not conform to the expected format.
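A minimal sketch of what measuring some of these dimensions can look like in practice: completeness, uniqueness, and validity computed over a toy customer dataset. The rows, the email rule, and the choice of dimensions are illustrative.

```python
import re

rows = [
    {"id": 1, "email": "a@example.com"},
    {"id": 2, "email": None},              # incomplete: missing email
    {"id": 2, "email": "b@example.com"},   # duplicate id
    {"id": 3, "email": "not-an-email"},    # invalid format
]

# completeness: share of rows where the field is filled in
completeness = sum(r["email"] is not None for r in rows) / len(rows)

# uniqueness: share of distinct keys among all rows
uniqueness = len({r["id"] for r in rows}) / len(rows)

# validity: share of rows whose email conforms to a (simplified) syntax rule
valid = [r for r in rows
         if r["email"] and re.match(r"[^@]+@[^@]+\.[^@]+", r["email"])]
validity = len(valid) / len(rows)

print(f"completeness={completeness:.2f} "
      f"uniqueness={uniqueness:.2f} validity={validity:.2f}")
# → completeness=0.75 uniqueness=0.75 validity=0.50
```

Even this toy example shows why one dimension is not enough: the dataset is 75% complete yet only 50% valid, and fixing one number says nothing about the other.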

Moreover, there are some profound implications for your organization. Although data quality is something the whole organization should focus on, it is natural to start with the teams that actively use the data, depend on its quality, and suffer most. This usually includes the customer-facing channels, user-facing interfaces, or data-warehouse teams, which are the first to observe and detect data quality issues. This is often the case when an information governance framework is missing or poorly implemented.

Consequently, an information governance framework becomes crucial to ensure sufficient control over data and data quality. Such a framework includes a set of principles, i.e. information governance principles, established and supported by the organization, as well as new roles to enforce them. Moreover, there is often a need to establish or strengthen the data culture, with a focus on data quality and the right mindset, to ensure that quality issues are corrected at the origin and not where they manifest themselves.

Photo: Shutterstock.com

The information governance organization itself can operate under a number of different models, e.g.:

  • IT driven
    IT takes care of everything: storage, processing, and the processes that secure high quality, structuring, and cataloguing of data
  • business driven
    IT only provides the storage infrastructure; the business is in charge of the processes that secure data quality
  • hybrid model
    IT driven in some domains, business driven where it makes most sense – probably the most pragmatic approach

The process of improving your information architecture and information governance framework isn’t that complicated, but it requires some effort and a huge amount of patience, as it is primarily an organizational and cultural change.
To improve the information governance framework, and as a result improve data quality, you will need to go through at least the following steps:

  1. Get an overview of the information architecture and create/improve data models
    You need to know the current state of affairs: what the most central information entities are, and how the information is modeled, used, and transferred between different parts of your organization.
  2. Get an overview of the pain points in data quality
    You need to know the actual data-related issues your organization currently experiences. Without proper insight, you are unable to improve data quality. Talk to the business and to people around you to gain enough insight into, and understanding of, the most critical data-related issues they deal with.
  3. Create an initial set of governance principles
    Establish the initial governance framework, first of all by creating and describing a set of principles for information architecture and enterprise information architecture, as well as principles for data analytics and advanced analytics. Get sufficient backing in the organization.
  4. Adjust the organization, creating new roles and responsibilities, including information owners, information stewards, data stewards, data scientists, and other roles (see e.g. IBM Redbook, IA governance)
  5. Finally, consider and introduce new tools and technologies for managing the information
    Depending on the results of the previous steps and the needs of your organization, you may need to consider new tools for better control of your master and reference data. The most obvious one is a Master Data Management (MDM) system, which makes it possible to reduce manual operations on master data, coordinate master data between different systems and keep it aligned, and detect any deviation from the data model.

Although it is very tempting to jump in and start implementing new, exciting use cases for AI/machine learning, the actual value of this technology depends completely on the underlying data quality and other aspects of information architecture. Data quality and proper information governance are the crucial basics. Without them, the vast amounts of data you spent so much effort gathering become not oil, but garbage with little value.

ArchiMate 3.0 – a modern modeling language for the digital age

Many IT architects aren’t big fans of modeling languages, model-driven development, and modeling tools. MS PowerPoint, Visio, or other drawing tools are far too often used as a surrogate for a more structured approach. However, communicating ideas clearly is crucial for an IT architect; not everything can easily be explained in words, and PowerPoint drawings are often too ambiguous in expression. Creating comprehensive diagrams and models that clearly express the ideas is still crucial for IT architects and developers to communicate within development teams. It is also crucial for efficient communication with other parties, including business stakeholders.

Photo: Shutterstock.com

For a long time, enterprise architects had no good alternative to UML. UML is good for low-level software modeling, in particular application architecture, but far less useful when communicating with the business. There was also BPMN, but it mainly covered process-related modeling and did not cover all the needs related to modeling strategy and tactics, or even the full breadth of business processes.

This was the situation until the arrival of ArchiMate in 2009. Based on IEEE 1471, developed at ABN AMRO, and introduced by The Open Group, ArchiMate defines three main layers: Business, Application, and Technology:

  • Business layer describes business processes, services, functions, and events, as well as the products and services offered to external customers
  • Application layer describes application services and components
  • Technology layer describes hardware, communication infrastructure, and system software

Those three layers provide a structured way of bridging the different perspectives from business to technology and infrastructure.
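One way to see how the layers bridge perspectives is to treat a model as data: elements live in layers, relations connect them, and a business process can then be traced down to the technology that supports it. The element names below are invented, and this tiny traversal is only an illustration, not part of the ArchiMate standard itself.

```python
# A tiny layered model: each element belongs to one ArchiMate core layer.
elements = {
    "Claim handling": "Business",      # a business process
    "Claims app": "Application",       # an application component
    "App server": "Technology",        # a technology node
}

# "serves" relations point upwards, from supporting element to supported one.
relations = [
    ("Claims app", "serves", "Claim handling"),
    ("App server", "serves", "Claims app"),
]

def supports(target):
    """Trace everything that directly or indirectly serves `target`."""
    direct = [src for src, rel, tgt in relations if tgt == target]
    return direct + [s for d in direct for s in supports(d)]

print(supports("Claim handling"))  # → ['Claims app', 'App server']
```

This is exactly the kind of cross-layer impact analysis ("which technology does this business process depend on?") that proper modeling tools perform over real ArchiMate models.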

However, the full model of ArchiMate 3.0 also brings or enhances another three very useful layers:

  • Strategy and Motivation layer – introduced in 2016 in ArchiMate 3.0 for modeling the capabilities of an organization; helps explain the impact of changes on the business (giving a better connection between strategic and tactical planning)
  • Implementation and Migration layer – supports modeling related to project, portfolio or program management
  • Physical layer – for modeling physical assets like factories

These last three layers are crucial for properly bridging the world of business with software and technology. In that sense, ArchiMate brings a new quality to modeling languages. ArchiMate 3.0 is also tightly aligned with TOGAF 9.1, which makes it even more suitable as a state-of-the-art modeling language.

A simple example of Strategy and Motivation layer modeling

Summing up, ArchiMate 3.0 brings several new capabilities and qualities to modeling, which makes it a great tool for the digital age, where we are not only supposed to model the software and technology itself, but where it becomes increasingly important to link business models, strategy, and tactics to the actual business processes and, finally, to applications and technology.

Creative Commons License

This work excluding photos is licensed under a Creative Commons Attribution 4.0 International License.

Will changing climate, market dynamics and digitalization transform power and utilities into bleeding edge IT champions?

There are few industries more traditional than power and utilities, and most likely nothing more common and less engaging than electrical power, so ubiquitous that we do not even notice its existence anymore. Like air and water, it is just there.

Electrical power has been universally available for decades, and while the tech-heavy telco sector is struggling to retain its margins, fighting the inevitable commodity, dumb-pipe fate, and is gradually forced to find new revenue streams through innovation, the traditional, commodity-driven power sector is forced to innovate for completely different reasons. The result is probably the biggest technology shift since the days of Nikola Tesla and Edison. Once traditional and archaic, the power producers, TSOs (Transmission System Operators), and DSOs (Distribution System Operators) are slowly becoming high-tech champions as they implement Smart Grids, the electrical networks of the future.

The underlying reason is really a combination of different trends. There is, of course, general technology development: IoT and cheaper, more available sensors provide data that was not easily accessible until now. It is also much easier to transfer large amounts of data; the steadily increasing capacity of WDM fiber technology and the availability of 4G coverage make it easy to send gigabytes of data from practically anywhere, while NB-IoT technology reduces power consumption, making it possible to deploy battery-driven sensors capable of sending data for multiple years. IP technology and convergence are also simplifying the traditional SCADA technology stacks, making sensor data more easily accessible. With more affordable storage, memory, and CPU power, and technologies like Hadoop, Spark, and in-memory databases, it is now possible to store petabytes of data and analyse it efficiently with both batch processing and streaming techniques.
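As a small illustration of stream processing on sensor data, here is a sliding-window average in plain Python, the kind of computation grid operators run at scale with Spark or similar streaming engines; the window size and readings are illustrative.

```python
from collections import deque

class SlidingAverage:
    """Average over the last `window` sensor readings as they stream in."""

    def __init__(self, window=3):
        self.values = deque(maxlen=window)  # old readings fall out automatically

    def add(self, reading):
        self.values.append(reading)
        return sum(self.values) / len(self.values)

avg = SlidingAverage(window=3)
for v in [10.0, 12.0, 14.0, 40.0]:   # the last reading is a sudden spike
    current = avg.add(v)
print(current)  # average over the last 3 readings: 22.0
```

The same windowed pattern underlies practical grid analytics, e.g. smoothing noisy measurements or flagging a reading that deviates sharply from its recent window.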

Photo: NicoElNino/Shutterstock.com

On the other hand, there is climate change and the shift to renewable energy: electric cars driven by rechargeable batteries or hydrogen, as well as plug-in hybrids, demand more electric power and increase consumption, first of all in peak hours. Wind and solar power are also very difficult to control, and changes in supply have to be quickly compensated by other energy sources like gas turbines. New AMS (Advanced Metering System) power meters provide new possibilities for more dynamic energy pricing: it is now possible to affect consumer behavior by changing the price and moving some of the peak load to times of the day with lower demand. With smart house technology, it will also soon be possible to control consumption and cut off water heaters or car chargers instantly. Moreover, with the use of technology it is easy for energy providers to predict energy price changes and gain bigger market share this way, which in turn puts pressure on the TSOs and regulators to develop much more comprehensive, real-time models to control the networks (e.g. the ENTSO-E Common Grid Model).

The result is that DSOs, TSOs, and producers are simply forced to transition into high-tech companies: using IoT to collect new streams of data that can be used to better predict the remaining lifetime of assets or to schedule repair and maintenance more precisely, using big data analytics to predict faults before they occur, and employing machine learning to analyse these huge quantities of data. All of this requires huge amounts of CPU power as well as flexibility and scalability, pushing the energy sector towards the cloud, big data (Spark and Hadoop), and more traditional ways of handling and analysing huge amounts of data like OSIsoft PI. Moreover, RDF triple stores are another technology that is becoming increasingly important for modeling the networks, analyzing, predicting, and planning capacity allocation, and managing congestion.

All of this is happening as we speak. Take the example of Fingrid and their newly completed ELVIS project, or look at the ENTSO-E Common Grid Model project and Statnett’s SAMBA project, which aims at optimizing asset maintenance, as well as AutoDIG, which automates fault analysis and condition monitoring. The Dutch Alliander is also known for heavy and successful use of advanced analytics.

One question still remains: is this just a short-lived phenomenon or a long-term trend, and will these trends be enough to transform power and utilities?

Creative Commons License

This work is licensed under a Creative Commons Attribution 4.0 International License.

Seven reasons for collaboration failure in offshoring projects

Although more and more Western, and in particular Scandinavian, companies choose digitalization and automation as the path to making their business more efficient and competitive, there are still quite a few that see offshoring as the best way to reduce their cost base and stay ahead of the competition. Collaboration pitfalls and barriers in offshoring projects are numerous. Here we explain a few of them, based on observations made in multiple offshoring projects, primarily between Scandinavia and South Asia.

Photo: Rafal Olechowski/Shutterstock.com

Differences in management models

The management models in Scandinavian and Asian cultures are very different. Scandinavian models are based mostly on consensus and put a lot of emphasis on collaborative decision making. In Asia, and in India in particular, the management model is very authoritative and hierarchical, and workers are normally not involved in decision making at all. The two models are inherently incompatible, and this incompatibility is the root cause of many collaboration failures.

Work-life balance

There are also several differences between Asian and Scandinavian workers when it comes to work-life balance. Asian workers tend to work much more, far exceeding the number of working hours usual for Scandinavian workers. This creates pressure and tension in offshoring projects, for instance when onshore Scandinavian workers receive questions from their offshore peers after working hours and feel stressed to respond straight away.

High turnover also has a certain effect on the work environment. Asian, and in particular Indian, workers change jobs every few months, while in Scandinavia it is common to work several years or even decades for the same company. In addition, Asian workers are expected to take care of their families and often take extended leaves due to family issues. Scandinavians, on the other hand, are often very loyal to their employer and can rely more on the social welfare system to take care of their families.

High turnover and absence also affect collaboration negatively: they make it harder to build good relationships at work, reduce trust and respect, and cause substantial overhead and waste due to frequent on-boarding and knowledge transfers.

Finally, the time difference between Scandinavia and South Asia is a well-known factor, although it plays a smaller role since the difference is limited (e.g. 4.5 hours between Norway and India). Offshore partners tend to internalize it and compensate by simply starting work later.

Tools and infrastructure

Efficient tools and infrastructure are often considered such a basic prerequisite that we are not even conscious of them anymore. Yet communication issues in offshoring projects are unfortunately still very common; poor telephone and internet lines and poor videoconferencing facilities in particular make collaboration very hard, and many businesses still struggle with them. In an offshoring project this becomes a top prerequisite: the offshore and onshore teams are so dependent on the infrastructure that it simply has to work as efficiently as possible.

Transfer and search barriers

Morten T. Hansen, in his book “Collaboration”, defines several collaboration barriers, in particular search and transfer barriers. Search barriers relate to not being able to find what you are looking for in the organization, while transfer barriers relate to not being able to work with people you do not know well.

Transfer barriers in offshoring projects are mainly caused by a knowledge transfer phase that is too short, combined with very high resource turnover. Offshore workers are often simply unable to acquire enough knowledge and understanding of the subject matter in the short time allocated to knowledge transfer, and the knowledge quickly evaporates as a consequence of high turnover. Search barriers, on the other hand, occur mostly due to insufficient understanding of the onshore organization; here turnover and knowledge transfer also play an important role. Poor understanding of the organization leads to inefficient communication and delays in involving the right people at both ends.

Collocation

Collocating workers in the same office space can remedy some of the drawbacks related to distance. However, even in this case one may end up with different subcultures and groups. Although people are collocated and sit close to each other, they may still tend to speak their native language instead of English. Instead of collocation helping communication, the two groups simply end up disturbing each other.

Lack of diversity

Scandinavian high-tech workplaces are often very homogeneous and dominated by natives. Expectations around use of the native language are often high, even though virtually all Scandinavian high-tech workers are fluent in English. This can contribute to a work environment that is not very open to non-native speakers and does not provide a good basis for efficient collaboration between offshore and onshore teams.

Cultural differences

Finally, culture itself is often a major obstacle to efficient collaboration. Cultural differences make communication hard, for example in non-verbal communication: the head gestures for yes and no in Indian culture may be completely different from those in Scandinavia. Another issue relates to the “try and fail” approach, a relatively common way of finding solutions in South Asia. Scandinavians, on the other hand, take a more analytical approach and require more evidence and data before even starting on a problem or task.

Moreover, Scandinavians are more cautious and reserved when assessing and reporting progress, while Asian contractors may be tempted to report more positively than reality warrants, as they fear the consequences of negative reports from their own management.

Finally, offshore workers may struggle to think independently and make decisions on their own, as they are constrained by their own hierarchy and management. In Scandinavia, a lack of independent thinking can be regarded as insufficient competence and creativity, which in turn reduces trust and respect and again affects collaboration negatively.

These were a few examples of why collaboration in offshoring projects may be challenging and even fail. In our next article we will look into how to address and mitigate these issues.


This work is licensed under a Creative Commons Attribution 4.0 International License.

Let it crash

Although the functional programming paradigm has become more and more broadly recognized, and interest in functional languages (Scala, F#, Erlang, Elixir, Haskell, Clojure, Mathematica and many others) has increased rapidly over the last few years, they still remain far from the position that mainstream languages like Java and .NET hold. Functional languages are predominantly declarative and based on the principles of avoiding changing state and eliminating side effects. Several of these languages and frameworks, like Scala/Akka and Erlang/OTP, also provide a new approach to handling concurrency: they avoid shared state and promote messaging/events as the means of communication and coordination between processes. As a consequence, they provide frameworks based on actors and lightweight processes.

Fail-fast, on the other hand, is an important system design paradigm that helps avoid flawed processing in mission-critical systems. Fail-fast makes it easier to find the root cause of a failure, but it also requires that the system is built in a fault-tolerant way and is able to recover from failures automatically.

Fail-fast combined with lightweight processes brings us to the “Let it crash” paradigm, which takes fail-fast even further. A “Let it crash” system is not only built to detect and handle errors and exceptions early, but also built on the assumption that only the main flow of processing really counts and is the only one that should be implemented and handled. There is little purpose in programming defensively, i.e. in attempting to identify all possible fault scenarios upfront. As a programmer, you need only focus on the most probable scenarios and the most likely exceptional flows. Any other hypothetical flows are not worth spending time on and should lead to a crash and recovery instead. “Let it crash” focuses on functionality first, and in this way supports modern Lean and Agile development paradigms very well.
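The core of the paradigm can be sketched in a few lines of illustrative Python (Erlang/OTP and Akka provide far richer, production-grade versions of this): a worker raises on any unexpected condition, and a supervisor restarts it instead of the worker defending against every fault itself.

```python
def supervise(worker, max_restarts=3):
    """Restart the worker whenever it crashes, instead of making the
    worker defend against every conceivable fault itself."""
    restarts = 0
    while True:
        try:
            return worker()
        except Exception as exc:
            restarts += 1
            if restarts > max_restarts:
                raise  # escalate to the next supervision level
            print(f"worker crashed ({exc!r}), restart {restarts}/{max_restarts}")

attempts = 0

def flaky_worker():
    """Crashes twice with a transient fault, then succeeds."""
    global attempts
    attempts += 1
    if attempts < 3:
        raise RuntimeError("transient fault")  # no defensive handling here
    return "done"

print(supervise(flaky_worker))  # crashes twice, then prints "done"
```

The worker contains no error handling at all for the transient fault; the recovery policy lives entirely in the supervisor, which is exactly the separation of concerns the paradigm advocates.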

As Joe Armstrong states in his Ph.D. thesis: if you canʼt do what you want to do, die. You should not program defensively but offensively. Instead of trying to cover all possible fault scenarios – just “Let it crash”.

Photo: Pexels

However, recovery from a fault always takes some time (seconds or even minutes), and not all languages and systems are designed to handle this kind of behavior; in particular, “Let it crash” is hard to achieve in C++ or Java. Recovery needs to be fast and unnoticed by the processes that are not directly involved in it. This is where functional languages and actor frameworks come into the picture. Languages like Scala/Akka or Erlang/OTP provide actor frameworks, making it possible to handle many thousands of lightweight processes on a single machine, as opposed to hundreds of OS processes. Thousands of lightweight processes make it possible to isolate the processing related to a single user or subscriber of the system. It is thus cheaper to let a process crash, and it recovers faster as well.

“Let it crash” is also naturally easier to implement in a dynamically typed language (e.g. Erlang). The main reason for this is error handling and how hard it is to redesign exception handling once it is implemented. Typed languages can be quite constraining when combined with the “Let it crash” paradigm; in particular, it is rather hard to change an unchecked exception into a checked exception, and vice versa, once you have designed your Java class.

Finally, “Let it crash” also implies that a sufficient framework for recovery exists. In particular, Erlang and OTP (Open Telecom Platform) provide the concept of a supervisor and various scenarios for the recovery of whole process trees. This kind of framework makes implementing “Let it crash” much simpler by providing a foolproof, out-of-the-box recovery scheme for your system.

There are also other benefits of the “Let it crash” approach. As each end-user or subscriber of your system is now represented by a single process, you can easily adopt advanced models such as finite state machines. Even though they are not specific to Erlang or Scala, finite state machines are quite useful for understanding what led to a failure once your system fails. Finite state machines combined with a “Let it crash” framework can be very effective for fault analysis and fault correction.
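As a rough illustration (the states and events below are made up), a per-subscriber finite state machine might look like this; note how an unexpected event simply crashes the process, leaving an event history behind for fault analysis:

```python
# Transition table for a tiny call-handling FSM: (state, event) -> new state.
TRANSITIONS = {
    ("idle", "dial"): "ringing",
    ("ringing", "answer"): "connected",
    ("ringing", "hangup"): "idle",
    ("connected", "hangup"): "idle",
}

class SubscriberFSM:
    """One instance per subscriber, mirroring one lightweight process each."""

    def __init__(self):
        self.state = "idle"
        self.history = []  # the event trail that explains any later crash

    def handle(self, event):
        key = (self.state, event)
        if key not in TRANSITIONS:
            # Unexpected event: crash rather than guess. A supervisor can
            # restart the process, and the history shows what led here.
            raise RuntimeError(f"no transition for {key}, history={self.history}")
        self.history.append(event)
        self.state = TRANSITIONS[key]

fsm = SubscriberFSM()
fsm.handle("dial")
fsm.handle("answer")
print(fsm.state)  # connected
```

Because each subscriber has its own FSM, a crash affects exactly one subscriber, and the recorded history is a ready-made input for fault analysis.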

Although very powerful and sophisticated, “Let it crash” has unfortunately not yet gained much attention outside Scala/Akka and Erlang/OTP. The reasons are many: on the one hand, the very specific and tough requirements on programming languages and platforms explained above; on the other, the fact that only mission-critical systems really require this level of fault tolerance. In the case of classic, less critical business systems, the fault-tolerance requirements are not significant enough to justify the use of a niche technology like Erlang or Scala/Akka.

“Perfect is the enemy of good”, and mainstream languages like Java or .NET win the game again, even though they are inferior when it comes to fault tolerance and supporting the “Let it crash” approach.


This work excluding photos is licensed under a Creative Commons Attribution 4.0 International License.

A successful dumb pipe strategy – learning from Iliad

Faced with the inevitable dumb-pipe fate and the commoditization of the telco business over the last couple of years, several large telco operators have started pursuing vertical and horizontal integration strategies: horizontal, by expanding into other markets or consolidating; vertical, by expanding either into the downstream side of their value chain or into other verticals, in this way reinventing their business.
Many newcomers, on the other hand, do not fear becoming a dumb pipe as much and take a slightly different path: optimizing their cost base, challenging established operators and in this way fighting for market share.

Photo: Pexels.com

France’s Iliad, with its Free/Alice brands, is currently the 2nd-largest broadband operator and 3rd-largest mobile operator in France, challenging the old incumbent Orange (France Telecom). Both Iliad and Orange provide a wide range of bundled services to their customers (triple/quadruple play), ranging from FTTH, VDSL, ADSL, landline and mobile telephony to TV. The other two major competitors, SFR and Bouygues, use a similar bundling strategy to provide the full quadruple-play spectrum of services, resulting in a lower total cost for customers as well as increased ARPU for the operator.

Iliad’s network investment strategy was initially focused on broadband Internet access services (ADSL and later FTTH), and its main revenues originate from fiber/ADSL. Iliad does however also provide other services, primarily TV and VoD, most of them bundled and free of additional charge to subscribers. In 2012 Iliad further strengthened its market share by acquiring a small ADSL operator, Alice, from Telecom Italia, and this year it is trying to acquire assets from CK Hutchison and VimpelCom to become Italy’s fourth mobile operator.

The strategy that gave Iliad its foothold in the telecommunications market was originally to use unbundled ADSL access from the incumbent France Telecom. By doing so, Iliad could achieve much higher margins than competitors like Alice, who did not have the same focus on unbundled ADSL and relied on much more expensive, non-unbundled ADSL access.

High market prices and high margins on ADSL made it possible for Iliad to provide their bundled VoIP service practically for free (free local and long-distance calls) as well as hundreds of free TV-over-ADSL channels. The only services Iliad charged customers for in addition to ADSL were basically certain international calls, paid channels, VoD over ADSL, and subscription VoD. Consequently, with its low cost structure Iliad was able to provide the most affordable basic ADSL-VoIP-TV bundle on the French market without compromising ADSL performance. This, combined with a comprehensive offer on the content side, is basically what made Iliad so successful in the tough French telco market.

Looking further at Iliad’s fiber investments: by using the sewers, Iliad managed to bring the cost of fiber in Paris down to a much lower level, even comparable with ADSL. Iliad’s investment in FTTH is important for the future, both to overcome the limitations of VDSL/ADSL and because competitors have FTTH and FTTN/VDSL in their product portfolios. By making FTTH investments at much lower cost than the competition through reuse of the sewers, Iliad gained a competitive advantage that neither Orange, SFR nor Bouygues had. This is in a way an extension of its ADSL strategy, which also concentrated primarily on bringing ADSL costs down rather than just on achieving economies of scope. Iliad has since used a similar strategy to expand into the mobile market.

Summing up: by making large infrastructure investments at lower cost than the competition, Iliad hopes to beat competitors on price/performance and gain a larger market share. Lower costs and higher margins make it possible to bundle basic VoIP and TV services at no additional charge, and Iliad’s hope is that it can earn money later on advanced services.

Although very successful so far, Iliad’s low-cost/dumb-pipe strategy also has some weaknesses, e.g. a lack of control over content, which might cause the strategy to fall apart in the face of a strongly vertically integrated competitor. Without creating a compelling value proposition for subscribers, e.g. in the form of an ecosystem, and getting more control over content, Iliad is bound in the long run to compete basically on price. Bundling more and more services at a lower price than the competition is a way of staying one step ahead, but it may not be sufficient without more control over content. On the other hand, there is hope, since new technologies like IoT are expected to provide new growth opportunities also for dumb-pipe operators.


This work excluding photos is licensed under a Creative Commons Attribution 4.0 International License.

Big Data in the cloud – avoiding cloud lock-in

In our previous article we looked at different approaches to introducing Big Data technology in your business – either as a generic or a specific solution, deployed on-premise or in the cloud. The cloud obviously gives very good flexibility when it comes to experimenting in the early stages, when you need quick iterations of trying and failing before you find the use case and solution that fit your business needs best.

Photo: Pexels

Cloud lock-in

However, in your try-and-fail iterations you need to be careful not to fall into another pitfall: cloud vendor lock-in, or simply cloud lock-in. By cloud lock-in we mean depending on vendor-specific implementations that only a particular cloud provider offers; good examples are Amazon Kinesis and Google BigQuery. Using this specialized functionality may seem like a quick way of implementing and delivering business value, but if your cloud provider chooses to phase out support for it, your business may be forced to reimplement part or all of the system that depends on it. A good strategy against lock-in is particularly important for established businesses, while for startups with a relatively thin software stack it isn’t such a big deal, since the switching costs are usually still low.

Open source to the rescue

Open source software has a great track record of providing good solutions that reduce vendor lock-in, and it has helped fight lock-in for decades; within operating systems in particular, Linux has played an important role. Taking this into the Big Data world, it does not take long to see that automation – and open source automation tools in particular – plays an important role in avoiding cloud lock-in, for instance by deploying and running the same complete Big Data stack both on-premise and in the cloud.
Using automation tools like Chef, Puppet or Ansible Tower is one strategy for avoiding vendor lock-in and moving quickly between cloud providers. Container technologies like Docker or OpenShift Containers also make it possible to deploy the same Big Data stack – be it Hortonworks, Cloudera or MapR – across different cloud providers, making it easier to swap providers or even run multiple cloud setups in parallel to diversify operational risk.
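Beyond infrastructure automation, the same principle can be applied in application code: keep vendor-specific calls behind a thin interface of your own, so that switching providers touches one adapter rather than the whole codebase. A minimal sketch (the class and method names are illustrative, not any real cloud SDK):

```python
from abc import ABC, abstractmethod

class ObjectStore(ABC):
    """The only storage API the rest of the codebase is allowed to see."""

    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...

class InMemoryStore(ObjectStore):
    """Stand-in used here for illustration; real adapters would wrap
    a vendor SDK (S3, GCS, Azure Blob, ...) behind the same interface."""

    def __init__(self):
        self._blobs = {}

    def put(self, key, data):
        self._blobs[key] = data

    def get(self, key):
        return self._blobs[key]

def archive_report(store: ObjectStore, name: str, content: bytes):
    # Business logic depends only on the abstract interface,
    # never on a specific cloud provider's client library.
    store.put(f"reports/{name}", content)

store = InMemoryStore()
archive_report(store, "q3.csv", b"revenue,42")
print(store.get("reports/q3.csv"))  # b'revenue,42'
```

Moving to another provider then means writing one new adapter class, while `archive_report` and everything above it stays unchanged.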

What about Open Source lock-in?

Listening to Paul Cormier at RedHat Forum 2016 (Oslo) last week, one could quickly get the impression that cloud lock-in can simply be avoided by adopting open source tools like Ansible Tower or OpenShift Containers. These solutions effectively help turn the IaaS and PaaS resources offered by the big three (Amazon, Google and Microsoft) and other cloud providers into a commodity. Critics of open source, on the other hand, could say that by using this kind of solution you actually enter another kind of lock-in. However, the immense success of open source software over the last 15 years shows that lock-in with an open source system is at most hypothetical: it is easy to find a similar alternative or, in the absolute worst case, to maintain the software yourself. Open source, by its very nature of being open, brings down barriers to competitive advantage, and new ideas and features can easily be copied by anyone, anywhere, in almost no time.

This work excluding photos is licensed under a Creative Commons Attribution 4.0 International License.

Big Data solution – generic or specific, cloud or on-premise?

As Big Data becomes more and more popular and more and more options become available, selecting Big Data technology for your business can become a real headache. The number of different stacks and tools is huge, ranging from pure Hadoop and Hortonworks to more proprietary solutions from Microsoft, IBM or Google, and the number of proprietary solutions keeps increasing at a huge rate. As if this weren’t enough, you also need to choose between an on-premise installation and a cloud solution. Here we sum up a few strategies for introducing Big Data in your business.

One of the first questions you will meet when looking into possibilities of using Big Data for your business is if you should build a generic platform or a solution for specific needs.

Photo: Vasin Lee/Shutterstock.com

Building for specific needs

In many businesses, if you follow internal processes and project frameworks, you will intuitively ask yourself what purpose or use case you want to support with Big Data technology. This approach may seem correct, but unfortunately there are a number of pitfalls here.

First of all, by building a platform only for specific needs and specific use cases, you will most likely choose a very limited product that only mimics some features of a full-blown implementation. Examples are classical, old-fashioned analytical platforms such as a data warehouse, statistical tools, or even a plain old relational database. This will be sufficient for implementing your use case, but as soon as you try to reuse it for another one, you will run into its limitations: you need to decide the structure of the stored data before you start collecting it, you need to transform the data to adapt it to each new use case, and you face scale-up issues every time the data volume increases and your data warehouse or relational database is unable to keep up with the volume and velocity of the data. In other words, you will largely limit your flexibility and your ability to explore your data.
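The limitation of deciding data structure upfront can be illustrated in miniature: with a raw event store you keep records as they arrive and project a structure per use case at read time (the fields below are made up):

```python
import json

# Raw events are stored as-is, with no upfront schema decision.
raw_events = [
    json.dumps({"ts": 1, "user": "a", "action": "login"}),
    json.dumps({"ts": 2, "user": "b", "action": "purchase", "amount": 9.5}),
    json.dumps({"ts": 3, "user": "a", "action": "purchase", "amount": 4.0}),
]

def project(events, fields):
    """Schema-on-read: each use case extracts only the fields it needs."""
    return [
        {f: rec.get(f) for f in fields}
        for rec in map(json.loads, events)
    ]

# A new revenue use case needs different fields than the original login
# report; no table migration is required, just a different projection.
revenue = [r["amount"]
           for r in project(raw_events, ["action", "amount"])
           if r["action"] == "purchase"]
print(sum(revenue))  # 13.5
```

A relational schema designed for the login report would have forced a migration before the revenue question could even be asked; keeping the raw events defers that decision to read time.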

A solution implemented for specific needs is in practice not really a Big Data solution, even if your vendor insists on calling it that – it is just a Small Data solution. It may still be a viable choice for your business as long as you do not have bigger ambitions or expectations for the future. However, by introducing more and more solutions like this, you will ultimately fragment and disperse your business data into multiple loosely connected systems, and the more fragmentation there is, the more difficult it gets to analyze data across your business.

Build a generic platform

Building a generic platform is much harder, but it might be the right thing to do. It requires courage to build a solution and start collecting data, often without an adequate use case to begin with. This is often difficult to advocate for; it is a leap of faith, or a bet, that your business needs to take. However, if you really want to unleash the power of Big Data, this is the strategy that will potentially give you both the flexibility to explore your data and the ability to conduct experiments and find new facts, information and ways to use it for your business. A platform based on open Big Data technology like Hadoop will also be easier to scale when needed, as the volume and velocity of data increase.

The second very basic question you will meet is where to deploy and establish your platform: cloud or on-premise? Although this question may seem unrelated to the first, it is important to be aware of the implications of your deployment strategy.

On-premise platform

Choosing an on-premise platform seems like a natural choice for many established businesses with in-house IT operations. However, as soon as you choose to build a generic platform, you will quickly realize that you need to experiment, since the number of different Big Data stacks, technologies and tools is extreme. You need to be able to change quickly from one solution to another without too much lead time and waste, and that may be hard once you have invested heavily in an expensive proprietary on-premise platform like Oracle Big Data Appliance or IBM BigInsights. Such a platform also requires people with a rather specific skill set to maintain it.

Cloud platform

A cloud-based Big Data platform like Amazon EMR, Google Cloud Platform or Microsoft Azure provides the flexibility and agility that are crucial when starting to experiment with Big Data. If you want to focus on what matters most, you will concentrate on the core of your business: setting up hardware, installing Hadoop and running basic Big Data infrastructure is not what most businesses need to focus on or should prioritize.

The cloud platform is especially relevant in the first, exploratory phase, when you are still unsure what to use the technology for. After this phase, when your solution has stabilized, you may reconsider in-sourcing the operation of your Big Data technologies; in most cases, however, you will still want to keep the flexibility of the cloud.

Summary

All in all, the best strategy is a platform that is open and flexible enough to cover future cases; do not build your Big Data solution just for current needs. This is one of those cases where you actually need to concentrate more on technology and capabilities, and not only on current, short-term business needs.


This work excluding photos is licensed under a Creative Commons Attribution 4.0 International License.