Frequently Asked Questions

For Business:

1. Is the Ikon Orchestration Platform a product that we buy to manage all our IT systems and programs?

No. Ikon is not a product that you plug and play into your existing IT landscape. Ikon is a platform, a framework, a toolkit and a way of working that is quite new. There are hundreds of finished software products in the market which each do a specific thing for some part of an organization; Ikon is not one of them. Instead, Ikon works with those products to link them, plug gaps, extend them, and build and replace them as they fall out of support, and much more. In addition, more mature organizations use Ikon to create automated workflows both within the organization and across external sources such as customers in a supply chain or the Web. Some like to call us an Enterprise iPaaS + Development Platform + Orchestrator, as that is the closest label they can find for where we are positioned. We like to think of ourselves as technology enablers.

2. How is data orchestration different from data management?

Data orchestration has become the new cornerstone of digital transformation. Since its beginnings in the move away from sequential processing in the 1980s, data management has grown to encompass governance, architecture, modelling, storage, security, warehousing, quality and more. With so many areas to manage, and hundreds of data management tools to co-ordinate, it has become imperative to unify all these data management agents within one framework. An orchestration platform, or layer, allows all the players in the data management ecosystem to work together seamlessly by conducting where, when and how they are used.

3. Is data orchestration necessary for process automation, and can the platform be used outside of my organization to bring in the data I need from external sources?

Yes. To automate a business process or workflow, one central platform must be able to access each applicable source to clean, filter and extract the data required. It does not matter whether the data is in a different language, in a Facebook post or sitting in JD Edwards software, SAP or even a Microsoft Excel spreadsheet. A good data orchestrator like Ikon can access all of these sources and extract the data required without affecting the source. Of course, you will need permission from your external business partners to enable data sharing and collaboration, but Ikon can easily reach outside your business environment to collect data, using software robots called probes.

4. Does Ikon provide orchestration of all data forms? And how long does it take to retrieve and activate the data?

The Ikon platform can access any type of data, in real time. Structured or unstructured. Normalized or denormalized. JavaScript, Scala, Python, R. With Hadoop and without. SAP, Microsoft, Oracle, ServiceNow. MySQL, MongoDB, Cassandra. RPA or API. Synchronous and asynchronous. The list is endless.
Whatever data you need and wherever it is, Ikon can access and visualize it in real time on custom dashboards.

5. Can Ikon also push data?

Yes. Ikon can push integrated data and updates back into the systems your business is running. If a workflow pulling data from various systems produces new insights or predicts an upcoming risk, it can send the new information back into any program as easily as it pulled the data in the first place, if that is required.

6. I keep hearing about flexibility and scalability around data management. How does this translate to our business needs?

Flexibility and scalability (together, agility) refer to how well technology adapts to your needs and over time. The problem with older technologies is that they are hard-coded; they cannot easily be changed. Think of a high-rise building designed many years ago: once it is built, it is extremely difficult to reconfigure or extend it as required.

The same can be said for technology. Older programs and systems may be patched or updated here and there, but eventually they grow too bulky and cumbersome and need to be rebuilt from scratch. Technology vendors stop supporting older versions, and the business faces the expense of transitioning from one version to the next, or perhaps moving to a new system altogether. With Ikon, anything connected to the platform or built on it can be remodeled, extended, removed or renovated to suit conditions, and in a very short time frame. Ikon is agile.

7. The Ikon platform promises speed. How is Ikon different from other solutions in this regard?

Other orchestrators entering the market are generally from big-name legacy vendors. These companies have been around for many years and need to protect their market leadership, but they are quickly running out of time. After years of huge investments to stay relevant, these vendors continue to acquire new businesses and bolt the new technology onto their own. Over time, the original technology accumulates extensions, and what remains is a conglomeration of solutions that work together in a slow and labored fashion.

The Ikon platform, on the other hand, is more like building blocks. Our customers can build what they need using only the pieces they require. And because there are so many pieces to choose from, they can build anything from a resource management tool to an automated workflow to a patch management tool, and solve problems from any area of the business.

8. Can the Ikon platform work as an ITSM tool?

Absolutely. Using your preferred best-practice methodology, Ikon enables complete service management of the business, bridging the gap between IT, Development and Operations. In fact, Ikon can not only replicate the big-name ITSM tools currently on the market but improve on them, at a fraction of the price.

9. Will Keross help us to decide what to do with the Ikon platform?

No. Keross offers the technology, Ikon, so that you can orchestrate your entire data landscape. We do not have the expertise in every industry to dictate where your bottlenecks may be, where there are inefficiencies, where you are exposed to risk or where the business needs to save money.

Many of our clients have an internal Data Officer to make sure data is flowing as it should to increase the bottom line by creating efficiency, or a Revenue Officer to increase top line growth by delivering new ways of making money using data. Other clients use external consultants to help them understand where they can improve and then come to us or to one of our channel partners to get started on specific projects.

10. How much does Ikon cost?

Like its technology, Keross is flexible in offering the right pricing depending on the needs of our customers. Before arriving at a price, we look at whether you require an on-premise instance, the number of admin and end users, whether you need individual robotic servers, and whether you will manage the development yourself or use our team. Of course, the more you work on the Ikon platform, the more economical it becomes. Unlike the old philosophy of purchasing more and more IT products for your business, you will in fact reduce the number of products needed as you migrate them onto Ikon and eventually replace them with your own versions: one fee for Ikon versus multiple fees for various programs that do not even integrate.

11. What analytics and visual dashboarding tools does Ikon offer?

The Ikon platform is proud to enable augmented analytics for your organization. Because it has access to all of your data in real time, it provides a true picture for business insights. Its advanced analytics can detect errors and anomalies, apply Machine Learning algorithms for deep learning, or predict and prescribe on a batch, request or event basis. All of this is presented on your own Ikon homepage, an individual dashboard you create with drag-and-drop functionality to show the parts that matter most to you.

12. Do all users have access to the same data?

No, unless that is what you want. Ikon is a true multi-tenant environment, meaning admin users control which data an end user can see and interact with. Some data, such as payroll information, may be sensitive, so the level of access will vary throughout your organization. Once a level of access is set, the end user can customize their own homepage and even collaborate with other users.

For IT:

13. In a situation where we need to connect to an on-premise database server, the IT governance of our customers may not allow submitting access credentials in code snippets to Probes. What can be done?

Ikon offers the following options:

  • Put the credentials directly in the script in plain text.
  • Provide the customer with a form with masked text boxes, where the customer enters the credentials and the probe script accesses them.
  • If the customer absolutely does not want the credentials to be submitted to Ikon, they may put them in a file on the local probe server (customer premises). The script reads the credentials from the file before making a connection to the database.
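The third option can be sketched as follows. This is a minimal illustration, not an Ikon API: the file path, JSON format and the `connect_to_db` helper are assumptions.

```python
import json

def load_credentials(path):
    """Read database credentials from a file on the local probe server.

    The credentials file stays on the customer premises; only the
    resulting database connection is used by the probe script.
    """
    with open(path) as f:
        creds = json.load(f)
    # Fail early if the file is incomplete.
    for key in ("host", "user", "password"):
        if key not in creds:
            raise ValueError("missing credential field: " + key)
    return creds

# Illustrative usage inside a probe script (connect_to_db is a placeholder):
# creds = load_credentials("/opt/probe/db_credentials.json")
# conn = connect_to_db(creds["host"], creds["user"], creds["password"])
```

Because the script only reads the file locally, the credentials never travel through Ikon itself.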

14. When executing applications in customer data centers that can be controlled from the outside world (loading and executing code), how do we access specific parts of the IT landscape while ensuring data security for our customers?

Ikon can help. Perhaps the customer prefers a regulated incoming connection to their premises rather than running an application (the probe) on-premise. Or they may want the whole Ikon ecosystem on-premise. We can introduce whichever Ikon architecture the customer is most comfortable with.

15. Scripts can grow quite large depending on the problem and the business logic they contain. Do the Ikon editor and the script file per probe allow good code management, sharing and code-history insights?

In fact, there is no single script file per probe, and there is a fully featured mechanism for maintaining code history. We are happy to show you during a demo.

16. Where exactly is the individual business logic sitting required to solve specific tasks, e.g. building average values over time series data and checking if they exceeded a defined threshold?

The business logic sits at “Hooks” during the runtime execution of the model; there are some twenty such hooks for the models. For example, we would build average values of time series data on the “Multipart data processor” hook, which is just a fancy name for the real-time data receiver for probes.

With regard to time-series analysis, in Ikon we would not do the downsampling in the business logic but instead delegate it to a time series database such as InfluxDB. We have a showcase for this specific scenario and would be happy to show you during a demo.
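As a hedged sketch of the kind of logic that could sit at such a hook, here is how building average values over time series data and checking a threshold might look; the function names and window size are ours for illustration, not part of the Ikon API.

```python
def window_averages(values, window):
    """Average consecutive fixed-size windows of a time series."""
    return [
        sum(values[i:i + window]) / len(values[i:i + window])
        for i in range(0, len(values), window)
    ]

def breaches(values, window, threshold):
    """Return the window averages that exceed a defined threshold."""
    return [avg for avg in window_averages(values, window) if avg > threshold]

# Readings as they might arrive at the "Multipart data processor" hook:
readings = [10, 12, 11, 30, 32, 31, 9, 8]
print(window_averages(readings, 3))  # [11.0, 31.0, 8.5]
print(breaches(readings, 3, 20))     # [31.0]
```

In production the heavy lifting (downsampling) would be delegated to the time series database, with the hook script only issuing the query and applying the threshold.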

17. How is the code, containing the business logic, deployed to Ikon?

There are no deployments. You change the code and press Ctrl-S in the online editor. On the next invocation the new code is executed, because all code in Ikon is run by script engines. Of course, in practice we will have separate development/UAT/production instances of Ikon.

18. Is there a built-in way for third-party systems (e.g. customer legacy systems) to push data to Ikon, so that probes do not have to be installed? And how can Ikon receive this data and apply business logic upon arrival?

Yes, the Ikon platform can receive data directly, or standalone “collectors” can be configured in probes (TCP, UDP, SOAP, REST or plain HTTP). We specifically introduced the notion of collectors in different protocols because many of our customers asked for a “port of entry” into Ikon rather than Ikon pulling data from them.
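As a rough illustration of the business logic a “port of entry” might apply upon arrival, here is a sketch that validates and tags pushed records; the payload shape and field names are assumptions, not an Ikon interface.

```python
import json

def on_arrival(raw_body):
    """Parse a pushed payload, drop malformed records, and tag the rest."""
    payload = json.loads(raw_body)
    records = payload if isinstance(payload, list) else [payload]
    accepted = []
    for rec in records:
        # Drop records missing the fields downstream logic relies on.
        if "id" not in rec or "value" not in rec:
            continue
        rec["source"] = "push"  # mark as pushed rather than probe-pulled
        accepted.append(rec)
    return accepted
```

The same handler could sit behind any of the collector protocols, since they all deliver a raw body to a script.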

19. What is your opinion on the use of Ikon for PoCs, both from an economic point of view and in terms of effort? Imagine a situation where a quick and cheap solution is required just to prove the case.

We welcome and encourage this approach, and believe it is the best way to demonstrate Ikon’s advantages. Because Ikon reduces build time by over 50% compared with older technologies, it is a very efficient way to develop use cases that demonstrate ROI.

20. Do you have authorisation functionalities built in (e.g. role management)?

Yes, and it’s completely visual and dynamic.

21. Do you have easy integration capabilities with OAuth providers?

We have integrated with SAML providers (AD Federation), and OAuth integration can be done just as easily. One thing to note: the user is the most central entity in Ikon, so we will have to create a user entity in Ikon, but it can be synced under the hood with an OAuth provider.

22. Can you show us an IoT use case with edge processing implemented?

Yes, we have a use case where we receive live location data from different devices via different protocols (HTTP from one set of devices and MQTT from another). The data receivers are configured in multiple probes. For this particular use case the probes do no computation but relay the data on to the server; they could very well do computations before sending it to the platform, as all of this is done in scripts.
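A probe-side normalization step for such a use case might look like the following sketch; the device payload shapes and field names are assumptions for illustration, not the actual device formats.

```python
def normalize_location(payload, protocol):
    """Map device-specific payloads onto one location schema before relay.

    Assumed shapes: HTTP devices send {"dev", "lat", "lon"}; MQTT devices
    send {"device_id", "position": [lat, lon]}.
    """
    if protocol == "http":
        return {"device": payload["dev"], "lat": payload["lat"], "lon": payload["lon"]}
    if protocol == "mqtt":
        lat, lon = payload["position"]
        return {"device": payload["device_id"], "lat": lat, "lon": lon}
    raise ValueError("unknown protocol: " + protocol)
```

Whether this runs in the probe (edge) or on the server is a per-use-case choice, since both ends execute scripts.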

23. Do you have any specific security and authorization model?

All communication with the Ikon platform has a single entry point, an Apache httpd server, which exposes REST/WebSocket endpoints over SSL.
Ikon supports role-based access control (“RBAC”) and service-oriented workflow access control (“SOWAC”), which is modeled per use case.
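In spirit, RBAC boils down to mapping roles to permitted actions. The roles and actions below are purely illustrative, not Ikon’s actual model.

```python
# Illustrative role-to-permission mapping; in Ikon this is modeled per use case.
ROLE_PERMISSIONS = {
    "admin": {"read", "write", "configure"},
    "end_user": {"read"},
}

def can(role, action):
    """Return True if the given role is permitted to perform the action."""
    return action in ROLE_PERMISSIONS.get(role, set())
```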

24. Do you have industrial data connectors/adapters (e.g. OPC UA, MQTT)?

An OPC UA connector would mean developing scripts in the platform using an OPC UA library (such as Unified Automation’s) and submitting the JAR files and the scripts. MQTT is already handled via receivers configured in probes, as in the IoT use case above.

25. Do you have any kind of knowledge graph / common object model / schema (data model integration)?

No. The schemas are use-case specific and modelled in the forms (UI components) as JSON, which automatically translates into MongoDB documents in the platform. Any special handling of the data is done with the help of one or more of the databases. For time series data, for example, we would connect to InfluxDB from the server-side scripts; but again, the schema is decided by the script.

26. Could you provide some information about query and true messaging pub/sub APIs (e.g. GraphQL, SparQL, Kafka)?

  • GraphQL: We do not support GraphQL directly as an interface, although we support Neo4j, which supports GraphQL natively. The query will, however, have to pass through the middleware (the server-side scripts in Ikon), which passes it on to the database.
  • SparQL: We have an integrated Apache Spark environment with probes. It is important to note that when we integrate a new technology stack with probes, it is a use-case implementation for us: the first use case is building the capability to interact with Apache Spark from Ikon applications; the second is the actual Ikon application that allows end users to interact with Spark environments.
  • Kafka: Again, we would upload the Kafka client JARs to the platform and execute scripts in a probe to consume/produce data with Kafka. Then, depending on the requirements, we would either process the messages in the probe itself or relay them on to the platform.
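The Kafka option can be sketched in Python for illustration (in practice the probe scripts would use the uploaded Kafka client libraries); the topic, broker address and size threshold are assumptions.

```python
# e.g. from kafka import KafkaConsumer  # if using the kafka-python client

def handle(message_value, process_locally, relay, max_local_bytes=1024):
    """Per-message routing: small payloads are processed in the probe,
    large ones are relayed to the platform untouched."""
    if len(message_value) <= max_local_bytes:
        return process_locally(message_value)
    return relay(message_value)

# Illustrative consumer loop (requires a running broker):
# consumer = KafkaConsumer("sensor-events", bootstrap_servers="broker:9092")
# for msg in consumer:
#     handle(msg.value, process_locally=summarize, relay=send_to_platform)
```

The decision between probe-side processing and relaying, as noted above, depends entirely on the requirements of the use case.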

27. Does any kind of app platform exist?

The application user interfaces created in Ikon are responsive HTML5 components, compatible with hand-held browsers. Specific apps can easily be created to interact with the Ikon platform.

28. Is PaaS with white labeling possible?

Yes. We have partnered with various organizations to develop solutions which they then brand as their own and sell as products. These organizations are using their domain expertise to productize parts of their service which can be sold over and over. Selling data tools specific to their industry means these businesses create a new revenue stream while freeing staff from repeating the same services and returning them to adding new value. Perhaps you have an idea for an industry-specific software program which you know your clients would buy again and again to augment the work you already do?

29. Is container orchestration possible (to run everywhere)?

We normally install the Ikon platform on multiple Linux/Windows servers to distribute the application server stack and database servers. The REST servers are node-based and can therefore be scaled horizontally by introducing more nodes, so we natively have some of the advantages of container-managed deployment.