
· 5 min read
Alvaro Jose

In our previous installments, we discussed the smells that can happen when splitting microservices, and the strategies that exist to make them as independent as possible. But how do we define boundaries? How do we define the process that our microservice is in charge of?

Event Storming

Event storming is a technique that is part of Domain-Driven Design (DDD). But what is event storming? The definition on Wikipedia is:

A workshop-based method to quickly find out what is happening in the domain of a software program. The business process is "stormed out" as a series of domain events.

This process is run with stickies on a physical or digital board during a session, and requires the 'experts' on the process to be present to provide the context: what, who and how. The outcome is an understanding of the business process, not the technical one, so that it can be separated into different steps with clear responsibilities.

Step-By-Step Guide

Let's walk through an example of how a company sets up our internet connection.

Prepare a board and the people for the session

Event storming requires people to share a common view, brainstorm, and discuss it. The process takes time into account as a dimension, and has multiple types of stickies that can be used.
You can see an example board on the next image:

Regarding the stickies, their color represents a specific meaning[1]:

  • Events (orange): Represent the factual events and anything that is relevant to a domain expert.
  • Commands (blue): These are requests to do something. They can originate from a user or a system, or be triggered by another event.
  • System (pink): These represent systems involved in the domain. They may issue commands or receive commands along with triggering events.
  • User (yellow): These are human users involved in the process. They may be a single person or a department/team.
  • Aggregate (tan): This is the first level of categorization and can be thought of as the “thing” that a group of events operates on.
  • Read Model (green): This represents data that may be critical for a user or system to make a decision.
  • Policy (gray): These represent standards or rules that may need to be executed, such as rules for a compliance policy.

Define the Events of your system

Events are the most important information on our board. They represent facts regarding the process and help encapsulate the knowledge of the 'experts'.
As we mentioned before, time is a significant dimension; a process always happens over a period of time. Organizing these 'things' that happen into a timeline is a good way to start.

In our example, as you can see in the previous image, we go from checking coverage, to creating a user, to creating a contract and connecting our user to the network.

Identify the Systems involved (Optional)

The intent of this step is to identify the existing systems and their interdependency. When we discuss systems, they can be internal or external.

In our example, it all starts with the website, but soon enough it becomes apparent that most of the process is taken care of by the monolith.

This step is optional in case you have a greenfield project. Nevertheless, I highly recommend it if you are splitting a monolith.

Add the Actors

These are the real people who are part of the process. They tend to be the starting point of a chain of events or, in a manual process we are trying to automate, the executors of the individual steps.


In our case, the user is the one starting the process, but a technician needs to do the last steps manually.

Connect the dots with Commands

Now we are left with events that are performed by someone and take effect in parts of our system. But we are still missing the cause and effect that made things look this way.

Commands provide exactly this: a command is a specific action or decision that pushes our system in a certain direction.

Commands can be positive or negative actions, causing bifurcation and showing different cases that our system needs to cope with.

Define Bounded Context

Now we are left to define where each of the sub-processes that make up our system starts and ends. This is done by grouping the stickies within an enclosure and giving it a noun + verb name, as it is a sub-process and it evokes action.

Now you have a set of split actions that can become their own microservices and provide their part of the process independently.

Create Capabilities Matrix (Optional)

Now, with the bounded contexts, we can start defining the capabilities of our services. This is straightforward to express in a matrix.

Context: Capabilities

  • Network Management: Check coverage, Enable network, 3rd party hardware management integration
  • User Management: Create user, User email verification
  • Contract Management: Create contract, User email verification, 3rd party digital signature integration

Devise your Goal Architecture (Optional)

Knowing our current architecture, it's good to think about where we want to go.
This is not only a technical challenge, but also an organizational one, due to Conway's law. If we want to be successful in splitting a monolith, our communication structure, meaning the structure of the teams involved, needs to resemble this target state.

Define a plan on how to split the Monolith (Optional)

A change as big as the one shown in the previous image can be overwhelming for an organization and create paralysis and doubts. It's always good to split the problem into steps, so we can understand progress and always be in a better state than before. This will improve morale.

[1] https://www.capitalone.com/tech/software-engineering/event-storming-for-microservice-architecture/

· 3 min read
Alvaro Jose

In the previous installment of this series, we discussed the pitfalls that can happen when we split a monolith into microservices. Specifically, we talked about creating what are called microliths.

Given that you have followed the recommendations and designed your domains correctly, today we are going to elaborate on patterns to remove that synchronous communication between 'microservices'. This will help our services become more resilient.

The Patterns

Circuit Breakers

The simplest solution we can go for is called a circuit breaker. As the name implies, it is just a piece of code that, after multiple failed requests to a downstream service, will fail silently and allow our service to resume its normal behavior.
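
To make the idea concrete, here is a minimal circuit breaker sketch in TypeScript. The class, thresholds and downstream URL are illustrative assumptions, not a specific library's API; production implementations add a half-open state and metrics, but the principle is the same.

// Minimal circuit breaker sketch: after `maxFailures` consecutive failures the
// circuit opens and calls fail silently with a fallback until a timeout passes.
class CircuitBreaker {
  private failures = 0;
  private openedAt = 0;

  constructor(
    private readonly maxFailures = 3,         // failures before the circuit opens
    private readonly resetTimeoutMs = 10_000  // how long the circuit stays open
  ) {}

  async call<T>(fn: () => Promise<T>, fallback: T): Promise<T> {
    if (this.isOpen()) return fallback;       // fail silently while open
    try {
      const result = await fn();
      this.failures = 0;                      // a success closes the circuit
      return result;
    } catch {
      this.failures += 1;
      if (this.failures >= this.maxFailures) this.openedAt = Date.now();
      return fallback;
    }
  }

  private isOpen(): boolean {
    return this.failures >= this.maxFailures &&
      Date.now() - this.openedAt < this.resetTimeoutMs;
  }
}

// Usage: wrap the call to a downstream service and keep answering with a
// degraded default while that service is failing.
const breaker = new CircuitBreaker();

async function getProfile(id: number) {
  return breaker.call(
    () => fetch(`https://downstream.example/profiles/${id}`).then(r => r.json()),
    { id, name: 'unknown' }
  );
}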

What are we solving and what are we letting unsolved:

  • ✔️ We don’t fail continuously if some other service fails.
  • ❌ We silently don’t finish the entire process requested.
  • We require the whole chain of dependencies to be called.
  • ❌ We force other services to scale to our needs.
  • ❌ Data is mutable, so errors will be propagated and not solvable.

Outbox Pattern

The next level in solving our microlithic issue is to decouple our services using Pub/Sub to exchange models between services.
Our service will consume and store the information necessary to run its process locally, and will broadcast the outcome models. This means there will always be strong consistency in the outbox, and eventual consistency in the service database (if it exists).
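
As an illustration, here is a minimal outbox sketch in TypeScript; the repository, transaction and broker interfaces are hypothetical placeholders, not a concrete framework.

import { randomUUID } from 'node:crypto';

// The service stores its state change AND the outgoing event in the same local
// transaction, so the outbox is strongly consistent with the service's data.
interface OutboxRecord { id: string; topic: string; payload: unknown }

interface Tx {
  insertContract(contract: { id: string; userId: string }): Promise<void>;
  insertOutbox(record: OutboxRecord): Promise<void>;
}

interface Db { transaction<T>(fn: (tx: Tx) => Promise<T>): Promise<T> }

async function createContract(db: Db, userId: string): Promise<void> {
  await db.transaction(async (tx) => {
    const contractId = randomUUID();
    await tx.insertContract({ id: contractId, userId });
    await tx.insertOutbox({
      id: randomUUID(),
      topic: 'contract.created',
      payload: { contractId, userId },
    });
  });
}

// A relay (a polling job or change-data-capture) publishes pending outbox rows
// to the broker; consumers of 'contract.created' become eventually consistent.
interface Outbox {
  pending(): Promise<OutboxRecord[]>;
  markPublished(id: string): Promise<void>;
}
interface Broker { publish(topic: string, payload: unknown): Promise<void> }

async function relayOutbox(outbox: Outbox, broker: Broker): Promise<void> {
  for (const record of await outbox.pending()) {
    await broker.publish(record.topic, record.payload);
    await outbox.markPublished(record.id);
  }
}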

What are we solving and what are we letting unsolved:

  • ✔️ We don’t fail continuously if some other service fails.
  • ✔️ We always finish our process and promise the rest will be done.
  • ✔️ We just require our service to do what we promise.
  • ✔️ Fast services will be fast, and slow services can go slow.
  • ❌ Data is mutable, so errors will be propagated and not solvable.

Event Sourcing

The last level is event sourcing. The idea is to use the events that generated a specific state, rather than the calculated state that a service can provide us.

This allows higher resilience due to the immutability of the data. In this case, calculation issues of the past can be solved, as we can reprocess the entire set of events that took us to a certain state.
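
A minimal sketch of the idea in TypeScript, using hypothetical event names from the internet-connection example of the previous post:

// Minimal event-sourcing sketch: state is never stored directly, it is rebuilt
// by replaying the immutable stream of events.
type ConnectionEvent =
  | { type: 'CoverageChecked'; address: string; covered: boolean }
  | { type: 'ContractSigned'; contractId: string }
  | { type: 'NetworkEnabled'; at: Date };

interface ConnectionState {
  covered: boolean;
  contractId?: string;
  connected: boolean;
}

// If a past calculation was wrong, fixing this function and replaying the
// events repairs the state, which is what gives event sourcing its resilience.
function replay(events: ConnectionEvent[]): ConnectionState {
  let state: ConnectionState = { covered: false, connected: false };
  for (const event of events) {
    if (event.type === 'CoverageChecked') state = { ...state, covered: event.covered };
    if (event.type === 'ContractSigned') state = { ...state, contractId: event.contractId };
    if (event.type === 'NetworkEnabled') state = { ...state, connected: true };
  }
  return state;
}

const current = replay([
  { type: 'CoverageChecked', address: 'Some Street 1', covered: true },
  { type: 'ContractSigned', contractId: 'c-123' },
  { type: 'NetworkEnabled', at: new Date() },
]);
// current => { covered: true, contractId: 'c-123', connected: true }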

Conclusion and follow-ups

These are some of the patterns that can make our services more independent and resilient. Nevertheless, each of them has a different complexity, which also affects the complexity of our code. For this reason, we need to make sure we use the correct tool for the job.

· 4 min read
Alvaro Jose

The Monolith

We have all, at this point, encountered the big service that jumpstarted the business. It's always good to find it or to know it existed: it shows that there was an intent not to resolve every architectural problem before we even knew we had a business.

Nevertheless, it tends to outgrow itself and become more of a pain than a solution. Some of these pains are:

  • We all work on the same code base, and conflicts and side effects start to happen.
  • You need to release the entire solution, even if different teams have different cycles.
  • There are code freezes to go through validation cycles.
  • It scales as a whole, not only the portion that has an increase in traffic.

Due to these pains, microservices were created: to give teams/domains the independence to create focused solutions for a business that has already been validated.

The Microservices

Let's start with a definition of a microservice:

Microservices are an architectural and organizational approach to software development where software is composed of small independent services that communicate over well-defined APIs. These services are owned by small, self-contained teams.


It all sounds like flowers and happiness when we talk about microservices. Nevertheless, do microservices solve the entire issue by themselves?

Have you encountered the following cases in a microservice architecture?

  • Before we release a new version, we need to sync deploys with another team.
  • Our application was down, but it's not our issue.
  • Our service was working and scaling fine until team X started using us.
  • And more…

What is happening?

Microliths

The smells mentioned before are caused by what Jonas Bonér calls microliths, a great word for what is happening here.
Even if we think these are 'independent' services, synchronous communication can cause side effects we don't want:

  • There can be cascading events between your services.
  • Your domain boundaries are not clear because you don’t own the entire process.
  • Slow services are forced to scale by faster services requirements.
  • There is additional latency on the network calls.

What got lost in translation?

Having microliths comes from multiple misconceptions we have. Some of them are:

Domains != Resources

Every so often, when we divide the monolith, we think about domains as resources. This comes from how we have normally divided APIs and DBs: we think about splitting what already exists and not about extracting the processes being carried out.

When thinking about a microservice, we should think about what part of the process it is trying to solve; this will help us define good boundaries for our bounded context.

When we think in terms of a process, data is secondary. The process will require different pieces of existing data to fulfill its capabilities, and it is ok for it to own its copy of what is needed to fulfill its mission.

Independence != Single Source

A single source of data does not mean independence: whenever your software requires complementary data, it will have to acquire it from somewhere else, which means a direct dependency. This also affects boundaries, as you must enter another team's domain.

If you strive for independence, copy the information you require for your process, even if it exists somewhere else.

Fast != Synchronous

Humans think that a direct response is always faster than sending out a message. While occasionally this is true, in microservices this can start a cascade of synchronous calls from one service to the next, leaving our users in a timeout limbo.

Think about whether your system really requires calling others directly, or whether you can message them to start their process.
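
A small TypeScript sketch of the difference; the endpoints and broker interface are hypothetical:

// Synchronous: the user waits for the whole chain, and any slow or failing
// downstream call blocks (or times out) the response.
async function signUpSync(userId: string): Promise<void> {
  await fetch(`https://contracts.internal/create?user=${userId}`);
  await fetch(`https://network.internal/enable?user=${userId}`);
}

// Asynchronous: publish one message and answer immediately; each downstream
// service starts its own process at its own speed.
interface Broker { publish(topic: string, payload: unknown): Promise<void> }

async function signUpAsync(broker: Broker, userId: string): Promise<void> {
  await broker.publish('user.signed-up', { userId });
}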

Resilience != Complete

Making sure the entire process has been completed is often confused with resiliency. Resiliency only refers to the capability to complete the process.
If we have well-defined contracts between our pieces, we don't need to finish things synchronously; we can promise our users that things will happen, and let our services do their work at their own speed.

Conclusion and follow-ups

Are we doomed?

The answer is no, we are not doomed! We can design our services with the correct division using some DDD tooling, and also use the correct tools to decouple our microservices.
Let's talk about this in the next chapters of this series.

· One min read
Alvaro Jose

Video

Long Version

I am currently starting some new open-source projects and I feel it is tedious to do some recurrent tasks. For example:

  • Promote this blog post on social media.
  • Announce a new release.

Power Automate & IFTTT integrations allow just this, through a process of action and reaction.

These systems provide:

  • Triggers: they are the starting point of everything that happens afterwards.
  • Actions: they react to previous steps in the described flow.

An example of this is the following flow:


  • In IFTTT, if a new feed item exists in the RSS of my blog, it is posted as a card on a Trello board.
  • The Power Automate flow looks for new cards in that column.
  • It retrieves the content.
  • It posts it to Medium.
  • It posts about the new blog post on Twitter and LinkedIn.

As you can see, automation is cool and can save us a lot of effort by increasing our productivity.

· 5 min read
Alvaro Jose

Why are messages important?

Commit messages are part of the collaboration we do day to day inside a team; they work as a record of what has happened.

Every time you perform a commit, you’re recording a snapshot of your project that you can revert to or compare to later.

— Pro Git Book

Commit messages are used in many ways, including:

  • To help a future reader quickly understand what changed and why it changed
  • To assist with easily undoing specific changes
  • To prepare change notes or bump versions for a release

All three of these use cases require a clean and consistent commit message style.

Easy Commit messages with Commitizen

This tool's purpose is to define a standard set of commit rules and to communicate them. The reasoning behind it is that commits become easier to read, and writing descriptive commits is enforced, removing the ambiguity of options and the mental load of following the standard manually.

Commitizen will prompt you with a series of questions that will generate the final commit message. It has multiple adapters; in my case, I prefer to be in control of the questions, so I use cz-format-extension.

You can add Commitizen to your project with the following command line:

npm install commitizen --save-dev # npm
yarn add commitizen -D # Yarn

Add any of the available adapters, in my case cz-format-extension:

npm install cz-format-extension --save-dev # npm
yarn add cz-format-extension -D # Yarn

In your package.json you will need to add the next section:

  ...
  "config": {
    ...
    "commitizen": {
      "path": "cz-format-extension"
    }
  }
  ...

The adapter cz-format-extension allows massive flexibility, as the questions can be defined in a .czfrec.js file. An example is:

const { contributors } = require('./package.json')

module.exports = {
  questions({ inquirer }) {
    return [
      {
        type: "list",
        name: "type",
        message: "What is the type of this change:",
        choices: [
          {
            "name": "feat: A new feature",
            "value": "feat"
          },
          {
            "name": "fix: A bug fix",
            "value": "fix"
          },
          {
            "name": "docs: Documentation only changes",
            "value": "docs"
          },
          ...
        ]
      },
      {
        type: 'list',
        name: 'scope',
        message: 'What is the scope of this change:',
        choices: [
          {
            "name": "core: base system of the application",
            "value": "core"
          },
          {
            "name": "extensions: systems that are observed",
            "value": "extensions"
          },
          {
            "name": "tools: other things in the project",
            "value": "tools"
          },
        ]
      },
      {
        type: 'input',
        name: 'message',
        message: "Write a short, imperative tense description of the change\n",
        validate: (message) => message.length === 0 ? 'message is required' : true
      },
      {
        type: 'input',
        name: 'body',
        message: 'Provide a longer description of the change: (press enter to skip)\n',
      },
      {
        type: 'confirm',
        name: 'isBreaking',
        message: 'Are there any breaking changes?',
        default: false
      },
      {
        type: 'input',
        name: 'breaking',
        message: 'Describe the breaking changes:\n',
        when: answers => answers.isBreaking
      },
      {
        type: 'confirm',
        name: 'isIssueAffected',
        message: 'Does this change affect any open issues?',
        default: false
      },
      {
        type: 'input',
        name: 'issues',
        message: 'Add issue references:\n',
        when: answers => answers.isIssueAffected,
        default: undefined,
        validate: (issues) => issues.length === 0 ? 'issues is required' : true
      },
      {
        type: 'checkbox',
        name: 'coauthors',
        message: 'Select Co-Authors if any:',
        choices: contributors.map(contributor => ({
          name: contributor.name,
          value: `Co-authored-by: ${contributor.name} <${contributor.email}>`,
        }))
      },
    ]
  },
  commitMessage({ answers }) {
    const scope = answers.scope ? `(${answers.scope})` : '';
    const head = `${answers.type}${scope}: ${answers.message}`;
    const body = answers.body ? answers.body : '';
    const breaking = answers.breaking ? `BREAKING CHANGE: ${answers.breaking}` : '';
    const issues = answers.issues ? answers.issues : '';
    const coauthors = answers.coauthors.join('\n');

    return [head, body, breaking, issues, coauthors].join('\n\n').trim()
  }
}

The file creates a process of questions for:

  • type: aligns with the semantic release message specification
  • scope: the affected part of the application
  • message: the imperative description of the change
  • body: a longer description
  • breaking: determines whether it is a breaking change, for semantic release
  • issues: the related issues in our ticketing system
  • coauthors: the people working on the task while pair programming

All these questions are answered interactively, instead of relying on brain power and manual work.
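
For illustration, a commit assembled by the commitMessage function above could look like this (all values are hypothetical):

feat(core): add coverage check endpoint

Adds a first version of the coverage check used during sign-up.

BREAKING CHANGE: the coverage check now requires an address parameter

#123

Co-authored-by: Jane Doe <jane.doe@example.com>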

And you can then add some nice npm scripts in your package.json file pointing to the local version of Commitizen:

  ...
  "scripts": {
    "commit": "cz"
  }

This will be more convenient for your users because then if they want to do a commit, all they need to do is run npm run commit and they will get the prompts needed to start a commit!

NOTE: If you are using precommit hooks thanks to something like husky, you will need to name your script something other than "commit" (e.g. "cm": "cz"). The reason is because npm scripts has a "feature" where it automatically runs scripts with the name prexxx where xxx is the name of another script. In essence, npm and husky will run "precommit" scripts twice if you name the script "commit", and the workaround is to prevent the npm-triggered precommit script.

That is all :). A special mention goes to commitlint, which is a very useful tool to lint commit messages. I do not use it anymore, as it has some overlap with Commitizen.

· 3 min read
Alvaro Jose

What & Why Git hooks?

Git hooks are scripts that Git executes locally before or after events such as commit, push, and receive.

These hooks are completely programmable through bash scripting. Examples of what can be done:

  • pre-commit: Enforce project coding standards.
  • pre-push: Run tests.

This allows us to make sure we are committing the correct things at the correct time, and not breaking our code just because of the mental load of manual processes that can be forgotten.

How to Start

Add Husky

Husky is a tool that enables Git hooks using JavaScript, configured through individual files for each hook in a .husky/ directory.

The fastest way to install husky is by using husky-init, a one-time command to quickly initialize a project with husky:

npx husky-init && npm install       # npm
npx husky-init && yarn # Yarn 1
yarn dlx husky-init --yarn2 && yarn # Yarn 2+
pnpm dlx husky-init && pnpm install # pnpm

It will set up husky, modify package.json, and create a sample pre-commit hook that you can edit. By default, it will run the tests when you commit.

To add another hook, use husky add.

If you are not comfortable using husky-init you can find other options here.

Add lint-staged

Husky is very useful, but it runs natively on Git events, and the commands in our hooks run against all the files, not only the ones we want to commit.

lint-staged appeared to solve this problem. It allows you to run a process against the staged Git files that match a pattern.


Install lint-staged by adding it to your local project.

npm install lint-staged --save-dev
yarn add lint-staged -D

In your package.json, add it as a script ("lint-staged": "lint-staged") and reference it through a pre-commit hook. If using Husky, this can be found in .husky/pre-commit with the following content:

#!/bin/sh
. "$(dirname "$0")/_/husky.sh"

yarn lint-staged

There are multiple ways to configure lint-staged. One of them is having a lint-staged.config.js file in your project root folder. In this file, you can express what process you want to run for what types of files. For example:

module.exports = {
  '*.{ts,tsx}': [() => 'yarn tsc:check', 'yarn format', 'yarn lint:fix', 'yarn test', 'git add .'],
};

The previous snippet runs the compiler check, formatting, linting and tests before adding the fixed staged files to the current commit.

Conclusion

With these two tools, we will now be pushing code that passes checks similar to the ones in our CI/CD system.

· 4 min read
Alvaro Jose

Over the last few years, some practices appear to be more of a dogma than a value-adding practice. One of these is Pull Requests.

Why PR's exist

  • Malicious Code Prevention: Pull requests exist mostly as a practice accepted for zero-trust environments (e.g. open source). An attack vector in this type of environment is the ability of anyone to contribute, meaning someone could inject code that creates known vulnerabilities that dependent packages will inherit. That is why maintainers validate code from unknown users.

Malicious actors

  • Highly Distributed Teams: PRs can be used by highly distributed teams (around the clock) as a way to do knowledge sharing, so that someone on side A of the world can follow and understand the changes for a feature being developed on side B of the world.

Distributed Teams

The issue

Is there any value in doing PRs when people work co-located? What is the cost of PRs in trust environments?

The value people normally attribute to PRs is that of having a peer review process. Nevertheless, we will see later in this article that there are less invasive ways to do this.

Some costs of PRs are:

  • Slow Delivery: PRs are a start-and-stop strategy where there is a gateway to merge code. This is time that needs to be spent by developers (writing & preparing a PR) and reviewers (reviewing, commenting, etc.) to go through the process. At the same time, it is more time the code takes to get to production (merging, re-testing, etc.). This is significant for features but also for fixes, meaning you can go from a response time of minutes to hours.
  • Isolated work: When working on branches, devs work on code that works in isolation but needs to be merged with a continuous stream of changes. This means that any test run in isolation will probably be invalidated upon merging.
  • Lack of ownership: As work is done in isolation, the developer who creates a PR delegates part of the responsibility to the reviewer. Humans don't have compilers or containers to run the code in our brains, meaning catching errors tends to be out of our realm.
  • Egos: As catching errors tends to be out of our human realm, PRs tend to become a thing related to preferences (style, patterns, etc.). This hardly provides any value to the code, as tools like linters can do this automatically, or it brings premature optimizations.
  • Late feedback: Any valid recommendation is provided quite late in the process, when the code has already been written and validated, causing rework that takes time.

The Alternatives

Pull requests are just one style of asynchronous peer code review. They are not the only way of doing peer reviews.

Some other types of peer reviews that I, personally, like are:

  • Over-the-shoulder: The basis of this is to have a conversation over what has been or is being implemented. This creates a synchronous feedback loop in an async process. It does not fix all the shortcomings of a PR, but it creates a faster feedback loop.
  • Pair Programming / Mob Programming: The idea is that multiple developers work on the same code base on the same computer, creating a synchronous feedback loop in a synchronous process. Combined with trunk-based development, this allows very fast feedback loops at the product level, and with the correct tools it generates resilience and ownership among developers.

The disclaimer here is that I have worked doing pair programming, TDD and trunk-based development for more than 5 years in companies of multiple sizes, and it has always been a bliss.

· 3 min read
Alvaro Jose

As a member of the community that likes to publish npm packages such as libraries and CLI tools, I sometimes find it tedious to maintain everything and keep every package's dependencies up to date. I am a fan of pinning exact dependency versions, as semantic versioning is not handled correctly in most of the npm world; if you don't use exact versions, you can run into the issue that a breaking change makes your awesome tool break overnight.

This practice can become a headache, because keeping dependencies up to date is a manual process, and manual processes tend to be time-consuming (at this point in time I have ~17 npm packages). If I want to simply do normal maintenance, I have to run everything for all of those packages on maybe a weekly or monthly basis.

So it's a bit of a no-win situation for maintainers: if you don't maintain your package, people will not use it, because there is a concern about how active the project is, even if there are no open issues. To solve both of these things, I have decided to add to my CI/CD pipeline a script that runs only on cron jobs from Travis CI.

os: osx
language: node_js
node_js:
  - node
script:
  - yarn test:cov
after_success:
  - if [[ "${TRAVIS_EVENT_TYPE}" = "cron" ]]; then ./upgrade.sh; fi
deploy:
  skip_cleanup: true
  provider: npm
  email: $NPM_EMAIL
  api_key: $NPM_TOKEN
  on:
    tags: true

As you can see, that is a normal .travis.yml for deploying to npm (you will have to define NPM_EMAIL and NPM_TOKEN as environment variables in your build configuration). The main difference is the after_success step: if the build was triggered by the cron job, it will run the next script.

#!/bin/sh

set -e

git config --global user.email $GH_EMAIL
git config --global user.name $GH_USER

git remote add origin-master https://${GH_TOKEN}@github.com/${TRAVIS_REPO_SLUG}.git > /dev/null 2>&1

git fetch origin-master
git checkout -b master-local origin-master/master

yarn upgrade --latest
git add .
git commit --allow-empty -m "updated dependencies [skip ci]"

yarn test
yarn version --patch

git push --quiet origin-master master-local:master
git push --quiet origin-master master-local:master --tags

This script checks out the current master into a local branch, upgrades the dependencies and, if everything works fine, generates a new commit and deploys a patch version of the package (you will have to define GH_EMAIL, GH_USER and GH_TOKEN as environment variables in your build configuration).

· 2 min read
Alvaro Jose

I have just finished migrating my static blog from Hexo to Hugo, and one of the main things I care about is being able to do continuous deployment of my profile and blog. There are quite a few blog posts out there, but they are based on shell scripts, and it really becomes a pain to deal with permissions, etc. In the next few lines you will see the simplest way I have found to do this (and it is how this blog post is currently being published).

You will need to have:

  • A Github account.
  • A Travis CI account.
  • A Github repository with source code of your web page with Hugo (*1)
  • A Github repository with the name <your User or Organization>.github.com (ex. kanekotic.github.com) (*2).
  • A developer token from GitHub with commit capabilities (you can create one in GitHub under Settings -> Developer Settings -> Personal Access Tokens -> Generate New Token)

I won't cover how to create a Hugo web page, as this is best explained in the Hugo quick start.

After you are happy with the content of your blog in the source code repository (*1) and want to start deploying, you will need to add a .travis.yml with the next content:

sudo: true
dist: trusty

install:
  - sudo apt-get --yes install snapd
  - sudo snap install hugo

script:
  - /snap/bin/hugo

deploy:
  provider: pages
  local-dir: public
  repo: <User or Organization>/<User or Organization>.github.com
  target-branch: master
  skip-cleanup: true
  github-token: $GITHUB_TOKEN
  committer-from-gh: true
  keep-history: true
  on:
    branch: master

You will have to change the repo value to match your destination repository (*2). The previous code installs Hugo on the deployment machine, builds your web page, and deploys it using the pages provider. If you have a custom domain, make sure to set the fqdn property to your domain; if not, you will overwrite this field in each commit.

After adding the file, you will have to go to the Travis web page, toggle your code repository (*1), go to More Options -> Settings -> Environment Variables, and add GITHUB_TOKEN with the token retrieved from GitHub.

After this, any commit to the master branch of your web page will get it deployed and live.

· 2 min read
Alvaro Jose

I have hit a corner case of extension methods and LINQ. Today I was declaring some extension methods to make my code more readable, so I created an extension method that takes an object and performs a direct cast:

public static class GeneralExtensions
{
    public static T Cast<T>(this object o)
    {
        return (T)o;
    }
}

The intention was to be able to call my direct casts with something like this:

MyObject.Cast<MyInterface>();

It happens that in the same namespace I have an extension method that uses a LINQ expression:

using System;
using System.Collections.Generic;
using System.Linq;

public static class EnumExtensions
{
    public static IEnumerable<string> UseLinq(this IEnumerable<object> collection)
    {
        return (from object value in collection select value.ToString()).ToList();
    }
}

Adding this first extension method to my code base caused the next error:

Error   CS1936  Could not find an implementation of the query pattern for source type 'object'.  'Select' not found.

Having both extension methods in different namespaces (and not referenced), or renaming Cast<T> to something different, solves the issue. This is caused by an overlap of the extension methods, where the nearest one to the code is the one called.

How bad could extension methods over object go?

This is an extract from Eric Lippert's answer, regarding the code:

public static class GeneralExtensions
{
    public static T Cast<T>(this object o)
    {
        return (T)o;
    }
}

Side effects of Cast<T>:

  • Cast<int>(123) unnecessarily boxes the int, (int)123 does not.
  • Cast<short>(123) fails but (short)123 succeeds. There is no conversion from a boxed int to a short.
  • Suppose you have a user-defined conversion from Animal to Shape. Cast<Shape>(new Tiger()) fails but (Shape) new Tiger() succeeds.
  • Suppose q is a nullable int that happens to be null. Cast<string>(q) succeeds! But (string)q would fail at compile time.
  • Etc

The Cast method has some overlap with the real cast operator, but it is not a substitute for it. To capture the semantics of the cast operator, there is a need to use dynamic, which starts the compiler at runtime and does the compile-time analysis on runtime types.