Thesis project: An operational alignment model for developing business performance dashboards

Business performance dashboards are a well-known solution for supporting business decision-making. However, developing such a dashboard is not a matter of applying an off-the-shelf solution. Questions that arise include: which KPIs should the dashboard contain? What data should be used? What is the most appropriate visual for displaying the information? Which functionalities should the dashboard offer? The aim of this research is therefore to develop an operational alignment model for building business performance dashboards: the Cross-Industry Standard Process for business performance dashboards (CRISP-PD). This research presents a first exploration of this model.

Alignment between the requirements of the business and the IT solution is essential when designing a dashboard. In the development process presented in this research, alignment is safeguarded in every step by implementing feedback mechanisms between business and IT.

First, the literature is reviewed to build a theoretical framework. This review covers the concept of alignment and theory on business performance dashboards. Second, the alignment model, based on this literature, is presented. Finally, the model is applied in a case study to verify it. In the case study, the goal of developing a business performance dashboard is to support business decision-making in commercial departments.

The case study resulted in a concept version of the business performance dashboard, which was evaluated by the problem owners. Since developing a business performance dashboard is an iterative process, the dashboard is not finalized. Changing circumstances (e.g. market conditions, business goals, information needs) can trigger changes in the design of the dashboard. Therefore, several recommendations are made for further developing the dashboard.

However, this research has some limitations. First, CRISP-PD was not fully applied in the case study. Second, the case study is based on a limited data set, so basic statistical parameters could not be calculated. Third, CRISP-PD was applied in only one case study and therefore lacks reliability.

In addition, recommendations are made for future research. Since this is an operational alignment model, it says nothing about strategic and tactical performance dashboards. Future research should therefore test the model for aligning strategic and tactical performance dashboards. Furthermore, the model should be validated in other case studies to improve the reliability of the research.

Yannick Visser is working on this topic.


Thesis project: Models for the prediction of overdue invoices

Keeping a healthy cash flow is vital for businesses of every size and kind. One of the biggest factors in keeping cash flows healthy is paying invoices on time and getting paid on time. Some researchers suggest that getting paid on time could actually prevent the collapse of over 50,000 small businesses every year, and that was even before the economic uncertainty caused by COVID-19.

Many companies, including multiple payment providers, reported rising payment periods during the first and second lockdown periods. To keep payment periods short and cash flows steady, companies can pay more attention to their accounts receivable processes. However, deciding which clients or invoices to invest time and resources in can be tricky. To decide which clients need to be contacted, it must first be determined which invoices are likely to become overdue. By applying the methodological framework CRISP-DM, different machine learning models were studied to predict which invoices are likely to become overdue. For building these models, a dataset consisting of 290,000 invoices from a Dutch top-30 accounting firm was used.

After thoroughly executing the processes of data preparation, feature selection and applying different techniques of feature engineering and hyperparameter tuning, it is concluded that weighted Random Forest models yield the best predictive performance. When evaluating these models, clients' historical behavior is determined to be the best predictor of overdue invoices. Interestingly, models that rely solely on client demographics, without any historical behavior, also predict overdue invoices relatively well. This means that relatively accurate predictions can be provided even for new clients without any historical record.
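Since historical behavior turned out to be the strongest predictor, the idea behind one such feature can be sketched in a few lines: the fraction of a client's earlier invoices that were paid late, computed strictly backwards in time to avoid leakage. The data and feature below are invented for illustration; the thesis dataset and feature set are not public.

```python
from collections import defaultdict

def overdue_rate_features(invoices):
    """For each invoice (in chronological order), compute the fraction of
    the client's *earlier* invoices that were overdue. Only looks backwards,
    so the feature is available at prediction time without leakage."""
    seen = defaultdict(lambda: [0, 0])  # client -> [overdue_count, total]
    feats = []
    for client, was_overdue in invoices:
        overdue, total = seen[client]
        feats.append(overdue / total if total else 0.5)  # prior for new clients
        seen[client][1] += 1
        if was_overdue:
            seen[client][0] += 1
    return feats

# Toy, chronologically ordered invoices: (client_id, paid_late?)
history = [("a", True), ("a", True), ("b", False), ("a", False), ("b", False)]
print(overdue_rate_features(history))  # [0.5, 1.0, 0.5, 1.0, 0.0]
```

For brand-new clients the feature falls back to a neutral prior, which connects to the finding above that demographic-only models still perform reasonably for clients without history.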

Frank van den Berg is working on this topic.

Thesis project: Explanatory analytics in business dashboards – a comparison of explanatory models

Business dashboards are visual products of business intelligence and analytics that are used to support decision-making. This thesis studies how dashboards can be extended with explanatory analytics: automated diagnostics that generate probable explanations for a problem, often an exceptional value. This is especially relevant with the advent of big data and maturing dashboard technologies.

This thesis follows a design science research approach, in which an artifact is created based on prior theoretical grounding and then evaluated by business experts. Three separate models for explanations from different fields are compared, namely explanation formalism, informative summarization, and explanation by intervention. First, the models' theoretical bases are detailed and compared. Then the extension is designed using UML diagrams and implemented in Python using object-oriented programming, with Microsoft Power BI as the dashboarding platform. This implementation is then evaluated with business experts through semi-structured qualitative interviews.

As a result, it is found that business dashboards can be extended with explanatory analytics, and that the three models share many functions while differing in others. The main differences found are in the recursion logic, the measure of impact, and the visualisation of the models. Explanation formalism uses top-down recursion logic, with a measure of impact based on the absolute difference between actual and reference value, and a visualisation in the form of an explanation tree. Informative summarization, in contrast, uses bottom-up logic, with an impact measure based on both magnitude and ratio, and presents the result in a table. Explanation by intervention has no recursion but calculates everything in a single 'big bang' pass, measuring impact by ratio, and also presents the result in a table. The qualitative evaluation showed that most business experts prefer the absolute difference as an impact measure and a visualisation such as an explanation tree to speed up the assimilation of information. Ratio as a measure of impact was seen as including insignificant explanations when solving business problems.
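The two impact measures contrasted above can be sketched in a few lines of Python. The figures are invented for illustration, but they show why experts felt the ratio surfaces insignificant explanations:

```python
def impact_abs(actual, reference):
    """Impact as absolute difference, as in explanation formalism."""
    return actual - reference

def impact_ratio(actual, reference):
    """Impact as a ratio, as used by explanation by intervention."""
    return actual / reference

# Two deviating dashboard cells: one large, one tiny in absolute terms.
large = (80_000, 100_000)  # (actual, reference)
tiny = (15, 5)

print(impact_abs(*large), impact_ratio(*large))  # -20000 0.8
print(impact_abs(*tiny), impact_ratio(*tiny))    # 10 3.0
```

The ratio ranks the tiny cell as the bigger deviation (3.0 versus 0.8), even though it is negligible to the business, while the absolute difference ranks the cells the way an analyst would expect.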

Keywords: Business dashboard, Business intelligence, Analytics, Explanatory analytics

Aaro Askala has worked on this topic.

Thesis project: Creating HR talent demand insights by classifying job requisitions through dictionary matching on job descriptions

Due to the rapidly changing internal and external environment, organizations are constantly challenged to find the most qualified employees. To stay ahead of competitors, recruiters must find the talent needed to achieve the organizational goals, and to find qualified employees, strategic recruitment decisions are made. For these decisions, HR managers require insights into the external supply of talent and the internal demand for talent.

HR analytics enables data-driven decision-making for HR managers by creating insights based on the available recruitment data. One of the data sources used by HR analysts is job requisition data, which is used to create insights for recruitment. The structured data from the job requisitions alone, however, does not contain what HR analysts need to create insights into the organization's talent demand. That information sits in the unstructured job description, which specifies the skill and study requirements an applicant must meet. To use this data, the job requisitions should be classified by the skill and study requirements mentioned in the job description.

Different methods were considered for extracting the studies and skills from the job descriptions; in the end, dictionary matching was chosen, as the literature showed it to be the most accurate option for smaller datasets. This method was applied in a case study with internal job requisition data, resulting in a process covering the phases of data collection, data preparation, information extraction and classification.
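The information extraction step via dictionary matching can be sketched as follows. The dictionary entries and the job description below are made up for illustration; the thesis uses internal dictionaries of skills and studies.

```python
import re

def match_dictionary(text, dictionary):
    """Return the dictionary terms (skills/studies) found in a job
    description. Matching is case-insensitive and on word boundaries,
    so a short term does not fire inside a longer unrelated word."""
    found = set()
    for term in dictionary:
        if re.search(r"\b" + re.escape(term) + r"\b", text, re.IGNORECASE):
            found.add(term)
    return sorted(found)

skills = ["Python", "SQL", "machine learning", "Excel"]
description = "We seek an analyst with strong SQL and machine learning skills."
print(match_dictionary(description, skills))  # ['SQL', 'machine learning']
```

The matched terms then serve as class labels for the requisition, after which precision and recall can be computed against a manually labeled sample.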

After validating this process with a precision and recall analysis, it is concluded that dictionary matching on job descriptions yields accurate classifications of the job requisitions. In addition, there are indications that the insights created from job requisitions classified by skills and studies are valuable for data-driven recruitment decisions.

Rody Franken is working on this topic.

Thesis project: The prediction of voluntary employee turnover

This thesis examines how the prediction of voluntary employee turnover can bring value to organisations. A case study was performed with data of Deloitte Holding B.V., consisting of employee records. Four classification models were used as prediction methods, with CRISP-DM as the guiding framework for the data mining process. Because the data set proved to be imbalanced, it was re-sampled. Based on the F1 score as the leading performance measure, it was concluded that Random Forest was the best-predicting model for Deloitte. The literature shows that voluntary employee turnover is dysfunctional. Hence, it was concluded that decision trees empower organisations to identify profiles that form a 'risk' for the organisation, and organisations can use these insights to develop effective policies and strategies for retaining employees. However, voluntary employee turnover remains a complex phenomenon, and the models explain only a small percentage of the variance in the actual turnover decision.
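The re-sampling and F1 steps mentioned above can be sketched without any library: random oversampling duplicates minority-class records until the classes are balanced, and F1 is the harmonic mean of precision and recall. The employee records and confusion counts below are synthetic, not taken from the case study.

```python
import random

def oversample(records, label_of):
    """Randomly duplicate minority-class records until both classes are
    equally large (simple random oversampling of an imbalanced set)."""
    pos = [r for r in records if label_of(r)]
    neg = [r for r in records if not label_of(r)]
    minority, majority = (pos, neg) if len(pos) < len(neg) else (neg, pos)
    extra = [random.choice(minority) for _ in range(len(majority) - len(minority))]
    return records + extra

def f1(tp, fp, fn):
    """F1 score from confusion counts: harmonic mean of precision and recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

random.seed(0)
employees = [("stay", False)] * 9 + [("leave", True)]  # 9:1 imbalance
balanced = oversample(employees, lambda r: r[1])
print(len(balanced))                  # 18: classes are now 9 vs 9
print(round(f1(tp=6, fp=2, fn=3), 3))
```

Balancing matters here because a naive model on a 9:1 set scores 90% accuracy by always predicting "stay", which is exactly why F1 rather than accuracy is used as the leading measure.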

Keywords: voluntary employee turnover, classification, imbalanced data

Koen Geerding is working on this topic.

Thesis project: Disambiguation of scientific literature in patent data: an entity resolution process in Patstat

The patent system acts as a policy instrument to encourage innovation and technical progress by providing protection and exclusivity. The Worldwide Patent Statistical Database, henceforth PATSTAT, is a product of the European Patent Office designed to assist statistical research into patent information. The TLS214_NPL_PUBLN table (referred to as TLS214) contains additional bibliographic references of patents. TLS214 contained over 40 million records in the 2019 Spring Edition; however, these records are often duplicated or inaccurate. The disambiguation method was originally developed in an SQL environment, with a codebase written in T-SQL and C#, on a 2014 version of PATSTAT. This study evaluates the disambiguation method on a 2017 version of PATSTAT and presents the translation of the codebase to Python.

The first part of this study therefore focuses on the performance and bottlenecks of the disambiguation method in SQL. The performance is measured with precision, recall, F1-score, and the cosine measure on a golden sample of the TLS214 table. The second part focuses on converting part of the codebase to Python. Several adjustments are proposed to transform the method into a generic disambiguation method. The codebase is transformed into Python and extended with the proposed adjustments, after which the performance of the generic disambiguation method is measured.

With the initial parameters, the disambiguation method in Python remains conservative, because it values precision over recall. The connected components and MaxClique algorithms are equal in terms of performance; however, connected components require less computation time. By altering the rule parameters, simulated annealing increases the F1-score by approximately 16.5%. After optimization, it is concluded that altering the rule parameters does not solve the recall problem, because the rule-based scoring system remains unable to find sufficient evidence to create pairs.
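The clustering step contrasted above can be sketched as follows: pairs accepted by the rule-based scoring are treated as edges, and each connected component becomes one disambiguated entity. The record IDs are invented for illustration.

```python
from collections import defaultdict

def connected_components(pairs):
    """Group record ids into clusters: any two records linked by a chain
    of accepted pairs end up in the same component (graph traversal)."""
    graph = defaultdict(set)
    for a, b in pairs:
        graph[a].add(b)
        graph[b].add(a)
    seen, clusters = set(), []
    for node in graph:
        if node in seen:
            continue
        component, stack = set(), [node]
        while stack:
            n = stack.pop()
            if n in component:
                continue
            component.add(n)
            stack.extend(graph[n] - component)
        seen |= component
        clusters.append(sorted(component))
    return sorted(clusters)

# Pairs the scoring system judged to refer to the same cited reference
accepted = [(1, 2), (2, 3), (4, 5)]
print(connected_components(accepted))  # [[1, 2, 3], [4, 5]]
```

Note that records 1 and 3 land in one cluster without being directly paired; unlike MaxClique, a component does not require every pair of members to be linked, which is why it is cheaper to compute.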

Colin de Ruiter is working on this topic.

Thesis project: Detecting unusual journal entries in financial statement audits with auto-encoder neural networks

Every once in a while, a news item appears reporting a new case of fraud and its many victims. According to the Association of Certified Fraud Examiners, fraud caused more than $7 billion in total losses across 125 countries between January 2016 and October 2017. Current fraud detection techniques are based on heuristics and past experience, so their main issue is that new types of fraud cannot be detected. This thesis aims to introduce a new fraud detection method that resolves the downsides of the current methods, namely auto-encoder neural networks. The method is explored by first performing a replication study, after which adversarial auto-encoders are implemented in an attempt to exceed the results of the existing studies.
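The core idea is that an auto-encoder trained on normal journal entries reconstructs them well, so entries with a high reconstruction error are flagged as unusual. A linear stand-in (PCA reconstruction, which a one-hidden-layer linear auto-encoder converges to) shows the scoring on toy data; the thesis itself uses neural and adversarial auto-encoders, not PCA.

```python
import numpy as np

def reconstruction_errors(X, k=1):
    """Project mean-centered data onto its top-k principal components,
    reconstruct, and return the per-row reconstruction error."""
    Xc = X - X.mean(axis=0)
    # Top-k principal directions via SVD (rows of Vt)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    W = Vt[:k]
    X_hat = Xc @ W.T @ W  # encode then decode
    return np.linalg.norm(Xc - X_hat, axis=1)

# Five "normal" journal entries following a common pattern, one unusual entry
v = np.array([1.0, 2.0, 3.0])
X = np.vstack([t * v for t in range(1, 6)] + [[3.0, -4.0, 1.0]])
errors = reconstruction_errors(X, k=1)
print(int(errors.argmax()))  # 5 -> the unusual last entry scores highest
```

An auditor would then review only the entries above some error threshold, rather than sampling the ledger blindly.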

At the moment, Joyce Hendriks is working on this topic.

Thesis project: Develop a diagnostic analytic tool to enrich business dashboards

Supervisor: Emiel Caron

The central research question of this thesis is: 'How can (explainable) AI analytics enrich dashboards by automatically alerting on exceptional values, analysing root causes, and potentially even recommending fixes or suggesting process optimizations?'

The aim of this research is to enrich business dashboards with integrated explanatory analytics supported by mathematical models developed in Python. As a starting point, the model proposed by Daniels & Feelders in 'A general model for automated business diagnosis' functions as our base model. Throughout this research, several subjects are tackled, for example the detection of exceptional values, various techniques for pruning explanations, and how to extract mathematical business models from data.
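The first of these subjects, detecting exceptional values, can be sketched with a simple z-score rule. The threshold of 2 standard deviations and the sample figures are assumptions for illustration; the research builds on more elaborate diagnostic models.

```python
import math

def exceptional_values(series, threshold=2.0):
    """Flag the indices of observations more than `threshold` standard
    deviations away from the mean of the series."""
    mean = sum(series) / len(series)
    std = math.sqrt(sum((x - mean) ** 2 for x in series) / len(series))
    return [i for i, x in enumerate(series) if abs(x - mean) > threshold * std]

monthly_sales = [100, 103, 98, 101, 99, 102, 60]  # last month collapses
print(exceptional_values(monthly_sales))  # [6]
```

Once such an alarm fires, the diagnostic step drills into the underlying data to explain why the value is exceptional.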

At the moment, Claire Vink is working on this topic.

Developing a generic method for data disambiguation in large bibliographic databases

Supervisor: Emiel Caron

Many databases contain ambiguous and unstructured data, which makes the information they contain difficult to use for researchers. For these databases to be a reliable point of reference for research, the data needs to be cleaned and disambiguated. Entity resolution focuses on disambiguating records that refer to the same entity. In this thesis, we study different applications of entity resolution in order to explore a generic method for cleaning large databases. We implement the method on table TLS214 of the database PATSTAT, a product of the European Patent Office that contains bibliographic information on patent applications and publications. Table TLS214 holds information on citations to scientific references. The method starts by pre-cleaning the records of table TLS214 in SQL and extracting bibliographic information. Next, the data is transferred to Python, where we use the TF-IDF algorithm to compute a string similarity measure. We create clusters by means of a rule-based scoring system. Finally, we perform a precision and recall analysis using a golden set of clusters and optimize our parameters with a genetic algorithm.
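The TF-IDF string similarity step can be sketched in plain Python: rare tokens (author names, years) get high weights, ubiquitous tokens low weights, and two references are compared by the cosine of their weight vectors. This is a scaled-down illustration on invented reference strings; the thesis runs an optimized implementation on millions of records.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Compute a sparse TF-IDF weight vector for each tokenized string."""
    tokenized = [doc.lower().split() for doc in docs]
    df = Counter(t for doc in tokenized for t in set(doc))  # document frequency
    n = len(docs)
    vectors = []
    for doc in tokenized:
        tf = Counter(doc)
        vectors.append({t: tf[t] * math.log((1 + n) / (1 + df[t])) for t in tf})
    return vectors

def cosine(u, v):
    """Cosine similarity between two sparse weight vectors."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    norm = math.sqrt(sum(w * w for w in u.values())) * \
           math.sqrt(sum(w * w for w in v.values()))
    return dot / norm if norm else 0.0

refs = ["smith j machine learning 2001",
        "smith j machine learning 2001 nature",
        "jones a patent law review 1999"]
vecs = tfidf_vectors(refs)
print(cosine(vecs[0], vecs[1]) > cosine(vecs[0], vecs[2]))  # True
```

The rule-based scoring system then accepts record pairs whose similarity and other evidence (matching year, matching author) exceed its thresholds, and those pairs are clustered.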

At the moment, Wenxin Lin is working on this topic.

Explanatory Analytics in Business Dashboards to improve business decision-making

Supervisor: Emiel Caron

The central question of this research is: 'How can business dashboards be extended with explanatory analytics capabilities to support business analysts in answering managerial questions?' The business relevance of this question stems from the lack of explanatory functions in current business intelligence tools, and more specifically in business dashboards. This is a problem for business analysts, who must browse large amounts of data visually to discover interesting patterns. Furthermore, visual analysis is not only slow and expensive, but also highly subjective due to the bias of the analyst (Fayyad, Piatetsky-Shapiro, & Smyth, 1996). Lastly, explanatory functions can support the move towards a more data-driven decision-making process. Tools that make these tasks easier and quicker are therefore valuable.

The objective of this design-oriented research is to develop applications that extend current dashboard solutions such as Microsoft Power BI with functions to 1) find exceptional values in the dashboard and 2) explain why the exceptions have occurred.
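The second function typically decomposes a deviating total over an underlying dimension and reports the largest contributors as candidate explanations. A minimal sketch (region names and figures are invented; the research builds on the Daniels & Feelders diagnosis model rather than this simple ranking):

```python
def explain_deviation(actual, reference):
    """Rank the members of one dimension by their contribution to the
    total deviation (actual - reference), largest absolute impact first."""
    contributions = {k: actual[k] - reference[k] for k in reference}
    return sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)

reference = {"North": 100, "South": 120, "East": 90}
actual = {"North": 95, "South": 70, "East": 92}
ranking = explain_deviation(actual, reference)
print(ranking[0])  # ('South', -50): South explains most of the shortfall
```

Applied recursively (decomposing South by product, then by customer), this yields the kind of drill-down explanation an analyst would otherwise assemble by hand.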

Fayyad, U., Piatetsky-Shapiro, G., & Smyth, P. (1996). The KDD Process for Extracting Useful Knowledge from Volumes of Data. Communications of the ACM, 39(11), 27–34.