In this paper, we present a framework capable of supporting Rapid Information Modelling. Information is managed at the conceptual level without the user having to define a data model for information organisation. We introduce the notion of heterogeneous collections together with a flexible notion of typing: new typing constraints can easily be defined according to user needs and integrated into the framework. To support the processing and querying of information, we provide algebra operations which can be evaluated with or without type checking. An initial version of the framework has been implemented as a web application offering flexible access to information.
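As a rough illustration of these ideas, the sketch below shows a heterogeneous collection whose items need not share a schema, a user-defined typing constraint, and an algebra-style selection that can run with or without type checking. The class and function names (Collection, TypeConstraint, select) are assumptions made for illustration, not the framework's actual interface.

```python
# Illustrative sketch only; names are assumptions, not the framework's API.
from typing import Callable

class TypeConstraint:
    """A user-defined typing constraint: a name plus a predicate over items."""
    def __init__(self, name: str, predicate: Callable[[dict], bool]):
        self.name = name
        self.predicate = predicate

    def check(self, item: dict) -> bool:
        return self.predicate(item)

class Collection:
    """A heterogeneous collection: items need not share a common schema."""
    def __init__(self, items, constraints=()):
        self.items = list(items)
        self.constraints = list(constraints)

def select(coll: Collection, pred: Callable[[dict], bool],
           type_check: bool = False) -> Collection:
    """Algebra-style selection, optionally enforcing the collection's constraints."""
    result = []
    for item in coll.items:
        if type_check and not all(c.check(item) for c in coll.constraints):
            continue  # skip items that violate a constraint when checking is on
        if pred(item):
            result.append(item)
    return Collection(result, coll.constraints)

# Usage: typed and loosely typed items live in the same collection.
has_title = TypeConstraint("has_title", lambda it: "title" in it)
docs = Collection([{"title": "Note", "year": 2003}, {"url": "http://example.org"}],
                  [has_title])
print(len(select(docs, lambda it: True, type_check=True).items))   # 1
print(len(select(docs, lambda it: True, type_check=False).items))  # 2
```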
Nowadays, data can be represented and stored in different formats, ranging from unstructured data, typical of file systems, to semi-structured data, typical of Web sources, to highly structured data, typical of relational database systems. This raises the need for new tools and models that handle all these heterogeneous information sources uniformly. In this paper we propose both a framework and a conceptual model which aim at uniformly managing information sources of differing nature and structure in order to obtain a global, integrated and uniform representation. We also show how the proposed framework and conceptual model can be useful in many application contexts.
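A minimal sketch of the underlying idea, under assumptions of my own: three small wrappers map an unstructured text file, a semi-structured XML document and a structured relational table into one common dictionary-based representation, so that downstream tools can work on an integrated view. The shape of the representation and the function names are illustrative only, not the conceptual model proposed in the paper.

```python
# Illustrative wrappers only; the common {"kind", "source", "content"} shape
# is an assumption, not the conceptual model proposed in the paper.
import sqlite3
import xml.etree.ElementTree as ET

def from_text(path: str) -> dict:
    """Unstructured source: a plain file from the file system."""
    with open(path, encoding="utf-8") as f:
        return {"kind": "text", "source": path, "content": f.read()}

def from_xml(path: str) -> dict:
    """Semi-structured source: a flat view of an XML document's children."""
    root = ET.parse(path).getroot()
    return {"kind": "xml", "source": path,
            "content": {child.tag: child.text for child in root}}

def from_relational(db_path: str, table: str) -> dict:
    """Structured source: all rows of one table in a SQLite database."""
    con = sqlite3.connect(db_path)
    con.row_factory = sqlite3.Row
    rows = [dict(r) for r in con.execute(f"SELECT * FROM {table}")]
    con.close()
    return {"kind": "relational", "source": f"{db_path}:{table}", "content": rows}

# All three wrappers return the same shape, so queries over the integrated
# view do not need to care about the original format of each source.
```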
The thesis focuses on the interoperability of autonomous legacy databases, with the aim of meeting the actual requirements of an organization. Interoperability is achieved by combining top-down and bottom-up strategies. Legacy objects are extracted from the existing databases through a database reverse engineering process. Business objects are defined by both the organization's requirements and the integration of the legacy objects.
In this thesis we describe the UQoRE method, which supports database reverse engineering using a data mining technique. Reverse engineering methods generally work with information extracted from data dictionaries, database extensions, application programs and expert users. The main differences between these methods lie in the assumptions made about the a priori knowledge available on the database (schema and constraints on attributes) as well as about user competence, and most of them rely on attribute name consistency. This thesis presents a method based on user queries: queries are stored in a “Query Base”, and our system mines this new source of knowledge in order to discover hidden links and similarities between database elements.
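The sketch below illustrates the general idea of mining a query base, not the UQoRE algorithm itself: stored SQL queries are scanned for equality join conditions, and the frequency with which two columns are equated serves as a crude similarity signal between database elements. The regular expression and scoring are deliberate simplifications (for instance, table aliases are not resolved to table names).

```python
# Simplified illustration of mining a query base; not the UQoRE method itself.
import re
from collections import Counter

JOIN_PATTERN = re.compile(r"(\w+\.\w+)\s*=\s*(\w+\.\w+)")

def mine_query_base(queries) -> Counter:
    """Count (column, column) pairs that are equated in stored queries."""
    links = Counter()
    for q in queries:
        for left, right in JOIN_PATTERN.findall(q):
            pair = tuple(sorted((left.lower(), right.lower())))
            links[pair] += 1
    return links

query_base = [
    "SELECT * FROM orders o, customers c WHERE o.cust_id = c.id",
    "SELECT c.name FROM customers c, orders o WHERE c.id = o.cust_id",
]
for (a, b), n in mine_query_base(query_base).most_common():
    # A frequently recurring pair hints at a hidden link (e.g. a foreign key).
    print(f"{a} <-> {b}: equated in {n} queries")
```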
The goal of the thesis is to define and implement a flexible query language for distributed structured and semi-structured data. For example, data may be represented as HTML files in the file system, while an XML document contains metadata about these files; these two data sources then have to be merged appropriately. To satisfy the needs of a heterogeneous group of users, a flexible, adaptable query language is required. A query model is therefore developed which serves as a basis for an abstract syntax for query languages; this abstract syntax can be translated into concrete textual as well as graphical query languages. The ideas proposed in the thesis are implemented in the area of virtual courses, in particular in the project “Virtual University of Applied Sciences”.
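As a hedged illustration of an abstract query syntax that can be rendered into different concrete languages, the sketch below defines a tiny query model and two renderers, one SQL-like and one XPath-like. The data classes and renderers are invented for illustration; the thesis's actual query model is considerably richer.

```python
# Invented abstract syntax for illustration; the thesis's query model is richer.
from dataclasses import dataclass

@dataclass
class Condition:
    field: str
    op: str
    value: str

@dataclass
class AbstractQuery:
    source: str        # e.g. a file tree of HTML pages or an XML metadata document
    conditions: list   # list of Condition

def to_sql_like(q: AbstractQuery) -> str:
    """Render the abstract query in a textual, SQL-like concrete syntax."""
    where = " AND ".join(f"{c.field} {c.op} '{c.value}'" for c in q.conditions)
    return f"SELECT * FROM {q.source} WHERE {where}"

def to_xpath_like(q: AbstractQuery) -> str:
    """Render the same abstract query in an XPath-like syntax for XML metadata."""
    preds = "".join(f"[{c.field}{c.op}'{c.value}']" for c in q.conditions)
    return f"//{q.source}{preds}"

q = AbstractQuery("course", [Condition("topic", "=", "databases")])
print(to_sql_like(q))
print(to_xpath_like(q))
```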
Research in schema evolution has been driven by the need for more effective software development and maintenance. Finding the impacts of schema changes on applications and presenting them in an appropriate way are particularly challenging. We have developed a tool that finds the impacts of schema changes on applications in object-oriented systems. The tool displays the components (packages, classes, interfaces, methods and fields) of a database application system as a graph; components potentially affected by a change are indicated by changing the shape of the boxes representing them. We have evaluated the tool by our own judgement on a real-life application and by a controlled student experiment. Our results indicate that identifying impacts at the level of fields and methods can reduce the time needed to conduct schema changes and reduce the number of errors compared with identifying impacts at the level of classes. Moreover, the subjects of the experiment appreciated the idea of visualizing the impacts of schema changes.
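A rough sketch of the impact-finding step, not the evaluated tool itself: given a dependency graph over fields, methods and classes, a traversal from the changed schema element collects every component that transitively depends on it, which a front end could then redraw with a different box shape. The component names and graph encoding are assumptions for illustration.

```python
# Illustrative impact traversal; component names and edges are assumptions.
from collections import deque

# "uses" edges: component -> the components it depends on
depends_on = {
    "Order.total()":    ["Order.price"],
    "Invoice.render()": ["Order.total()"],
    "ReportService":    ["Invoice.render()"],
}

def impacted_by(changed: str, graph: dict) -> set:
    """Return every component that transitively depends on the changed element."""
    reverse = {}
    for src, targets in graph.items():
        for t in targets:
            reverse.setdefault(t, []).append(src)
    seen, queue = set(), deque([changed])
    while queue:
        node = queue.popleft()
        for dependent in reverse.get(node, []):
            if dependent not in seen:
                seen.add(dependent)
                queue.append(dependent)
    return seen

# Changing the persistent field Order.price flags the method, class and service
# above it, which the tool's graph view could redraw with a different box shape.
print(impacted_by("Order.price", depends_on))
```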
Knowledge Management (KM) is an important issue in organizations, but there are several barriers to successful KM, in particular knowledge hoarding, difficulties in identifying organizational knowledge, a lack of understanding of KM requirements, and the technical difficulties of knowledge representation. In this work we focus on the connection between the managerial and technical aspects of knowledge management. We study the nature of organizational knowledge in order to derive knowledge management requirements that support the design of computerized Knowledge Management Systems. The work consists of three parts: 1) defining the organizational knowledge that needs to be managed; 2) using the definition of organizational knowledge and its attributes to identify knowledge management requirements, which involves identifying the various facets of knowledge as well as the perceived meta-knowledge requirements of users; and 3) deriving guidelines for the efficient design of knowledge management systems.
Contemporary workflow-management systems cannot represent change or evolution of business processes. When a change is needed due to an external reason, an offline procedure is invoked in order to create a new workflow engine template for future instances in the workflow enactment module. The standard interfaces do not deal with business process metadata in a way that can actually change it in reaction to inbound knowledge. There are many relevant cases, especially in the virtual enterprise arena, where the business process is not deterministic and is influenced by external parameters (such as the selection of virtual partners), so the knowledge of what should be done is available but external to the system. There is a need for a modeling mechanism that enables process definitions to be transferred automatically, without human intervention. One way of confronting these issues is to use a rule-based engine to monitor business process execution. This engine will contain internal meta-rules that refer to metadata entities, i.e., rules that describe how to act on other rules (business process routing) when a change is detected, while executing all needed consistency checks.
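The sketch below illustrates the meta-rule idea in a hedged way: an engine whose meta-rule rewrites an ordinary routing rule when inbound knowledge (here, the replacement of a virtual partner) arrives, running a consistency check before activating the change. The event format, rule representation and consistency check are assumptions made for illustration only.

```python
# Illustrative meta-rule engine; event and rule formats are assumptions.
routing_rules = {"ship_order": ["pack", "dispatch_partner_A", "invoice"]}

def consistent(route) -> bool:
    """Toy consistency check: every route must end with an invoicing step."""
    return bool(route) and route[-1] == "invoice"

def meta_rule_partner_change(event: dict, rules: dict) -> None:
    """Meta-rule: when a virtual partner is replaced, rewrite affected routing rules."""
    if event.get("type") != "partner_changed":
        return
    old, new = event["old_step"], event["new_step"]
    for name, route in rules.items():
        candidate = [new if step == old else step for step in route]
        if consistent(candidate):   # only activate changes that pass the check
            rules[name] = candidate

meta_rule_partner_change(
    {"type": "partner_changed",
     "old_step": "dispatch_partner_A", "new_step": "dispatch_partner_B"},
    routing_rules,
)
print(routing_rules)  # future instances follow the new route without manual rework
```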
Given the increasing importance of globally distributed software development (GDSD) over the last decade, it is surprising that empirical research in this area is still at a very early stage. The few existing studies suggest that traditional coordination and control mechanisms can be effective for these projects only with support from appropriate information technology. However, at present, little is known about the success of current Information and Communication Technology (ICT) support in the context of GDSD projects. Therefore, the main question this research addresses is: what ICT-based support is appropriate for globally distributed software development projects? The objectives of this research are to elicit and develop the functional requirements for ICT support for GDSD projects, to analyze the gap between existing tools and these requirements, and to develop an Internet-based integrated architecture of tools that fills these gaps.