
Thesis

English

ID: http://hdl.handle.net/10251/39987

Towards integrating multi-criteria analysis techniques in dependability benchmarking

Abstract

Increasing integration scales are leading to a multitude of new devices and technologies, such as smartphones, ad hoc networks, and reprogrammable devices, among others. The proliferation of these devices, with improvements in autonomy and communication capabilities, is paving the way for a new paradigm known as the Internet of Things, where computing is ubiquitous, devices cooperate and autonomously exchange information with one another, and IT infrastructures improve the well-being of people and society. This paradigm goes hand in hand with a large number of business opportunities for manufacturers, app developers, and service providers in areas as diverse as consumer electronics, transport, and health.

In order to take full advantage of these opportunities, industry now relies more than ever on the use and reuse of products developed by third parties, which allows manufacturers to reduce both the time to market and the cost of their products. In this race to be first to market, companies are concerned about the dependability of components developed by third parties, as well as of the final products themselves, since unexpected failures could damage the reputation of the manufacturer and limit the acceptance of its new products. Therefore, assessment techniques adapted to the dependability context are being deployed to assess, compare, and select (i) those components best suited for integration into a new product, and (ii) the configuration parameters that provide the best balance between performance and dependability. However, although dependability benchmarking processes have been defined and applied to a wide range of application environments, precise and rigorous procedures to carry out the decision-making itself have not yet been established, thus hampering the goal of such approaches: a fair comparison and selection of existing alternatives taking both performance and dependability attributes into account.
Indeed, experimental results can be interpreted in many different ways depending on the context of use of the system and the subjective judgement of the evaluator. Defining a clear and concise decision-making process is therefore mandatory to allow the findings to be replicated. This final Master's work thus focuses on integrating a multi-criteria decision-making methodology into a common dependability benchmarking process. The challenges to be addressed include how to reconcile the requirements of industry, which favours a single measure characterising the system, with those of academia, where it is preferable to obtain as many measures as possible, and how to move from one representation to the other without losing relevant information.
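The aggregation challenge described above, collapsing many performance and dependability measures into a single score so that alternatives can be compared, can be illustrated with a simple weighted-sum sketch. This is not the methodology proposed in the thesis: the component names, measures, weights, and min-max normalisation below are all illustrative assumptions.

```python
# Illustrative sketch (assumed, not the thesis' actual method): a weighted-sum
# aggregation of several performance/dependability measures into one score.

def normalize(values, higher_is_better=True):
    """Min-max normalise a list of raw measures to [0, 1]."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [1.0] * len(values)
    scaled = [(v - lo) / (hi - lo) for v in values]
    return scaled if higher_is_better else [1.0 - s for s in scaled]

def score(alternatives, weights, directions):
    """Weighted-sum score for each alternative across all criteria."""
    criteria = list(weights)
    # Normalise each criterion column across all alternatives.
    norm = {}
    for c in criteria:
        col = [alt[c] for alt in alternatives.values()]
        norm[c] = dict(zip(alternatives, normalize(col, directions[c])))
    # Collapse the normalised measures into a single score per alternative.
    return {name: sum(weights[c] * norm[c][name] for c in criteria)
            for name in alternatives}

# Hypothetical comparison of two third-party components.
components = {
    "A": {"throughput": 900, "failure_rate": 0.02},
    "B": {"throughput": 700, "failure_rate": 0.005},
}
weights = {"throughput": 0.4, "failure_rate": 0.6}
directions = {"throughput": True, "failure_rate": False}  # lower failure rate is better

ranking = score(components, weights, directions)
```

Note that the final ranking depends entirely on the chosen weights and normalisation, which is precisely the subjectivity the thesis argues a rigorous decision-making process must make explicit.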
