899 results for Output-only Modal-based Damage Identification
Abstract:
Secure Multi-party Computation (MPC) enables a set of parties to collaboratively compute, using cryptographic protocols, a function over their private data in such a way that the participants do not see each other's data; they only see the final output. Typical MPC examples include statistical computations over joint private data, private set intersection, and auctions. While these applications are examples of monolithic MPC, richer MPC applications move between "normal" (i.e., per-party local) and "secure" (i.e., joint, multi-party secure) modes repeatedly, resulting overall in mixed-mode computations. For example, we might use MPC to implement the role of the dealer in a game of mental poker -- the game will be divided into rounds of local decision-making (e.g. bidding) and joint interaction (e.g. dealing). Mixed-mode computations are also used to improve performance over monolithic secure computations. Starting with the Fairplay project, several MPC frameworks have been proposed in the last decade to help programmers write MPC applications in a high-level language, while the toolchain manages the low-level details. However, these frameworks are either not expressive enough to allow writing mixed-mode applications or lack formal specification and reasoning capabilities, thereby diminishing the parties' trust in such tools and in the programs written using them. Furthermore, none of the frameworks provides a verified toolchain to run the MPC programs, leaving open the potential for security holes that can compromise the privacy of parties' data. This dissertation presents language-based techniques to make MPC more practical and trustworthy. First, it presents the design and implementation of a new MPC Domain Specific Language, called Wysteria, for writing rich mixed-mode MPC applications. Wysteria provides several benefits over previous languages, including a conceptual single thread of control, generic support for more than two parties, high-level abstractions for secret shares, and a fully formalized type system and operational semantics. Using Wysteria, we have implemented several MPC applications, including, for the first time, a card dealing application. The dissertation next presents Wys*, an embedding of Wysteria in F*, a full-featured verification-oriented programming language. Wys* improves on Wysteria along three lines: (a) it enables programmers to formally verify the correctness and security properties of their programs. As far as we know, Wys* is the first language to provide verification capabilities for MPC programs. (b) It provides a partially verified toolchain to run MPC programs, and finally (c) it enables MPC programs to use, with no extra effort, standard language constructs from the host language F*, thereby making it more usable and scalable. Finally, the dissertation develops static analyses that help optimize monolithic MPC programs into mixed-mode MPC programs, while providing privacy guarantees similar to those of the monolithic versions.
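The "secret shares" abstraction mentioned above can be illustrated with plain additive secret sharing. The sketch below is a generic Python illustration under assumed parameters (a prime field and three parties); it is not Wysteria or Wys* code, and the function names are hypothetical.

```python
# Minimal sketch of additive secret sharing over a prime field -- an
# illustration of the "secret shares" idea, not Wysteria/Wys* syntax.
import secrets

PRIME = 2**61 - 1  # assumed field modulus; any sufficiently large prime works here

def share(value, n_parties):
    """Split `value` into n additive shares that sum to it modulo PRIME."""
    shares = [secrets.randbelow(PRIME) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

def reconstruct(shares):
    """Recombine all shares; any strict subset reveals nothing about the value."""
    return sum(shares) % PRIME

# Example: three parties jointly hold a bid without any single party seeing it.
bid = 42
assert reconstruct(share(bid, 3)) == bid
# Addition of two secrets can be done share-wise, locally at each party.
a, b = share(10, 3), share(32, 3)
assert reconstruct([(x + y) % PRIME for x, y in zip(a, b)]) == 42
```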
Abstract:
Part 4: Transition Towards Product-Service Systems
Abstract:
Strawberries harvested for processing as frozen fruit are currently de-calyxed manually in the field. This process requires the removal of the stem cap with green leaves (i.e. the calyx) and incurs many disadvantages when performed by hand. Not only does it require maintaining cutting-tool sanitation, but it also increases labor time and the exposure of the de-capped strawberries before in-plant processing. This leads to labor inefficiency and decreased harvest yield. Moving the calyx removal process from the fields to the processing plants would reduce field labor and improve management and logistics, while increasing annual yield. As labor prices continue to increase, the strawberry industry has shown great interest in the development and implementation of an automated calyx removal system. In response, this dissertation describes the design, operation, and performance of a full-scale automatic vision-guided intelligent de-calyxing (AVID) prototype machine. The AVID machine utilizes commercially available equipment to produce a relatively low-cost automated de-calyxing system that can be retrofitted into existing food processing facilities. This dissertation is organized into five sections. The first two sections include a machine overview and a 12-week processing plant pilot study. Results of the pilot study indicate the AVID machine is able to de-calyx grade-1-with-cap conical strawberries at roughly 66 percent output weight yield at a throughput of 10,000 pounds per hour. The remaining three sections describe in detail the three main components of the machine: a strawberry loading and orientation conveyor, a machine vision system for calyx identification, and a synchronized multi-waterjet knife calyx removal system. In short, the loading system utilizes rotational energy to orient conical strawberries. The machine vision system determines cut locations through real-time RGB feature extraction. The high-speed multi-waterjet knife system uses direct-drive actuation to position 30,000 psi cutting streams at precise coordinates for calyx removal. Based on the observations and studies performed within this dissertation, the AVID machine is seen to be a viable option for automated high-throughput strawberry calyx removal. A summary of future tasks and further improvements is discussed at the end.
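As a rough illustration of the RGB feature-extraction step described above, the sketch below locates the green calyx region in a single-berry frame and derives a cut line just below it. The thresholds, the one-berry-per-frame assumption, and the function name are hypothetical illustrations, not the AVID machine's actual algorithm.

```python
# Hedged sketch: find "green" (calyx) pixels in an RGB frame and return the
# image row just below the lowest green pixel as a candidate cut location.
import numpy as np

def calyx_cut_row(rgb_frame, green_margin=30):
    r = rgb_frame[..., 0].astype(int)
    g = rgb_frame[..., 1].astype(int)
    b = rgb_frame[..., 2].astype(int)
    # Green pixels: green channel dominates red and blue by a fixed margin.
    calyx_mask = (g > r + green_margin) & (g > b + green_margin)
    rows = np.where(calyx_mask.any(axis=1))[0]
    if rows.size == 0:
        return None                # no calyx detected in this frame
    return int(rows.max()) + 1     # cut just below the lowest green row

# Example with a synthetic 100x100 frame: green band on top, red berry below.
frame = np.zeros((100, 100, 3), dtype=np.uint8)
frame[:20, :, 1] = 200    # calyx region
frame[20:, :, 0] = 200    # fruit region
print(calyx_cut_row(frame))  # -> 20
```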
Abstract:
In our research we investigate the output accuracy of discrete event simulation models and agent based simulation models when studying human-centric complex systems. In this paper we focus on human reactive behaviour, since it can be implemented in both modelling approaches using standard methods. As a case study we have chosen the retail sector, and in particular the operations of the fitting room in the womenswear department of a large UK department store. In our case study we looked at ways of determining the efficiency of implementing new management policies for the fitting room operation through modelling the reactive behaviour of staff and customers of the department. First, we carried out a validation experiment in which we compared the results from our models to the performance of the real system. This experiment also allowed us to establish differences in output accuracy between the two modelling methods. In a second step, a multi-scenario experiment was carried out to study the behaviour of the models when they are used for the purpose of operational improvement. Overall we have found that, for our case study example, both discrete event simulation and agent based simulation have the same potential to support the investigation into the efficiency of implementing new management policies.
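For readers unfamiliar with the modelling style described above, the sketch below is a minimal discrete-event simulation of a fitting-room queue using an event list of room-release times. The arrival and try-on rates and the number of rooms are illustrative assumptions, not the case-study values or the models used in this work.

```python
# Minimal discrete-event sketch of a fitting-room queue: customers arrive,
# wait for the earliest-available room, and occupy it for a random try-on time.
import heapq, random

def simulate(n_rooms=4, n_customers=200, mean_interarrival=2.0, mean_try_on=6.0, seed=1):
    random.seed(seed)
    t, arrivals = 0.0, []
    for _ in range(n_customers):
        t += random.expovariate(1.0 / mean_interarrival)
        arrivals.append(t)
    room_free_at = [0.0] * n_rooms         # next time each room becomes free
    heapq.heapify(room_free_at)
    waits = []
    for arrival in arrivals:
        free_at = heapq.heappop(room_free_at)   # earliest-available room
        start = max(arrival, free_at)
        waits.append(start - arrival)
        heapq.heappush(room_free_at, start + random.expovariate(1.0 / mean_try_on))
    return sum(waits) / len(waits)

print(f"mean wait: {simulate():.2f} min")
```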
Abstract:
In this paper, we investigate output accuracy for a Discrete Event Simulation (DES) model and an Agent Based Simulation (ABS) model. The purpose of this investigation is to find out which of these simulation techniques is better suited for modelling human reactive behaviour in the retail sector. In order to study the output accuracy of both models, we carried out a validation experiment in which we compared the results from our simulation models to the performance of a real system. Our experiment was carried out using a large UK department store as a case study. We used DES and ABS to determine an efficient implementation of a management policy for the store's fitting room. Overall, we found that both simulation models were a good representation of the real system when modelling human reactive behaviour.
Abstract:
Master's dissertation, Language Sciences, Faculty of Human and Social Sciences, Universidade do Algarve, 2014
Abstract:
Nowadays, one of the most important areas of interest in archaeology is the characterization of submerged cultural heritage. The Mediterranean Sea is rich in archaeological findings due to storms, accidents and naval battles since prehistoric times. Chemical analysis of submerged materials is an extremely valuable source of information on the origin and provenance of the wrecks, and also on the raw materials employed during the manufacturing of the objects found in these sites. Nevertheless, sometimes it is not possible to extract the archaeological material from the marine environment due to the size of the sample, the legislation, or preservation purposes. In these cases, in-situ analysis becomes the only alternative for obtaining information. In spite of this demand, no analytical techniques are available for the in-situ chemical characterization of underwater materials. The versatility of laser-induced breakdown spectroscopy (LIBS) has been successfully tested in oceanography [1]. Advantages such as rapid, in-situ analysis with no sample preparation make LIBS a suitable alternative for field measurements. To further exploit the inherent advantages of the technology, a mobile fiber-based LIBS platform capable of performing remote measurements at ranges of up to 50 meters has been designed for the recognition and identification of artworks in underwater archaeological shipwrecks. The LIBS prototype featured both single-pulse (SP-LIBS) and multi-pulse (MP-LIBS) excitation [2]. The use of multi-pulse excitation allowed an increased laser beam energy (up to 95 mJ) to be transmitted through the optical fiber. This excitation mode results in an improved performance of the equipment in terms of an extended range of analysis (to a depth of 50 m) and a broader variety of samples that can be analyzed (i.e., rocks, marble, ceramics and concrete). In the present work, the design and construction considerations of the instrument are reported and its performance is discussed on the basis of the spectral response, the remote irradiance achieved over the range of analysis and its influence on plasma properties, as well as the effect of the laser pulse duration and purge gas on the LIBS signal. Also, to check the reliability and reproducibility of the instrument for field analysis, several robustness tests were performed outside the lab. Finally, the capability of this instrument was successfully demonstrated in an underwater archaeological shipwreck (San Pedro de Alcántara, Malaga).
Abstract:
Although DMTA is nowadays one of the most widely used techniques for characterizing the thermo-mechanical behaviour of polymers, it is only effective for small-amplitude oscillatory tests and is limited to single-frequency analysis (linear regime). In this thesis work, a Fourier-transform-based experimental system has proven to give hints on structural and chemical changes in specimens during large-amplitude oscillatory tests by exploiting multi-frequency spectral analysis, turning out to be a more sensitive tool than the classical linear approach. The test campaign focused on three test types: strain sweep tests, damage investigation, and temperature sweep tests.
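The core idea of the multi-frequency spectral analysis described above can be sketched as decomposing the measured oscillatory response into harmonics of the excitation frequency and tracking the relative intensity of the higher harmonics (e.g. I3/I1) as a nonlinearity indicator. The signal below is synthetic and the parameters are assumptions, not the thesis' acquisition code.

```python
# Sketch: Fourier decomposition of a large-amplitude oscillatory response and
# extraction of the relative third-harmonic intensity I3/I1.
import numpy as np

fs, f0, cycles = 1000.0, 1.0, 20              # sampling rate [Hz], excitation freq [Hz], cycles
t = np.arange(0, cycles / f0, 1.0 / fs)
# Synthetic nonlinear response: fundamental plus a weak third harmonic.
stress = np.sin(2 * np.pi * f0 * t) + 0.05 * np.sin(2 * np.pi * 3 * f0 * t)

spectrum = np.abs(np.fft.rfft(stress)) / len(t)
freqs = np.fft.rfftfreq(len(t), 1.0 / fs)
i1 = spectrum[np.argmin(np.abs(freqs - f0))]
i3 = spectrum[np.argmin(np.abs(freqs - 3 * f0))]
print(f"I3/I1 = {i3 / i1:.3f}")               # grows as the response becomes more nonlinear
```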
Abstract:
Hazardous materials are substances that, if not regulated, can pose a threat to human populations and their environmental health, safety or property when transported in commerce. About 1.5 million tons of hazardous material shipments are transported by truck in the US annually, with a steady increase of approximately 5% per year. The objective of this study was to develop a routing tool for hazardous material transport that facilitates reduced environmental impacts and fewer transportation difficulties, while still finding paths that remain compelling for the shipping carriers in terms of trucking cost. The study started with the identification of inhalation hazard impact zones and explosion protective areas around the locations of hypothetical hazardous material releases, considering different parameters (i.e., chemical characteristics, release quantities, atmospheric conditions, etc.). Results showed that, depending on the quantity of the release, the chemical, and the atmospheric stability (a function of wind speed, meteorology, sky cover, time and location of accidents, etc.), the consequences of these incidents can differ. The study was extended by selecting other evaluation criteria for further investigation, because health risk as an evaluation criterion would not be the only concern in the selection of routes. Transportation difficulties (i.e., road blockage and congestion) were incorporated as an important factor due to their indirect impact/cost on the users of transportation networks. Trucking costs were also considered one of the primary criteria in the selection of hazardous material paths; otherwise the suggested routes would not have been convincing for the shipping companies. The last, but not least, criterion was the proximity of public places to the routes. The approach evolved from a simple framework into a sophisticated and efficient GIS-based tool able to investigate the transportation network of any given study area and capable of generating the best routing options for cargos. The suggested tool uses a multi-criteria decision-making method, which considers the priorities of the decision makers in choosing the cargo routes. Comparison of the routing options based on each criterion, and also of the overall suitability of each path with regard to all the criteria (using a multi-criteria decision-making method), showed that tools similar to the one proposed by this study can provide decision makers with insights in the area of hazardous material transport. The tool shows the probable consequences of choosing each path in an easily understandable way, in the format of maps and tables, which makes the trade-offs between costs and risks considerably simpler to assess, as in some cases slightly compromising on trucking cost may drastically decrease the probable health risk and/or traffic difficulties. This will not only be rewarding to the community by making cities safer places to live, but can also be beneficial to shipping companies by allowing them to advertise as environmentally friendly carriers.
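A simple way to picture the multi-criteria route comparison described above is a weighted sum of min-max-normalized criterion scores. The sketch below uses hypothetical criteria names, weights, and candidate-route values for illustration; it is not the study's GIS tool or its actual decision-making method.

```python
# Hedged sketch: rank candidate routes by a weighted sum of normalized criteria
# (lower raw values are better for every criterion in this illustration).
def rank_routes(routes, weights):
    """routes: {name: {criterion: raw cost}}; weights: {criterion: weight}."""
    criteria = list(weights)
    lo = {c: min(r[c] for r in routes.values()) for c in criteria}
    hi = {c: max(r[c] for r in routes.values()) for c in criteria}
    def score(r):
        # Normalize each criterion to [0, 1] (0 = best) and combine by weight.
        return sum(weights[c] * (r[c] - lo[c]) / (hi[c] - lo[c] or 1.0) for c in criteria)
    return sorted(routes, key=lambda name: score(routes[name]))

weights = {"health_risk": 0.4, "trucking_cost": 0.3, "congestion": 0.2, "public_proximity": 0.1}
routes = {
    "highway":  {"health_risk": 8.0, "trucking_cost": 100, "congestion": 3.0, "public_proximity": 6.0},
    "bypass":   {"health_risk": 3.0, "trucking_cost": 115, "congestion": 2.0, "public_proximity": 2.0},
    "downtown": {"health_risk": 9.0, "trucking_cost": 95,  "congestion": 8.0, "public_proximity": 9.0},
}
print(rank_routes(routes, weights))  # a small cost premium can buy a large risk reduction
```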
Abstract:
The accuracy of matrix-assisted laser desorption ionization time-of-flight mass spectrometry (MALDI-TOF MS) for identifying Streptococcus suis isolates obtained from pigs, wild animals, and humans was evaluated using a PCR-based identification assay as the gold standard. In addition, MALDI-TOF MS was compared with the commercial multi-test Rapid ID 32 STREP system. Of the 129 S. suis isolates included in the study and identified by the molecular method, only 31 (24.03%) had score values ≥2.300, and 79 (61.24%) gave score values between 2.000 and 2.299. After updating the currently available S. suis MALDI Biotyper database with the spectra of three additional clinical isolates of serotypes 2, 7, and 9, most isolates had statistically significantly higher score values (mean score: 2.65) than those obtained using the original database (mean score: 2.182). Considering the results of the present study, we suggest using a less restrictive threshold score of ≥2.000 for reliable species identification of S. suis. According to this cut-off value, a total of 125 S. suis isolates (96.9%) were correctly identified using the updated database. These data indicate an excellent performance of MALDI-TOF MS for the identification of S. suis.
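The decision rule suggested above reduces to a simple score cut-off. The tiny sketch below applies that rule; the example scores are made up and only the ≥2.000 threshold comes from the study.

```python
# Sketch of the proposed cut-off rule: accept a Biotyper score >= 2.000 as a
# reliable species-level identification (example scores are illustrative).
def classify(score, threshold=2.000):
    return "reliable species ID" if score >= threshold else "not reliable"

for s in (2.65, 2.18, 1.93):
    print(s, "->", classify(s))
```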
Abstract:
The Swedish-speaking minority in Finland, often described as an ‘elite minority’, holds a special position in the country. With linguistic rights protected by the constitution of Finland, Swedish-speakers, a minority of only 5.3%, are often described in public discourse and in academic and statistical studies as happier, healthier and better off economically than the Finnish-speaking majority. As such, the minority is a unique example among language minorities in Europe. Knowledge derived from qualitatively grounded studies on the topic is, however, lacking, meaning that there is a gap in the understanding of the nature and complexity of the minority. Drawing on ethnographic research conducted in four different locations in Finland over a period of 12 months, this thesis provides a rich, theoretically grounded and empirically informed account of the identifications and sites of belonging of this diverse minority. The thesis makes a contribution to theoretical, methodological and empirical research on the Swedish-speaking minority, to debates around identity and belonging, and to ethnographic methodological approaches. Making use of novel methodology in studying Swedish-speaking Finns, this thesis moves beyond generalisations and simplifications about the minority's nature and character. Drawing on rich ethnographic empirical material, the thesis interrogates various aspects of the lived experience of Swedish-speaking Finns by combining the concepts of belonging and identification. Some of the issues explored are the way in which belonging can be regionally specific, how Swedish-speakers create Swedish spaces, how language use is situational and variable and acts as a marker of identity, and finally how identifications and sites of belonging among the minority are extremely varied and complex. The thesis concludes that there are various sites of belonging and identification available to Swedish-speakers, and that these need to be studied and considered in order to gain an accurate picture of the lived experience of the minority. It also argues that while identifications are based on collective imagery, this imagery can vary among Swedish-speakers, and identifications are multiple and situational. Finally, while language is a key commonality for the minority, the meanings attached to it are not only concerned with ‘Finland Swedishness’, but are connected to various other factors, such as the context a person grew up in and the region one lives in. The complex issues affecting the lived experience of Swedish-speaking Finns cannot be understood without the contribution of findings from qualitative research. This thesis therefore points towards a new kind of understanding of Swedish-speaking Finns, moving away from stereotypes and simplifications and shifting our gaze towards a richer perception of the minority.
Abstract:
This paper describes a method to automatically obtain, from a set of impedance measurements at different frequencies, an equivalent circuit composed of lumped elements, based on the vector fitting algorithm. The method starts from the impedance measurement of the circuit and then, through the recursive use of vector fitting, identifies the circuit topology and the component values of the lumped elements. The method can be expanded to include other components commonly used in impedance spectroscopy. The method is first described, and two examples then highlight its robustness and showcase its applicability.
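To make the underlying task concrete, the sketch below fits synthetic impedance measurements to a fixed lumped-element topology (a series resistor plus a parallel R-C branch) with nonlinear least squares. This is a simplified stand-in under assumed component values, not the paper's recursive vector-fitting procedure, which also identifies the topology itself.

```python
# Hedged sketch: fit measured impedance data to a lumped R_s + (R_p || C) model.
import numpy as np
from scipy.optimize import least_squares

def z_model(params, w):
    """Impedance of a series resistor plus a parallel R-C branch."""
    rs, rp, c = params
    return rs + rp / (1 + 1j * w * rp * c)

# Synthetic "measurement": R_s = 10 ohm, R_p = 1 kohm, C = 1 uF, with 1% noise.
rng = np.random.default_rng(0)
w = 2 * np.pi * np.logspace(0, 5, 60)
z_meas = z_model([10.0, 1e3, 1e-6], w) * (1 + 0.01 * rng.standard_normal(w.size))

def residual(params):
    diff = z_model(params, w) - z_meas
    return np.concatenate([diff.real, diff.imag])   # stack real and imaginary parts

fit = least_squares(residual, x0=[1.0, 100.0, 1e-7], x_scale=[10.0, 1e3, 1e-6])
print("estimated R_s, R_p, C:", fit.x)
```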
Abstract:
This PhD thesis reports the main activities carried out during the three-year “Mechanics and advanced engineering sciences” course at the Department of Industrial Engineering of the University of Bologna. The research project title is “Development and analysis of high efficiency combustion systems for internal combustion engines”, and the main topic is knock, one of the main challenges for boosted gasoline engines. Through experimental campaigns, modelling activity and test bench validation, four different aspects have been addressed to tackle the issue. The main path goes towards the definition and calibration of a knock-induced damage model, to be implemented in the on-board control strategy, but also usable for engine calibration and potentially during engine design. Ionization current signal capabilities have been investigated to fully replace the pressure sensor and to develop a robust on-board closed-loop combustion control strategy, in both knock-free and knock-limited conditions. Water injection is a powerful solution to mitigate knock intensity and reduce exhaust temperature, improving fuel consumption; its capabilities have been modelled and validated at the test bench. Finally, an empirical model is proposed to predict the engine knock response, depending on several operating conditions and control parameters, including the injected water quantity.
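As a purely illustrative picture of what a knock-induced damage accumulation rule can look like, the sketch below adds damage for cycles whose knock index (e.g. MAPO, the maximum amplitude of pressure oscillations) exceeds a threshold, weighted by the exceedance. The functional form, threshold, exponent and scale are hypothetical assumptions, not the damage model defined and calibrated in the thesis.

```python
# Hypothetical knock-damage accumulation sketch: only knocking cycles above a
# threshold contribute, with a severity that grows with the exceedance.
def accumulated_damage(mapo_per_cycle, threshold=1.5, exponent=2.0, scale=1e-4):
    damage = 0.0
    for mapo in mapo_per_cycle:
        if mapo > threshold:
            damage += scale * (mapo - threshold) ** exponent
    return damage  # to be compared against an allowable budget by the control strategy

print(accumulated_damage([0.4, 1.2, 2.8, 3.5, 1.6]))
```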
Abstract:
The job of a historian is to understand what happened in the past, resorting in many cases to written documents as a firsthand source of information. Text, however, is not the only source of knowledge. Pictorial representations, in fact, have also accompanied the main events of the historical timeline. In particular, the opportunity of visually representing circumstances has bloomed since the invention of photography, with the possibility of capturing specific events in real time. Thanks to the widespread use of digital technologies (e.g. smartphones and digital cameras), networking capabilities and the consequent availability of multimedia content, the academic and industrial research communities have developed artificial intelligence (AI) paradigms with the aim of inferring, transferring and creating new layers of information from images, videos, etc. While AI communities are devoting much of their attention to analyzing digital images, from a historical research standpoint more interesting results may be obtained by analyzing analog images from the pre-digital era. Within this scenario, the aim of this work is to analyze a collection of analog documentary photographs, building upon state-of-the-art deep learning techniques. In particular, the analysis carried out in this thesis aims at producing the two following results: (a) estimating the date of an image, and (b) recognizing its background socio-cultural context, as defined by a group of historical-sociological researchers. Given these premises, the contribution of this work amounts to: (i) the introduction of a historical dataset of “Family Album” images spanning the twentieth century, (ii) the introduction of a new classification task regarding the identification of the socio-cultural context of an image, and (iii) the exploitation of different deep learning architectures to perform image dating and image socio-cultural context classification.
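The image-dating task described above can be framed as classification over decades with a standard CNN backbone. The sketch below sets up such a classifier and runs one training step on a dummy batch; the backbone choice, the ten-decade label set, and the hyperparameters are assumptions for illustration, not the architectures evaluated in the thesis (a second head of the same form could handle the socio-cultural context labels).

```python
# Hedged sketch: image dating as decade classification with a ResNet backbone.
import torch
import torch.nn as nn
from torchvision import models

N_DECADES = 10                             # 1900s ... 1990s (assumed label set)
model = models.resnet18(weights=None)      # or start from a pretrained checkpoint
model.fc = nn.Linear(model.fc.in_features, N_DECADES)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One illustrative training step on a dummy batch of 224x224 RGB images.
images = torch.randn(8, 3, 224, 224)
decades = torch.randint(0, N_DECADES, (8,))
logits = model(images)
loss = criterion(logits, decades)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(f"dummy-batch loss: {loss.item():.3f}")
```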
Abstract:
A densely built environment is a complex system of infrastructure, nature, and people closely interconnected and interacting. Vehicles, public transport, weather action, and sports activities constitute a manifold set of excitation and degradation sources for civil structures. In this context, operators should consider different factors in a holistic approach to assessing the structural health state. Vibration-based structural health monitoring (SHM) has demonstrated great potential as a decision-supporting tool to schedule maintenance interventions. However, most excitation sources are considered an issue for practical SHM applications, since traditional methods are typically based on strict assumptions of input stationarity. Last-generation low-cost sensors present limitations related to modest sensitivity and a high noise floor compared to traditional instrumentation. If these devices are used for SHM in urban scenarios, short vibration recordings collected during high-intensity events and vehicle passages may be the only available datasets with a sufficient signal-to-noise ratio. While researchers have devoted effort to mitigating the effects of short-term phenomena in vibration-based SHM, the ultimate goal of this thesis is to exploit them and obtain valuable information on the structural health state. First, this thesis proposes strategies and algorithms for smart sensors, operating individually or in a distributed computing framework, to identify damage-sensitive features based on instantaneous modal parameters and influence lines. Ordinary traffic and people's activities become essential sources of excitation, while human-powered vehicles, instrumented with smartphones, take the role of roving sensors in crowdsourced monitoring strategies. The technical and computational apparatus is optimized using in-memory computing technologies. Moreover, identifying additional local features can be particularly useful to support the damage assessment of complex structures. To this end, smart coatings are studied to enable self-sensing properties in ordinary structural elements. In this context, a machine-learning-aided tomography method is proposed to interpret the data provided by a nanocomposite paint interrogated electrically.
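A minimal picture of the output-only, vibration-based idea underlying this work is estimating a natural frequency from a short ambient-vibration record by peak-picking its spectrum; shifts in the identified frequency can then serve as a crude damage-sensitive feature. The record below is synthetic and the parameters are assumptions; the thesis' instantaneous-parameter, influence-line and distributed algorithms go well beyond this sketch.

```python
# Sketch: output-only natural-frequency estimate from a short vibration record
# via Welch spectrum peak picking.
import numpy as np
from scipy.signal import welch

fs = 200.0                                   # sampling rate [Hz], assumed
t = np.arange(0, 30, 1 / fs)                 # 30-second record
rng = np.random.default_rng(0)
# Lightly damped 3.2 Hz mode excited through broadband (traffic-like) noise.
accel = np.sin(2 * np.pi * 3.2 * t) * np.exp(-0.02 * t) + 0.3 * rng.standard_normal(t.size)

freqs, psd = welch(accel, fs=fs, nperseg=1024)
f_n = freqs[np.argmax(psd)]
print(f"identified natural frequency: {f_n:.2f} Hz")   # shifts in f_n can flag damage
```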