872 results for Black box
Abstract:
Graduate Program in Arts - IA
Abstract:
Daily rhythmic processes are coordinated by circadian clocks, which are present in numerous central and peripheral tissues. In mammals, two circadian clocks, the food-entrainable oscillator (FEO) and methamphetamine-sensitive circadian oscillator (MASCO), are "black box" mysteries because their anatomical loci are unknown and their outputs are not expressed under normal physiological conditions. In the current study, the investigation of the timekeeping mechanisms of the FEO and MASCO in mice with disruption of all three paralogs of the canonical clock gene, Period, revealed unique and convergent findings. We found that both the MASCO and FEO in Per1(-/-)/Per2(-/-)/Per3(-/-) mice are circadian oscillators with unusually short (approximately 21 h) periods. These data demonstrate that the canonical Period genes are involved in period determination in the FEO and MASCO, and computational modeling supports the hypothesis that the FEO and MASCO use the same timekeeping mechanism or are the same circadian oscillator. Finally, these studies identify Per1(-/-)/Per2(-/-)/Per3(-/-) mice as a unique tool critical to the search for the elusive anatomical location(s) of the FEO and MASCO.
Abstract:
The behavior of composed Web services depends on the results of the invoked services; unexpected behavior of one of the invoked services can threaten the correct execution of an entire composition. This paper proposes an event-based approach to black-box testing of Web service compositions based on event sequence graphs, which are extended by facilities to deal not only with service behavior under regular circumstances (i.e., where cooperating services are working as expected) but also with their behavior in undesirable situations (i.e., where cooperating services are not working as expected). Furthermore, the approach can be used independently of artifacts (e.g., Business Process Execution Language) or type of composition (orchestration/choreography). A large case study, based on a commercial Web application, demonstrates the feasibility of the approach and analyzes its characteristics. Test generation and execution are supported by dedicated tools. In particular, the use of an enterprise service bus for test execution is noteworthy and differs from other approaches. The results of the case study suggest that the new approach can detect faults systematically, performing properly even with complex and large compositions. Copyright © 2012 John Wiley & Sons, Ltd.
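The abstract does not reproduce the test-generation procedure. As a rough, generic sketch of the underlying idea (not the paper's tool chain), an event sequence graph can be stored as an adjacency map and test cases derived as event sequences that cover its edges, including edges that model undesirable service responses; all event names below are hypothetical:

```python
# Rough sketch: edge coverage over an event sequence graph (ESG).
from collections import deque

def path_to(esg, start, target):
    """Shortest event path from start to target (breadth-first search), or None."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path
        for nxt in esg.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

def edge_covering_sequences(esg, start):
    """One test sequence per edge: reach the edge's source event, then trigger the edge."""
    sequences = []
    for src, successors in esg.items():
        for dst in successors:
            prefix = path_to(esg, start, src)
            if prefix is not None:          # edge reachable from the start event
                sequences.append(prefix + [dst])
    return sequences

# Hypothetical composition: a fault event models an undesirable service response.
esg = {"searchProduct": ["addToCart"],
       "addToCart": ["pay", "serviceFault"],
       "pay": [], "serviceFault": []}
print(edge_covering_sequences(esg, "searchProduct"))
```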
Abstract:
Interactive theorem provers (ITPs for short) are tools whose final aim is to certify proofs written by human beings. To reach that objective they have to fill the gap between the high-level language used by humans for communicating and reasoning about mathematics and the lower-level language that a machine is able to “understand” and process. The user perceives this gap in terms of missing features or inefficiencies. The developer tries to accommodate the user's requests without increasing the already high complexity of these applications. We believe that satisfactory solutions can only come from a strong synergy between users and developers. We devoted most of our PhD to designing and developing the Matita interactive theorem prover. The software was born in the computer science department of the University of Bologna as the result of composing together all the technologies developed by the HELM team (to which we belong) for the MoWGLI project. The MoWGLI project aimed at giving accessibility through the web to the libraries of formalised mathematics of various interactive theorem provers, taking Coq as the main test case. The motivations for giving life to a new ITP are:
• to study the architecture of these tools, with the aim of understanding the source of their complexity;
• to exploit such knowledge to experiment with new solutions that, for backward compatibility reasons, would be hard (if not impossible) to test on a widely used system like Coq.
Matita is based on the Curry-Howard isomorphism, adopting the Calculus of Inductive Constructions (CIC) as its logical foundation. Proof objects are thus, to some extent, compatible with the ones produced with the Coq ITP, which is itself able to import and process the ones generated using Matita. Although the systems have a lot in common, they share no code at all, and even most of the algorithmic solutions are different. The thesis is composed of two parts in which we respectively describe our experience as a user and as a developer of interactive provers. In particular, the first part is based on two different formalisation experiences:
• our internship in the Mathematical Components team (INRIA), which is formalising the finite group theory required to attack the Feit-Thompson Theorem. To tackle this result, which gives an effective classification of finite groups of odd order, the team adopts the SSReflect Coq extension, developed by Georges Gonthier for the proof of the four colour theorem;
• our collaboration with the D.A.M.A. project, whose goal is the formalisation of abstract measure theory in Matita, leading to a constructive proof of Lebesgue's Dominated Convergence Theorem.
The most notable issues we faced, analysed in this part of the thesis, are the following: the difficulties arising when using “black box” automation in large formalisations; the impossibility for a user (especially a newcomer) to master the context of a library of already formalised results; the uncomfortable big-step execution of proof commands historically adopted in ITPs; the difficult encoding of mathematical structures with a notion of inheritance in a type theory without subtyping like CIC.
In the second part of the manuscript many of these issues are analysed through the lens of an ITP developer, describing the solutions we adopted in the implementation of Matita to solve these problems: integrated searching facilities to assist the user in handling large libraries of formalised results; a small-step execution semantics for proof commands; a flexible implementation of coercive subtyping allowing multiple inheritance with shared substructures; and automatic tactics, integrated with the searching facilities, that generate proof commands (and not only proof objects, which are usually kept hidden from the user), one of which is specifically designed to be user driven.
Abstract:
This paper studies relational goods as immaterial assets creating real effects in society. The work starts from this question: what kind of effects do relational goods produce? After a careful examination of the literature, we hypothesize that relational goods are second-order social relations. In this hypothesis, they emerge from two distinct social relations: interpersonal and reflexive relations. We describe empirical evidence of these emergent assets in social life and we test the effects they produce with a model. In the work we focus on four targets. First of all, we describe the emergence of relational goods through a mathematical model. Then we identify social settings where relational goods show evident effects and we outline our scientific hypothesis. The next step consists in the formulation of empirical tests. Finally, we present the results. Our aim is to distil the constitutive structure of relational goods into a testable model, consistent with the empirical evidence shown in the research. In the study we use multivariate analysis techniques to look at relational goods in a new way, combining qualitative and quantitative strategies. Relational goods are analysed both as dependent and as independent variables in order to consider causal factors acting in a black-box model. Moreover, we analyse the effects of relational goods inside social spheres, especially in the third sector and in the capitalistic economy. Finally, we arrive at effective indexes of relational goods in order to compare them with some performance indexes.
Abstract:
This thesis deals with an investigation of Decomposition and Reformulation to solve Integer Linear Programming Problems. This method is often a very successful approach computationally, producing high-quality solutions for well-structured combinatorial optimization problems such as vehicle routing, cutting stock, p-median and generalized assignment. However, until now the method has always been tailored to the specific problem under investigation. The principal innovation of this thesis is to develop a new framework able to apply this concept to a generic MIP problem. The new approach is thus capable of automatically decomposing and reformulating the input problem, can be applied as a black-box solution algorithm, and works as a complement and an alternative to standard solution techniques. The idea of decomposing and reformulating (usually called Dantzig-Wolfe decomposition, DWD, in the literature) is, given a MIP, to convexify one or more subsets of constraints (the slaves) and to work on the resulting partially convexified polyhedron(s). For a given MIP, several decompositions can be defined depending on which sets of constraints we want to convexify. In this thesis we mainly reformulate MIPs using two sets of variables: the original variables and the extended variables (representing the exponentially many extreme points). The master constraints consist of the original constraints not included in any slave, plus the convexity constraint(s) and the linking constraints (ensuring that each original variable can be viewed as a linear combination of extreme points of the slaves). The solution procedure consists of iteratively solving the reformulated MIP (the master) and checking (pricing) whether a variable with negative reduced cost exists; if so, it is added to the master, which is solved again (column generation), otherwise the procedure stops. The advantage of using DWD is that the reformulated relaxation gives bounds stronger than the original LP relaxation; in addition, it can be incorporated in a branch-and-bound scheme (branch-and-price) in order to solve the problem to optimality. If the computational time for the pricing problem is reasonable, this leads in practice to a substantial speed-up in solution time, especially when the convex hull of the slaves is easy to compute, usually because of its special structure.
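As a schematic illustration of the reformulation described above (assuming, for simplicity, a single slave block whose polyhedron is bounded, so that it is generated by its extreme points v^q alone), the master keeps the remaining original constraints together with the linking and convexity constraints:

```latex
\begin{align*}
\min_{x,\;\lambda}\quad & c^{\top} x \\
\text{s.t.}\quad & A x \ge b && \text{(original constraints not assigned to the slave)}\\
& x = \sum_{q} \lambda_q v^q && \text{(linking: $x$ as a combination of the slave's extreme points)}\\
& \sum_{q} \lambda_q = 1,\quad \lambda \ge 0 && \text{(convexity)}
\end{align*}
```

Pricing then searches the slave polyhedron for an extreme point whose associated column has negative reduced cost with respect to the current dual values; if none exists, the current master solution is optimal for the relaxation.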
Abstract:
In the framework of micro-CHP (Combined Heat and Power) energy systems and the Distributed Generation (DG) concept, an Integrated Energy System (IES) able to meet the energy and thermal requirements of specific users was conceived and built, using different types of fuel to feed several micro-CHP energy sources, with the integration of electric generators based on renewable energy sources (RES), electrical and thermal storage systems and a control system. A 5 kWel Polymer Electrolyte Membrane Fuel Cell (PEMFC) has been studied. Using experimental data obtained from several measurement campaigns, the electrical and CHP performance of the PEMFC system has been determined. The effect of the water management of the anodic exhaust at variable FC loads has been analysed, and the programming logic of the purge process was optimized, also leading to the determination of the optimal flooding times as the AC power delivered by the FC varies. Furthermore, the degradation mechanisms of the PEMFC system, in particular those due to flooding of the anodic side, have been assessed using an algorithm that treats the FC as a black box and is able to determine the amount of unreacted H2 and, therefore, the causes that produce it. Using experimental data covering a two-year time span, the ageing of the FC system has been assessed and analyzed.
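The abstract does not detail the black-box algorithm. As a minimal sketch of the underlying mass balance, assuming hydrogen consumption follows Faraday's law, the unreacted H2 flow can be estimated as the difference between the measured anode feed and the electrochemically consumed flow (all names and numbers below are illustrative, not taken from the thesis):

```python
# Minimal sketch: estimate unreacted H2 for a PEM fuel cell treated as a black box.
F = 96485.0  # Faraday constant [C/mol]

def unreacted_h2_mol_per_s(stack_current_a, n_cells, h2_supply_mol_per_s):
    """Return the H2 flow [mol/s] that enters the stack but is not oxidized.

    stack_current_a      -- measured stack current [A]
    n_cells              -- number of cells in series
    h2_supply_mol_per_s  -- measured H2 feed to the anode [mol/s]
    """
    # Faraday's law: each H2 molecule delivers 2 electrons in each cell.
    h2_consumed = stack_current_a * n_cells / (2.0 * F)
    return max(h2_supply_mol_per_s - h2_consumed, 0.0)

# Hypothetical operating point: 80 A drawn from a 70-cell stack fed with 0.05 mol/s of H2.
print(unreacted_h2_mol_per_s(80.0, 70, 0.05))
```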
Abstract:
The purpose of this thesis work is the characterization of an optical sensor for hematocrit reading and the development of the device's calibration algorithm. In other words, using data obtained from a suitably planned calibration session, the developed algorithm returns the data-interpolation curve that characterizes the transducer. The main steps of the work are summarized in the following points: 1) Planning of the calibration session needed for data collection and subsequent construction of a black-box model. Output: the reading from the optical sensor (expressed in mV). Input: the hematocrit value expressed in percentage points (this quantity represents the true blood-volume value and was obtained with a blood centrifugation device). 2) Development of the algorithm. The algorithm, developed and used offline, returns the regression curve of the data. At a high level, the code can be divided into two main parts: 1- acquisition of the data coming from the sensor and of the operating state of the two-phase pump; 2- normalization of the acquired data with respect to the sensor's reference value and implementation of the regression algorithm. The data-normalization step is a fundamental statistical tool for comparing quantities that are not mutually uniform. Existing studies, moreover, show a morphological change of the red blood cell in response to mechanical stress. A further aspect addressed in this work concerns the blood flow velocity determined by the pump and how this quantity can influence the hematocrit reading.
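As a minimal offline sketch of the two algorithm steps described above (normalization against the sensor's reference value, then regression), with all variable names, data values and the polynomial order being assumptions rather than the thesis's actual choices:

```python
# Illustrative sketch: offline calibration of an optical hematocrit sensor.
import numpy as np

def calibrate(sensor_mv, hct_percent, reference_mv, degree=2):
    """Return polynomial coefficients mapping normalized reading -> hematocrit [%].

    sensor_mv    -- raw optical sensor readings [mV] from the calibration session
    hct_percent  -- reference hematocrit values [%] from blood centrifugation
    reference_mv -- sensor reference value used for normalization (assumed)
    degree       -- order of the regression polynomial (assumed)
    """
    x = np.asarray(sensor_mv, dtype=float) / reference_mv   # normalization step
    y = np.asarray(hct_percent, dtype=float)
    return np.polyfit(x, y, degree)                         # regression step

# Hypothetical calibration data and a hematocrit estimate for a new reading.
coeffs = calibrate([820, 905, 990, 1080], [25, 30, 35, 40], reference_mv=1000.0)
print(coeffs, np.polyval(coeffs, 950 / 1000.0))
```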
Abstract:
This thesis studied a method to model and virtualize, through Matlab algorithms, the harmonic distortions of a nonlinear audio device, i.e. a "tool" that, when driven by an audio signal, modifies it by introducing components that were not previously present. The device chosen for this study is the BOSS SD-1 Super OverDrive pedal for electric guitar, and the "mathematical tool" that provides its model is the Volterra series expansion. The Volterra series is widely used in the study of nonlinear physical systems whenever one is interested in modelling a system that presents itself as a "black box". The Nonlinear Convolution method designed by Angelo Farina has successfully applied this expansion to musical acoustics as well: using an easily realizable measurement technique and the model provided by the Diagonal Volterra series, the method characterizes a nonlinear audio device through the nonlinear impulse responses the device produces in response to a suitable test signal (called Exponential Sine Sweep). The device's impulse responses are then used to derive the Volterra kernels of the series. The use of this method allowed the University of Bologna to obtain a patent for software that virtualizes the nonlinearities of an audio system in post-processing. This thesis takes up the work that led to the patent and introduces two innovations: the signal used to test the device was changed (a Synchronized Sine Sweep was used instead of the Exponential Sine Sweep), and a first attempt was made to steer the virtualization towards real-time processing, implementing a (post-processing) procedure for creating the kernels as a function of the volume given as input to the nonlinear device.
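For reference, a short sketch of the Exponential Sine Sweep named in the abstract, following Farina's standard formulation (the Synchronized Sine Sweep variant adopted in the thesis constrains the phase differently and is not reproduced here); the parameters are illustrative defaults:

```python
# Sketch: generate a Farina-style Exponential Sine Sweep test signal.
import numpy as np

def exponential_sine_sweep(f1=20.0, f2=20000.0, duration=10.0, fs=48000):
    """Sweep from f1 to f2 [Hz] over `duration` seconds at sample rate fs."""
    t = np.arange(int(duration * fs)) / fs
    r = np.log(f2 / f1)                      # log frequency ratio
    # Instantaneous phase of the exponential sweep (Farina's formulation).
    phase = 2.0 * np.pi * f1 * duration / r * (np.exp(t * r / duration) - 1.0)
    return np.sin(phase)

sweep = exponential_sine_sweep()
# Deconvolving the device's response to `sweep` with the time-reversed,
# amplitude-compensated sweep separates the harmonic (nonlinear) impulse
# responses from which the diagonal Volterra kernels are derived.
```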
Abstract:
The use of linear programming in various areas has increased with the significant improvement of specialized solvers. Linear programs are used as such to model practical problems, or as subroutines in algorithms such as formal proofs or branch-and-cut frameworks. In many situations a certified answer is needed, for example the guarantee that the linear program is feasible or infeasible, or a provably safe bound on its objective value. Most of the available solvers work with floating-point arithmetic and are thus subject to its shortcomings such as rounding errors or underflow, therefore they can deliver incorrect answers. While adequate for some applications, this is unacceptable for critical applications like flight control or nuclear plant management due to the potential catastrophic consequences. We propose a method that gives a certified answer whether a linear program is feasible or infeasible, or returns 'unknown'. The advantage of our method is that it is reasonably fast and rarely answers 'unknown'. It works by computing a safe solution that is in some way the best possible in the relative interior of the feasible set. To certify the relative interior, we employ exact arithmetic, whose use is nevertheless limited in general to critical places, allowing us to remain computationally efficient. Moreover, when certain conditions are fulfilled, our method is able to deliver a provable bound on the objective value of the linear program. We test our algorithm on typical benchmark sets and obtain higher rates of success compared to previous approaches for this problem, while keeping the running times acceptably small. The computed objective value bounds are in most of the cases very close to the known exact objective values. We prove the usability of the method we developed by additionally employing a variant of it in a different scenario, namely to improve the results of a Satisfiability Modulo Theories solver. Our method is used as a black box in the nodes of a branch-and-bound tree to implement conflict learning based on the certificate of infeasibility for linear programs consisting of subsets of linear constraints. The generated conflict clauses are in general small and give good prospects for reducing the search space. Compared to other methods we obtain significant improvements in the running time, especially on the large instances.
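The abstract does not spell out the certification step. As a minimal sketch of one ingredient common to such methods, a floating-point candidate solution can be re-checked against the rational constraint data with exact arithmetic, so that a "feasible" verdict cannot be an artifact of rounding (the data below are hypothetical and this is not the algorithm of the thesis):

```python
# Sketch: certify A x <= b exactly for a floating-point candidate x.
from fractions import Fraction

def certify_feasible(A, b, x):
    """Exactly check A x <= b for rational data A, b and a candidate point x."""
    xq = [Fraction(v).limit_denominator(10**12) for v in x]   # rationalize candidate
    for row, rhs in zip(A, b):
        lhs = sum(Fraction(a) * xv for a, xv in zip(row, xq))
        if lhs > Fraction(rhs):
            return False      # a constraint is violated: no feasibility certificate
    return True               # every inequality holds in exact arithmetic

# Hypothetical system: x1 + x2 <= 4, x1 >= 0, x2 >= 0, candidate (1.5, 2.0).
A = [[1, 1], [-1, 0], [0, -1]]
b = [4, 0, 0]
print(certify_feasible(A, b, [1.5, 2.0]))   # True
```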
Abstract:
Globalization has increased the pressure on organizations and companies to operate in the most efficient and economic way. This tendency pushes companies to concentrate more and more on their core businesses and to outsource less profitable departments and services in order to reduce costs. In contrast to earlier times, companies are highly specialized and have a low real net output ratio. To be able to provide consumers with the right products, these companies have to collaborate with other suppliers and form large supply chains. A drawback of large supply chains is high stocks and the associated stockholding costs. This fact has led to the rapid spread of Just-in-Time logistic concepts aimed at minimizing stock while maintaining high availability of products. These competing goals, minimizing stock while maintaining high product availability, call for high availability of the production systems, so that an incoming order can be processed immediately. Besides design aspects and the quality of the production system, maintenance has a strong impact on production system availability. In the last decades, there have been many attempts to create maintenance models for availability optimization. Most of them concentrated on the availability aspect only, without incorporating further aspects such as logistics and the profitability of the overall system. However, a production system operator's main intention is to optimize the profitability of the production system, not its availability. Thus, classic models, limited to representing and optimizing maintenance strategies in the light of availability, fall short. A novel approach, incorporating all financially relevant processes of and around a production system, is needed. The proposed model is subdivided into three parts: a maintenance module, a production module and a connection module. This subdivision provides easy maintainability and simple extensibility. Within these modules, all aspects of the production process are modeled. The main part of the work lies in the extended maintenance and failure module, which represents different maintenance strategies but also incorporates the effects of over-maintaining and failed maintenance (maintenance-induced failures). Order release and seizing of the production system are modeled in the production part. Due to limitations in computational power, it was not possible to run the simulation and the optimization with the fully developed production model. Thus, the production model was reduced to a black box with a lower degree of detail.
Abstract:
Automatic design has become a common approach to evolving complex networks, such as artificial neural networks (ANNs) and random Boolean networks (RBNs), and many evolutionary setups have been discussed to increase the efficiency of this process. However, networks evolved in this way have a few limitations that should not be overlooked. One of these limitations is the black-box problem, which refers to the impossibility of analyzing the internal behaviour of complex networks in an efficient and meaningful way. The aim of this study is to develop a methodology that makes it possible to extract finite-state automaton (FSA) descriptions of robot behaviours from the dynamics of automatically designed complex controller networks. These FSAs, unlike the complex networks from which they are extracted, are both readable and editable, thus making the resulting designs much more valuable.
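The extraction methodology itself is not described in the abstract. As a loose, generic sketch of what an FSA description of a controller might look like, one can record which input symbol moved the observed behaviour from one labelled state to another (all labels and the trace below are hypothetical):

```python
# Loose sketch: assemble an FSA description from an observed behaviour trace.
from collections import defaultdict

def extract_fsa(trace):
    """trace: iterable of (behaviour, input_symbol, next_behaviour) triples
    observed while running the black-box controller."""
    fsa = defaultdict(dict)                  # state -> {input_symbol: next_state}
    for state, symbol, nxt in trace:
        fsa[state][symbol] = nxt             # deterministic transition assumed
    return dict(fsa)

# Hypothetical trace of an obstacle-avoiding robot controller.
trace = [("explore", "obstacle", "avoid"),
         ("avoid", "clear", "explore"),
         ("explore", "light", "approach")]
print(extract_fsa(trace))
```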
Abstract:
Artificial neural networks are based on computational units that resemble basic information processing properties of biological neurons in an abstract and simplified manner. Generally, these formal neurons model an input-output behaviour of the kind that is also often used to characterize biological neurons. The neuron is treated as a black box; the spatial extension and temporal dynamics present in biological neurons are most often neglected. Even though artificial neurons are simplified, they can show a variety of input-output relations, depending on the transfer functions they apply. This unit on transfer functions provides an overview of different transfer functions and offers a simulation that visualizes the input-output behaviour of an artificial neuron depending on the specific combination of transfer functions.
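As a small sketch of the formal neuron described above (the unit's own simulation is interactive; the transfer functions chosen here are common textbook examples, not necessarily the ones in the unit):

```python
# Sketch: a formal neuron as a black box -- weighted input sum passed through a
# selectable transfer function.
import math

TRANSFER = {
    "identity": lambda a: a,
    "threshold": lambda a: 1.0 if a >= 0.0 else 0.0,
    "logistic": lambda a: 1.0 / (1.0 + math.exp(-a)),
    "tanh": math.tanh,
}

def neuron(inputs, weights, bias=0.0, transfer="logistic"):
    """Return the neuron's output for the given inputs and transfer function."""
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return TRANSFER[transfer](activation)

# Same input, different input-output relations depending on the transfer function.
for name in TRANSFER:
    print(name, neuron([0.5, -1.0], [2.0, 0.5], bias=0.1, transfer=name))
```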
Abstract:
The central aim of our project is to explore the handling of e-mail requests from customers by tourist organisations and to explain the perceived behaviour. For this purpose, we designed a qualitative empirical study which basically consists of two stages. The first stage consists of a black-box test, where we employ the setting of a qualitative experiment to measure the organisation's response to an e-mail request. The second stage comprises a white-box test, where we look into the tourist organisations and analyse the relevant information processes. This study should give us some insight into the internal processing of e-mail requests and thus should help to explain the reactions that we registered.
Abstract:
Blame avoidance behaviour (BAB) has become an increasingly popular topic in political science. However, the preconditions of BAB, as well as its presence and consequences in various areas and in different political systems, largely remain a black box. In order to generate a better understanding of BAB and its importance for the workings of democratic political systems, the scattered literature on BAB needs to be assessed and structured. This article offers a comprehensive review of the literature on blame avoidance. It departs from Weaver’s concept of blame avoidance and subsequently differentiates between work on BAB in comparative welfare state research and work on BAB in public policy and administration. It is argued that a bifurcation exists between these two strands of literature, since the two perspectives rarely draw on each other to create a more general understanding of BAB. The strengths of existing approaches must be combined to assess the phenomenon of blame avoidance in a more comprehensive way.