39 results for biomechanical trade-off
Abstract:
Images of a scene, static or dynamic, are generally acquired at different epochs and from different viewpoints. They potentially gather information about the whole scene and its relative motion with respect to the acquisition device. Data from different visual sources, in the spatial or temporal domain, can be fused to provide a single consistent representation of the whole scene, even recovering the third dimension and thus permitting a more complete understanding of the scene content. Moreover, the pose of the acquisition device can be obtained by estimating the relative motion parameters linking different views, providing localization information for automatic guidance purposes. Image registration relies on pattern recognition techniques to match corresponding parts of different views of the acquired scene. Depending on hypotheses or prior information about the sensor model, the motion model and/or the scene model, this information can be used to estimate global or local geometrical mapping functions between different images or parts of them. These mapping functions contain the relative motion parameters between the scene and the sensor(s) and can be used to integrate information coming from the different sources into a wider, or even augmented, representation of the scene. Owing to these scene reconstruction and pose estimation capabilities, image registration techniques from multiple views are nowadays attracting increasing interest from the scientific and industrial communities. Depending on the application domain, accuracy, robustness, and computational load of the algorithms are important issues to be addressed, and a trade-off among them generally has to be reached. Moreover, on-line performance is desirable in order to guarantee the direct interaction of the vision device with human actors or control systems.
This thesis follows a general research approach to cope with these issues, almost independently of the scene content, under the constraint of rigid motions. This approach is motivated by portability to very different domains, a highly desirable property. A general image registration approach suitable for on-line applications has been devised and assessed through two challenging case studies in different application domains. The first case study concerns scene reconstruction through on-line mosaicing of optical microscopy cell images acquired with non-automated equipment, while the microscope holder is moved manually. By registering the images, the field of view of the microscope can be widened, preserving the resolution while reconstructing the whole cell culture and allowing the microscopist to explore it interactively. In the second case study, the registration of terrestrial satellite images acquired by a camera integral with the satellite is used to estimate the satellite's three-dimensional orientation from visual data, for automatic guidance purposes. Critical aspects of these applications are emphasized and the choices adopted are motivated accordingly. Results are discussed in view of promising future developments.
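As a toy illustration of the rigid-motion estimation underlying such registration (a generic sketch, not the thesis's actual pipeline), the following recovers a 2-D rotation and translation from matched point pairs via the closed-form least-squares (Procrustes) solution; the function name and data are hypothetical:

```python
import math

def estimate_rigid_2d(src, dst):
    """Estimate the rotation angle and translation mapping src points onto dst
    in the least-squares sense (2-D Procrustes / Kabsch solution)."""
    n = len(src)
    cx_s = sum(p[0] for p in src) / n
    cy_s = sum(p[1] for p in src) / n
    cx_d = sum(p[0] for p in dst) / n
    cy_d = sum(p[1] for p in dst) / n
    # Accumulate cross-covariance terms of the centred point sets.
    s_cos = s_sin = 0.0
    for (xs, ys), (xd, yd) in zip(src, dst):
        xs, ys = xs - cx_s, ys - cy_s
        xd, yd = xd - cx_d, yd - cy_d
        s_cos += xs * xd + ys * yd
        s_sin += xs * yd - ys * xd
    theta = math.atan2(s_sin, s_cos)
    c, s = math.cos(theta), math.sin(theta)
    # Translation aligns the rotated source centroid with the target centroid.
    tx = cx_d - (c * cx_s - s * cy_s)
    ty = cy_d - (s * cx_s + c * cy_s)
    return theta, tx, ty
```

With noiseless correspondences this recovers the transform exactly; in a real mosaicing pipeline the pairs would come from feature matching and be filtered for outliers.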
Abstract:
Hybrid technologies, thanks to the convergence of integrated microelectronic devices and new classes of microfluidic structures, could open new perspectives on the way nanoscale events are discovered, monitored and controlled. The key point of this thesis is to evaluate the impact of such an approach on applications of ion-channel High Throughput Screening (HTS) platforms. This approach offers promising opportunities for the development of new classes of sensitive, reliable and cheap sensors. There are numerous advantages in embedding microelectronic readout structures tightly coupled to the sensing elements. On the one hand, the signal-to-noise ratio is increased as a result of scaling. On the other, the readout miniaturization allows sensors to be organized into arrays, increasing the capability of the platform in terms of the number of acquired data, as required in the HTS approach, to improve sensing accuracy and reliability. However, accurate interface design is required to establish efficient communication between ionic-based and electronic-based signals. The work presented in this thesis shows a first example of a complete parallel readout system with single-ion-channel resolution, using a compact and scalable hybrid architecture suitable for interfacing to large arrays of sensors, ensuring simultaneous signal recording and smart control of the signal-to-noise ratio and bandwidth trade-off. More specifically, an array of microfluidic polymer structures, hosting artificial lipid bilayer blocks in which single ion channel pores are embedded, is coupled with an array of ultra-low-noise current amplifiers for signal amplification and data processing. As a working demonstration, the platform was used to acquire the ultra-small currents arising from single non-covalent molecular binding events between alpha-hemolysin pores and beta-cyclodextrin molecules in artificial lipid membranes.
Abstract:
Photovoltaic (PV) conversion is the direct production of electrical energy from the sun without the emission of polluting substances. In order to be competitive with other energy sources, the cost of PV technology must be reduced while ensuring adequate conversion efficiencies. These goals have motivated researchers to investigate advanced designs of crystalline silicon (c-Si) solar cells. Since lowering the cost of PV devices involves reducing the volume of semiconductor, an effective light-trapping strategy aimed at increasing photon absorption is required. Modeling solar cells by electro-optical numerical simulation helps predict the performance of future generations of devices exhibiting advanced light-trapping schemes and provides new, more specific guidelines to industry. The approaches to optical simulation commonly adopted for c-Si solar cells may lead to inaccurate results for thin-film and nano-structured solar cells. On the other hand, rigorous solvers of Maxwell's equations are highly CPU- and memory-intensive. Recently, the RCWA method has gained relevance in the optical simulation of solar cells, providing a good trade-off between accuracy and computational resource requirements. This thesis is a contribution to the numerical simulation of advanced silicon solar cells by means of a state-of-the-art numerical 2-D/3-D device simulator, which has been successfully applied to the simulation of selective-emitter and rear point contact solar cells, for which a multi-dimensional transport model is required in order to properly account for all competing physical mechanisms. In the second part of the thesis, the optical problem is discussed. Two novel and computationally efficient RCWA implementations for 2-D simulation domains, as well as a third RCWA implementation for 3-D structures based on an eigenvalue calculation approach, are presented.
The proposed simulators have been validated in terms of accuracy, numerical convergence, computation time and correctness of results.
Abstract:
Spatial invariance of the parameters of a rainfall-runoff model can prove a practical and valid solution when the water resource availability of an area has to be estimated. Hydrological simulation is indeed a widely adopted tool, but it presents some critical issues, mainly related to the need to calibrate the model parameters. If spatially distributed models are chosen, useful because they can account for the spatial variability of the phenomena contributing to runoff formation, the problem usually lies in the large number of parameters involved. By assuming that some of these are spatially homogeneous, i.e. that they take the same value over the different catchments, the overall number of parameters requiring calibration can be reduced. This assumption is verified on a statistical basis by estimating the parametric uncertainty through an MCMC algorithm. The parameter distributions turn out to be compatible, to varying degrees, across the catchments considered. When the goal is then to estimate the water resource availability of ungauged catchments, the parameter-invariance hypothesis becomes even more important; this problem is usually tackled through lengthy parameter regionalization analyses. Here, instead, a cross-calibration procedure is proposed, carried out by adopting the information coming from the gauged catchments most similar to the site of interest. The aim is to reach a fair compromise between the disadvantage of assuming the model parameters constant across the gauged catchments and the benefit of introducing, step by step, new and important information from the gauged catchments involved in the analysis.
The results demonstrate the usefulness of the proposed methodology: in validation on the catchment treated as ungauged, good agreement between the simulated and observed streamflow series can be achieved.
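The MCMC estimation of parameter uncertainty mentioned above can be illustrated with a minimal random-walk Metropolis sampler (a generic sketch with an illustrative Gaussian target, not the algorithm or model actually used in the thesis):

```python
import math
import random

def metropolis(log_post, x0, n_iter, step, seed=0):
    """Random-walk Metropolis: draws samples from a distribution known only
    up to a normalizing constant through its log-density log_post."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    chain = []
    for _ in range(n_iter):
        prop = x + rng.gauss(0.0, step)      # symmetric proposal
        lp_prop = log_post(prop)
        # Accept with probability min(1, posterior ratio).
        if math.log(rng.random()) < lp_prop - lp:
            x, lp = prop, lp_prop
        chain.append(x)
    return chain
```

After discarding a burn-in, the empirical spread of the chain quantifies the parametric uncertainty; comparing such posteriors across catchments is what makes the invariance assumption statistically testable.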
Abstract:
This thesis presents several data processing and compression techniques capable of addressing the strict requirements of wireless sensor networks (WSNs). After a general overview of sensor networks, the energy problem is introduced, dividing the different energy reduction approaches according to the subsystem they try to optimize. To manage the complexity brought by these techniques, a quick overview of the most common middleware for WSNs is given, describing in detail SPINE2, a framework for data processing in the node environment. The focus then shifts to in-network aggregation techniques, used to reduce the data sent by the network nodes and prolong the network lifetime as long as possible. Among these techniques, the most promising approach is Compressive Sensing (CS). To investigate this technique, a practical implementation of the algorithm is compared against a simpler aggregation scheme, deriving a mixed algorithm able to successfully reduce the power consumption. The analysis then moves from compression implemented on single nodes to CS for signal ensembles, exploiting the correlations among sensors and nodes to improve compression and reconstruction quality. The two main techniques for signal ensembles, Distributed CS (DCS) and Kronecker CS (KCS), are introduced and compared on a common set of data gathered from real deployments. The best trade-off between reconstruction quality and power consumption is then investigated. The use of CS is also addressed when the signal of interest is sampled at a sub-Nyquist rate, evaluating the reconstruction performance. Finally, group sparsity CS (GS-CS) is compared to another well-known technique for the reconstruction of signals from a highly sub-sampled version. These two frameworks are again compared on a real data set, and an insightful analysis of the trade-off between reconstruction quality and lifetime is given.
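Compressive sensing recovers a sparse signal x from a few linear measurements y = Phi x. As a minimal illustration (not the thesis's implementation), the sketch below reconstructs a k-sparse vector with greedy Orthogonal Matching Pursuit, one of the standard CS recovery algorithms, using a small Gaussian-elimination helper for the least-squares step:

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for k in range(n):
        piv = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[piv] = M[piv], M[k]
        for r in range(k + 1, n):
            f = M[r][k] / M[k][k]
            for c in range(k, n + 1):
                M[r][c] -= f * M[k][c]
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        x[k] = (M[k][n] - sum(M[k][c] * x[c] for c in range(k + 1, n))) / M[k][k]
    return x

def omp(Phi, y, k):
    """Orthogonal Matching Pursuit: greedily recover a k-sparse x from y = Phi x."""
    m, n = len(Phi), len(Phi[0])
    residual = y[:]
    support, coef = [], []
    for _ in range(k):
        # Pick the column most correlated with the current residual.
        scores = [abs(sum(Phi[i][j] * residual[i] for i in range(m))) for j in range(n)]
        j = max((jj for jj in range(n) if jj not in support), key=lambda jj: scores[jj])
        support.append(j)
        # Least squares on the current support: (B^T B) c = B^T y.
        B = [[Phi[i][jj] for jj in support] for i in range(m)]
        s = len(support)
        G = [[sum(B[i][a] * B[i][b] for i in range(m)) for b in range(s)] for a in range(s)]
        rhs = [sum(B[i][a] * y[i] for i in range(m)) for a in range(s)]
        coef = solve(G, rhs)
        residual = [y[i] - sum(B[i][a] * coef[a] for a in range(s)) for i in range(m)]
    x = [0.0] * n
    for a, j in enumerate(support):
        x[j] = coef[a]
    return x
```

With enough random measurements relative to the sparsity, OMP recovers the signal exactly; the measurement count versus reconstruction quality is precisely the kind of trade-off the thesis explores for sensor lifetime.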
Abstract:
In Sub-Saharan Africa, non-democratic events such as civil wars and coups d'etat destroy economic development. This study investigates both domestic and spatial effects on the likelihood of civil wars and coups d'etat. For civil wars, increasing income growth is a common research conclusion on how to stop wars. This study adds a concern for ethnic fractionalization. IV-2SLS is applied to overcome the causality problem. The findings document that income growth significantly reduces both the number and the degree of violence in highly ethnically fractionalized countries; otherwise the two are in a trade-off: income growth reduces the number of wars but increases their level of violence in countries with a few large ethnic groups. Policies promoting growth should therefore consider ethnic composition. This study also investigates the clustering and contagion of civil wars using spatial panel data models. Onset, incidence and end of civil conflicts spread across the network of neighboring countries, while peace, the end of conflicts, diffuses only to the nearest neighbor. There is evidence of indirect links from neighboring income growth, in the absence of excessive inequality, to a reduced likelihood of civil wars. For coups d'etat, this study revisits their diffusion, considering both all types of coups and only successful ones. The results reveal both domestic and spatial determinants in different periods. Domestic income growth plays the major role in reducing the likelihood of a coup before the end of the Cold War, while spatial effects become the relevant negative determinant afterward. Results on the probability of a coup succeeding are similar. After the end of the Cold War, international organisations seriously promoted democracy and exerted pressure against coups d'etat, and this seems to have been effective. In sum, this study highlights the role of domestic ethnic fractionalization and of spillovers from neighboring countries in the likelihood of non-democratic events in a country. Policy implementation should take these factors into account.
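The IV idea behind the 2SLS strategy above can be illustrated in its simplest single-instrument form, where the estimator reduces to beta = cov(z, y) / cov(z, x); the sketch and the simulated confounding setup in the usage below are purely illustrative, not the study's data:

```python
def iv_estimate(z, x, y):
    """Single-instrument IV (Wald) estimator: beta = cov(z, y) / cov(z, x).
    Consistent when z is correlated with x but uncorrelated with the error."""
    n = len(z)
    mz, mx, my = sum(z) / n, sum(x) / n, sum(y) / n
    czy = sum((zi - mz) * (yi - my) for zi, yi in zip(z, y))
    czx = sum((zi - mz) * (xi - mx) for zi, xi in zip(z, x))
    return czy / czx
```

On simulated data where an unobserved confounder drives both x and y, ordinary least squares is biased while the IV estimate recovers the true coefficient, which is exactly why 2SLS is used to address the growth-conflict causality problem.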
Abstract:
The aim of this thesis is to propose a Bayesian estimation, through Markov chain Monte Carlo, of multidimensional item response theory models for graded responses with complex structures and correlated traits. In particular, this work focuses on the multiunidimensional and the additive underlying latent structures, considering that the former is widely used and represents a classical approach in multidimensional item response analysis, while the latter is able to reflect the complexity of real interactions between items and respondents. A simulation study is conducted to evaluate parameter recovery for the proposed models under different conditions (sample size, test and subtest length, number of response categories, and correlation structure). The results show that parameter recovery is particularly sensitive to the sample size, due to the model complexity and the high number of parameters to be estimated. For a sufficiently large sample size, the parameters of the multiunidimensional and additive graded response models are well reproduced. The results are also affected by the trade-off between the number of items constituting the test and the number of item categories. An application of the proposed models to response data collected to investigate Romagna and San Marino residents' perceptions of and attitudes towards the tourism industry is also presented.
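For context, in the graded response model the category probabilities are differences of cumulative logistic curves over ordered thresholds. A minimal unidimensional sketch (illustrative parameter names; not the thesis's multidimensional estimation code):

```python
import math

def grm_probs(theta, a, b):
    """Category probabilities for a graded response model item.
    theta: latent trait; a: discrimination; b: increasing list of K-1 thresholds."""
    def cum(k):  # P(Y >= k)
        if k == 0:
            return 1.0
        if k > len(b):
            return 0.0
        return 1.0 / (1.0 + math.exp(-a * (theta - b[k - 1])))
    K = len(b) + 1
    # P(Y = k) telescopes: cum(k) - cum(k+1).
    return [cum(k) - cum(k + 1) for k in range(K)]
```

The probabilities sum to one by construction, and increasing the number of thresholds (item categories) multiplies the parameters per item, which is one side of the items-versus-categories trade-off the simulation study examines.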
Abstract:
In this thesis, a reference framework was developed for the combined use of two impact assessment methodologies, LCA and RA, for emerging technologies. The originality of the study lies in having proposed, and also applied, the framework to a case study, specifically an innovative refrigeration technology based on nanofluids (NF), developed by partners of the European project Nanohex, who collaborated in the elaboration of the studies, especially regarding the inventory of the required data. The complexity of the study lies both in the difficult integration of two methodologies conceived for different purposes and structured to fulfil those purposes, and in the application sector, which, although rapidly expanding, suffers from severe information gaps concerning production processes and the behaviour of the substances involved. The framework was applied to the production of alumina nanofluid (NF) by two production routes (single-stage and two-stage), in order to assess and compare the impacts on human health and the environment. It should be specified that the LCA was quantitative but did not consider the impacts of nanomaterials in the toxicity categories. As for the RA, a qualitative study was developed, because of the above-mentioned lack of toxicological and exposure parameters, focusing on the category of workers; it was therefore assumed that releases into the environment during the production phase are negligible. For the qualitative RA, a dedicated software tool, Stoffenmanager-Nano, was used, which makes it possible to prioritize the risks associated with inhalation in the workplace. The framework consists of a procedure articulated in four phases: DEFINITION OF THE TECHNOLOGICAL SYSTEM, DATA COLLECTION, RISK ASSESSMENT AND IMPACT QUANTIFICATION, INTERPRETATION.
Abstract:
This dissertation deals with the design and characterization of novel reconfigurable silicon-on-insulator (SOI) devices to filter and route optical signals on-chip. The design is carried out through circuit simulations based on basic circuit elements (Building Blocks, BBs), in order to prove the feasibility of an approach that moves the design of Photonic Integrated Circuits (PICs) toward the system level. CMOS compatibility and large integration scale make SOI one of the most promising materials for realizing PICs. The concepts of the generic foundry and of BB-based circuit simulation for design are emerging as a solution to reduce costs and increase circuit complexity. To validate the BB-based approach, some of the most important BBs are developed first. A novel tunable coupler is also presented and demonstrated to be a valuable alternative to known solutions. Two novel multi-element PICs are then analysed: a narrow-linewidth single-mode resonator and a passband filter with widely tunable bandwidth. Extensive circuit simulations are carried out to determine their performance, taking fabrication tolerances into account. The first PIC is based on two Grating Assisted Couplers in a ring resonator (RR) configuration. It is shown that a trade-off between performance, resonance bandwidth and device footprint has to be made. The device could be employed to realize reconfigurable add-drop de/multiplexers; however, sensitivity to fabrication tolerances and spurious effects is observed. The second PIC is based on an unbalanced Mach-Zehnder interferometer loaded with two RRs. Overall good performance and robustness to fabrication tolerances and nonlinear effects have confirmed its applicability to the realization of flexible optical systems. The simulated and measured device behaviour is shown to be in agreement, thus demonstrating the viability of a BB-based approach to the design of complex PICs.
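The ring resonator (RR) element central to both PICs can be illustrated with the textbook all-pass ring field transmission t = (r - a e^{i phi}) / (1 - r a e^{i phi}), where r is the self-coupling coefficient, a the single-pass amplitude and phi the round-trip phase (sign conventions vary across references; this is a generic sketch, not the dissertation's BB model):

```python
import cmath

def allpass_ring_transmission(phi, r, a):
    """Field transmission of an all-pass ring resonator.
    phi: round-trip phase, r: self-coupling coefficient, a: single-pass amplitude.
    At resonance (phi = 0) with critical coupling (r = a) the output vanishes."""
    e = a * cmath.exp(1j * phi)
    return (r - e) / (1 - r * e)
```

Sweeping phi traces the familiar resonance dips, and the r-versus-a balance is a simple instance of the coupling trade-offs that BB-level circuit simulation lets a designer explore before fabrication.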
Abstract:
Decomposition-based approaches are recalled from both the primal and the dual point of view. The possibility of building partially disaggregated reduced master problems is investigated. This extends the aggregated-versus-disaggregated formulation dichotomy to a gradual choice among alternative levels of aggregation. Partial aggregation is applied to the linear multicommodity minimum-cost flow problem. Allowing only partially aggregated bundles opens a wide range of alternatives with different trade-offs between the number of iterations and the computation required at each of them. This trade-off is explored on several sets of instances, and the results are compared with those obtained by directly solving the natural node-arc formulation. An iterative solution process for the route assignment problem is proposed, based on the well-known Frank-Wolfe algorithm. In order to provide a first feasible solution to the Frank-Wolfe algorithm, a linear multicommodity min-cost flow problem is solved to optimality using the decomposition techniques mentioned above. Solutions of this problem are useful for network orientation and design, especially in relation to public transportation systems such as Personal Rapid Transit. A single-commodity robust network design problem is then addressed. Here, an undirected graph with edge costs is given together with a discrete set of balance matrices representing different supply/demand scenarios. The goal is to determine the minimum-cost installation of capacities on the edges such that the flow exchange is feasible in every scenario. A set of new instances that are computationally hard for the natural flow formulation is solved by means of a new heuristic algorithm. Finally, an efficient decomposition-based heuristic approach for a large-scale stochastic unit commitment problem is presented.
The addressed real-world stochastic problem employs at its core a deterministic unit commitment planning model developed by the California Independent System Operator (ISO).
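The Frank-Wolfe iteration mentioned above is easy to sketch on a toy problem: minimizing a smooth convex function over the probability simplex, where the linear subproblem reduces to picking the coordinate with the smallest gradient (a generic illustration under these toy assumptions, not the thesis's traffic-assignment implementation):

```python
def frank_wolfe_simplex(grad, n, n_iter):
    """Frank-Wolfe over the probability simplex {x >= 0, sum x = 1}.
    The linear minimization oracle returns a simplex vertex, i.e. the
    coordinate where the gradient is smallest."""
    x = [1.0 / n] * n
    for t in range(n_iter):
        g = grad(x)
        j = min(range(n), key=lambda i: g[i])   # best vertex e_j
        gamma = 2.0 / (t + 2)                   # standard diminishing step
        x = [(1.0 - gamma) * xi for xi in x]    # move toward e_j
        x[j] += gamma
    return x
```

In route assignment the same scheme applies with flows in place of x: each iteration's linear subproblem becomes a shortest-path (all-or-nothing) assignment, which is why a good initial feasible flow from the decomposition phase matters.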
Abstract:
1. Microfinance Industry – Context of Analysis. This paper is an introduction to the microfinance industry. It serves as the context of analysis for the empirical settings and as the basis for building the theoretical argument of the thesis. 2. Women in Microfinance Institutions: The Road to Poverty Reduction and Gender Equality? One of the unique aspects of microfinance institutions is their focus on outreach, i.e. their ability to reach the poor. This paper explores whether the presence of women in microfinance institutions is associated with improved outreach. Building on prior research showing that women tend to improve financial performance and social responsibility, we examine an original dataset of 226 microfinance institutions. The empirical results suggest that the presence of a female CEO, female managers and female loan officers is directly related to improved outreach, while the presence of women board members is not. 3. Women in Microfinance Institutions: Is There a Trade-Off Between Outreach and Sustainability? This paper's contribution to the understanding of microfinance is two-fold. First, while it has been shown that female CEOs in MFIs increase financial performance, it is argued that female managers, female loan officers and female board members do the same. Second, having previously shown that a female presence in MFI management improves social performance (outreach), it is argued that having females in MFI management does not lead to a trade-off between outreach and sustainability. These findings are based on an original data set of 226 MFIs. Statistical analysis demonstrates a weak positive relationship between female managers and female loan officers on the one hand and financial performance on the other, but none for female board members. The trade-off between outreach and sustainability can be avoided through the appointment of females to MFI management positions, but the same cannot be concluded for female board members.
Abstract:
In 3D human movement analysis performed using stereophotogrammetric systems and skin markers, bone pose can only be estimated indirectly. During a movement, soft tissue deformations make the markers move with respect to the underlying bone, generating the soft tissue artefact (STA). STA has devastating effects on bone pose estimation, and its compensation remains an open question. The aim of this PhD thesis was to contribute to the solution of this crucial issue. Modelling STA using measurable trial-specific variables is a fundamental prerequisite for its removal from marker trajectories. Two STA model architectures are proposed. Initially, a thigh marker-level artefact model is presented, in which STA is modelled as a linear combination of the joint angles involved in the movement. This model was calibrated using ex-vivo and in-vivo invasive STA measures. The considerable number of model parameters led to the definition of STA approximations. Three definitions were proposed to represent STA as a series of modes: individual marker displacements, marker-cluster geometrical transformations (MCGT), and skin envelope shape variations. Modes were selected using two criteria: one based on modal energy, and one based on a set of modes chosen a priori. The MCGT makes it possible to select either rigid or non-rigid STA components, and it was demonstrated empirically that only the rigid component affects joint kinematics, regardless of the non-rigid amplitude. Therefore, a cluster-level model of the rigid STA component of thigh and shank was then defined. An acceptable trade-off between STA compensation effectiveness and number of parameters can be obtained, improving joint kinematics accuracy.
The results obtained lead to two main potential applications: the proposed models can generate realistic STAs for simulation purposes, allowing different skeletal kinematics estimators to be compared; and, more importantly, by focusing only on the rigid STA component, the model attains a satisfactory STA reconstruction with fewer parameters, facilitating its incorporation into a pose estimator.
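The linear-combination artefact model can be illustrated in its simplest one-angle form, calibrated by ordinary least squares (a hypothetical sketch: the thesis's models involve multiple joint angles and modal decompositions, not this single-regressor fit):

```python
def fit_linear_artefact(angles, displ):
    """Least-squares calibration of a marker-level artefact model
    d = c0 + c1 * angle, a one-angle instance of modelling STA as a
    linear combination of joint angles."""
    n = len(angles)
    ma = sum(angles) / n
    md = sum(displ) / n
    # Closed-form simple linear regression.
    num = sum((a - ma) * (d - md) for a, d in zip(angles, displ))
    den = sum((a - ma) ** 2 for a in angles)
    c1 = num / den
    c0 = md - c1 * ma
    return c0, c1
```

Once calibrated on invasive reference measures, such a model predicts the artefact from the measured joint angles of a new trial, so it can be subtracted from the marker trajectories before pose estimation.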
Abstract:
In recent years, Additive Manufacturing (AM) has drawn the attention of both academic research and industry, as it may deeply change and improve several industrial sectors. From the material point of view, AM results in a peculiar microstructure that strictly depends on the conditions of the additive process and directly affects the mechanical properties. The present PhD research project aimed at investigating the process-microstructure-properties relationship of additively manufactured metal components. Two technologies belonging to the AM family were considered: Laser-based Powder Bed Fusion (LPBF) and Wire-and-Arc Additive Manufacturing (WAAM). The experimental activity was carried out on different metals of industrial interest: a CoCrMo biomedical alloy and an AlSi7Mg0.6 alloy processed by LPBF, and an AlMg4.5Mn alloy and an AISI 304L austenitic stainless steel processed by WAAM. In the case of LPBF, great attention was paid to the influence that the feedstock material and the process parameters exert on the hardness and on the morphological and microstructural features of the produced samples. The analyses, targeted at minimizing microstructural defects, led to process optimization. For heat-treatable LPBF alloys, innovative post-process heat treatments, tailored to the peculiar hierarchical microstructure induced by LPBF, were developed and investigated in depth. The main mechanical properties of the as-built and heat-treated alloys were assessed and well correlated to the specific LPBF microstructure. Results showed that, if properly optimized, samples exhibit a good trade-off between strength and ductility already in the as-built condition; nevertheless, tailored heat treatments succeeded in improving the overall performance of the LPBF alloys. Characterization of the WAAM alloys, instead, evidenced the microstructural and mechanical anisotropy typical of AM metals.
Experiments also revealed an outstanding anisotropy in the elastic modulus of the austenitic stainless steel which, along with other mechanical properties, was explained on the basis of microstructural analyses.
Abstract:
This thesis consists of three essays on information economics. I explore how information is strategically communicated or designed by senders who aim to influence the decisions of a receiver. In the first chapter, I study a cheap talk game between two imperfectly informed experts and a decision maker. The experts receive noisy signals about the state and sequentially communicate the relevant information to the decision maker. I refine the self-serving belief system under uncertainty and characterise the most informative equilibrium that can arise in such environments. In the second chapter, I consider the case where a decision maker seeks advice from a biased expert who also cares about establishing a reputation for being competent. The expert has incentives to misreport her information, but she faces a trade-off between the gain from misrepresentation and the potential reputation loss. I show that the equilibrium is fully revealing if the expert is neither too biased nor too highly reputable. If there is competition between two experts, information transmission always improves. However, when there are more than two experts, the result is ambiguous and depends on the players' prior beliefs over states. In the last chapter, I consider a model of strategic communication where a privately and imperfectly informed sender can persuade a receiver. The sender may receive favorable or unfavorable private information about her preferred state. I describe two mechanisms that are adopted in real-life situations and that theoretically improve equilibrium informativeness given the sender's private information: first, a policy that imposes symmetry constraints on the choice of experiments; second, an approval strategy characterised by a low precision threshold above which the receiver accepts the sender with positive probability, and a higher one above which the sender is accepted with certainty.
Abstract:
This thesis combines research questions in development economics and the economics of culture, with an emphasis on the role of ancestry, gender and language policies in shaping inequality of opportunity and socio-economic outcomes across different segments of a society. The first chapter shows, both theoretically and empirically, that heterogeneity in risk attitudes can be traced to ethnic origins and the ancestral way of living. In particular, I construct a measure of historical nomadism at the ethnicity level and link it to contemporary individual-level data on various proxies of risk attitudes. I exploit exogenous variation in biodiversity to build a novel instrument for nomadism: distance to domestication points. I find that descendants of ethnic groups that historically practiced nomadism (i) are more willing to take risks, (ii) value security less, and (iii) engage in riskier health behavior. The second chapter evaluates the nature of a trade-off between the advantages of female labor participation and the positive effects of female education. This work exploits a triple-difference identification strategy relying on an exogenous spike in the cotton price and spatial variation in suitability for cotton, together with split-sample analyses based on the exogenous allocation of land contracts. Results show that gender differences in parental investment in patriarchal societies can be reinforced by the type of agricultural activity, while positive economic shocks may further exacerbate this bias, additionally crowding out opportunities to invest in female education. The third chapter brings novel evidence on the role of language policy in building national sentiment and affecting educational and occupational choices. Here I focus on the case of Uzbekistan and estimate the effects of exposure to the Latin alphabet on informational literacy, education and career choices.
I show that the alphabet change affects people's informational literacy and the formation of certain educational and labour market trends.