942 results for Modeling Non-Verbal Behaviors
Abstract:
Structural Health Monitoring (SHM) is an emerging research area concerned with improving the maintainability and safety of aerospace, civil and mechanical infrastructures by means of monitoring and damage detection. Guided wave structural testing is an approach to health monitoring of plate-like structures using smart-material piezoelectric transducers. Among the many kinds of transducers, those with a beam-steering feature can perform more accurate surface interrogation. Frequency Steerable Acoustic Transducers (FSATs) are capable of beam steering by varying the input frequency and can consequently detect and localize damage in structures. Guided wave inspection is typically performed with phased arrays, which require a large number of piezoelectric transducers and bring complexity and limitations. To overcome the weight penalty, complex circuitry and maintenance concerns associated with wiring a large number of transducers, new FSATs are proposed that present inherent directional capabilities when generating and sensing elastic waves. The first generation of Spiral FSAT has two main limitations: first, waves are excited or sensed both in one direction and in the opposite one (180° ambiguity); second, only a relatively crude approximation of the desired directivity has been attained. A second generation of Spiral FSAT is proposed to overcome these limitations. Simulation tools become all the more important when a new idea is proposed and begins to be developed. The shaped-transducer concept, and especially the second generation of Spiral FSAT, is a novel idea in guided-wave-based Structural Health Monitoring systems, hence a simulation tool is needed to develop the various design aspects of this innovative transducer. In this work, numerical simulation of the 1st and 2nd generations of Spiral FSAT has been conducted to prove the directional capability of the guided waves excited through a plate-like structure.
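The frequency-to-direction mapping behind FSAT beam steering can be illustrated with a toy wavenumber-spiral model. All parameters below (the spiral band, the linear spiral law, and the non-dispersive plate relation) are illustrative assumptions, not the thesis design:

```python
import math

def steering_angle(freq_hz, k_min=100.0, k_max=300.0, c=2000.0):
    """Map an input frequency to a beam direction for an idealized
    wavenumber-spiral FSAT (all parameters hypothetical).

    The spiral radius grows linearly with azimuth,
    k(theta) = k_min + (k_max - k_min) * theta / (2*pi),
    so each plate wavenumber -- and hence each frequency, via an
    assumed non-dispersive relation k = 2*pi*f / c -- selects one
    azimuthal direction theta.
    """
    k = 2.0 * math.pi * freq_hz / c          # assumed linear dispersion
    if not (k_min <= k <= k_max):
        raise ValueError("frequency outside the steerable band")
    return 2.0 * math.pi * (k - k_min) / (k_max - k_min)  # theta in [0, 2*pi]
```

Sweeping the excitation frequency across the band then sweeps the beam around the full circle, which is the essence of the frequency-steering concept.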
Abstract:
One of the biggest challenges facing contaminant hydrogeology is how to adequately address the uncertainty associated with model predictions. Uncertainty arises from multiple sources, such as interpretive error, calibration accuracy, parameter sensitivity and variability. This critical issue needs to be properly addressed in order to support environmental decision-making processes. In this study, we perform Global Sensitivity Analysis (GSA) on a contaminant transport model for the assessment of hydrocarbon concentration in groundwater. We provide a quantification of the environmental impact and, given the incomplete knowledge of hydrogeological parameters, we evaluate which are the most influential and therefore require greater accuracy in the calibration process. Parameters are treated as random variables and a variance-based GSA is performed in an optimized numerical Monte Carlo framework. The Sobol indices are adopted as sensitivity measures, and they are computed by employing meta-models to characterize the migration process while reducing the computational cost of the analysis. The proposed methodology allows us to extend the number of Monte Carlo iterations, identify the influence of the uncertain parameters, and achieve considerable savings in computational time while retaining acceptable accuracy.
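A variance-based GSA of the kind described can be sketched with the classical pick-freeze Monte Carlo estimator of first-order Sobol indices. This is a minimal illustration on an arbitrary model function; the study additionally substitutes meta-models for the full transport model to cut the cost of each run:

```python
import random

def first_order_sobol(model, dim, n=20000, seed=1):
    """Estimate first-order Sobol indices S_i = V_i / Var(Y) with the
    pick-freeze Monte Carlo estimator: S_i = Cov(Y_A, Y_C_i) / Var(Y),
    where C_i copies matrix B except for column i, taken from A."""
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(dim)] for _ in range(n)]
    B = [[rng.random() for _ in range(dim)] for _ in range(n)]
    yA = [model(x) for x in A]
    mean = sum(yA) / n
    var = sum((y - mean) ** 2 for y in yA) / n
    indices = []
    for i in range(dim):
        # C_i: rows of B with column i replaced by the value from A
        yC = [model(b[:i] + [a[i]] + b[i + 1:]) for a, b in zip(A, B)]
        cov = sum(ya * yc for ya, yc in zip(yA, yC)) / n - mean * (sum(yC) / n)
        indices.append(cov / var)
    return indices
```

For an additive test model such as Y = X1 + 2*X2 with independent uniform inputs, the estimator recovers the analytical indices 0.2 and 0.8 to within Monte Carlo noise.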
Abstract:
This thesis describes the development principle of an industrial feeding machine. The system is to be installed between two industrial machines: it must phase and synchronize the incoming products with the downstream machine. The machine orders the objects using a series of conveyor belts with adjustable speed. The development was carried out at the Liam Laboratory at the request of the company Sitma. Sitma already produced a system of the kind described in this thesis; its wish is therefore to modernize the previous application, since the device that performed the product phasing was a Siemens PLC that is no longer on the market. The thesis covers the study of the application and its modeling in Matlab-Simulink, and then proceeds to an implementation, albeit not a conclusive one, in TwinCAT 3.
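In its simplest kinematic form, the phasing task amounts to choosing a belt speed that closes the position error before the hand-off point. This is a toy sketch under that simplification; the actual system is modeled in Matlab-Simulink and TwinCAT 3:

```python
def correction_speed(gap_m, distance_m, v_nominal_m_s):
    """Belt speed that closes a phase error `gap_m` (positive = product
    arrives late) over the remaining travel `distance_m`, assuming the
    belt runs at constant speed until the hand-off point.

    The downstream slot reaches the hand-off after distance_m / v_nominal
    seconds, so the product must cover distance_m + gap_m in that time.
    """
    t_slot = distance_m / v_nominal_m_s
    return (distance_m + gap_m) / t_slot
```

With a zero gap the function returns the nominal speed, and a late product yields a proportionally faster belt, which is the basic behavior a phasing controller must reproduce.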
Abstract:
Modeling cryolite, which is used in aluminium production, involves several challenges, notably the presence of discontinuities in the solution and the inclusion of the density difference between the solid and liquid phases. To overcome these challenges, several novel elements were developed in this thesis. First, the phase-change problem, commonly called the Stefan problem, was solved in two dimensions using the extended finite element method. A formulation with a specially developed stable Lagrange multiplier and an enriched interpolation was used to impose the melting temperature at the interface. The interface velocity is determined by the jump in the heat flux across the interface and was computed using the Lagrange multiplier solution. Second, convective effects were included by solving the Stokes equations in the liquid phase, also with the extended finite element method. Third, the density change between the solid and liquid phases, generally neglected in the literature, was taken into account by adding a non-zero velocity boundary condition at the solid-liquid interface so that mass is conserved in the system. Analytical and numerical problems were solved to validate the various components of the model and the coupled system of equations. The solutions of the numerical problems were compared with those obtained with Comsol's mesh-displacement algorithm. These comparisons show that the extended finite element model correctly reproduces the phase-change problem with variable densities.
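The two interface conditions described, the Stefan condition and the mass-conservation velocity induced by the density change, take the following standard textbook forms (written here from the usual definitions, not copied from the thesis):

```latex
% Stefan condition: the interface speed v_n follows from the jump in
% the normal conductive heat flux (L = latent heat of fusion)
\[
\rho_s L \, v_n
  = k_s \, \nabla T_s \cdot \mathbf{n} - k_l \, \nabla T_l \cdot \mathbf{n}
\]
% Mass conservation across a front with a density change (solid at
% rest): the liquid must acquire a non-zero normal velocity
\[
\rho_l \left( \mathbf{u}_l \cdot \mathbf{n} - v_n \right) = -\rho_s v_n
\qquad\Longrightarrow\qquad
\mathbf{u}_l \cdot \mathbf{n} = \Bigl(1 - \frac{\rho_s}{\rho_l}\Bigr) v_n
\]
```

The second relation vanishes when the densities are equal, which is why the effect is commonly neglected; here it supplies the non-zero velocity boundary condition mentioned above.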
Abstract:
In a context where unpaved roads may carry heavy loads, a rigorous design method for these pavements, based on mechanistic-empirical principles and on the mechanical behavior of the subgrade soils, is desirable. Mechanistic design combined with damage laws allows the optimization of unpaved road structures as well as the reduction of construction and maintenance costs. The goal of this project is therefore to develop a mechanistic-empirical design method adapted to unpaved roads. The first step was to develop a calculation code to determine the stresses and strains in the pavement. Next, empirical damage laws for unpaved roads were developed. Finally, the calculation methods were used to produce design charts. The development of the calculation code consisted of modeling the pavement as a multilayer elastic system, using Odemark's transformation and Boussinesq's equations to compute the strains under the load. Empirical transfer functions adapted to unpaved roads were also elaborated, in two steps: first, establishing rutting threshold values corresponding to levels of functional and structural pavement condition judged reasonable; then, developing allowable-strain criteria by associating the theoretical strains computed with the code to the damage observed on several in-service roads. The tests were carried out on typical pavements reconstituted in the laboratory and subjected to repeated loading with a load simulator. The pavements were instrumented to measure the strain at the top of the subgrade soil, and damage rates were measured during the tests.
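The two building blocks named above, Odemark's equivalent-thickness transformation and Boussinesq's stress solution, can be sketched as follows. The correction factor and the example moduli are typical textbook values, not those of the project:

```python
import math

def odemark_equivalent_thickness(h, E_layer, E_subgrade, f=0.9):
    """Odemark transform: the thickness of subgrade material that is
    structurally equivalent to a layer of thickness h and modulus
    E_layer (f is the usual empirical correction factor)."""
    return f * h * (E_layer / E_subgrade) ** (1.0 / 3.0)

def boussinesq_sigma_z(q, a, z):
    """Boussinesq vertical stress at depth z under the centre of a
    uniformly loaded circular area (contact pressure q, radius a)."""
    return q * (1.0 - z ** 3 / (a ** 2 + z ** 2) ** 1.5)
```

Transforming each layer to an equivalent subgrade thickness and then evaluating the Boussinesq stress at the transformed depth is the standard method-of-equivalent-thickness recipe for multilayer elastic systems.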
Abstract:
Background - The Dande Health and Demographic Surveillance System (HDSS), located in Bengo province, Angola, covers nearly 65,500 residents living in approximately 19,800 households. This study aims to describe the main causes of death (CoD) that occurred within the HDSS from 2009 to 2012, and to explore associations between demographic or socioeconomic factors and broad mortality groups (Group I, communicable diseases and maternal, perinatal and nutritional conditions; Group II, non-communicable diseases; Group III, injuries; IND, indeterminate). Methods - Verbal autopsies (VA) were performed after death identification during routine HDSS visits. Associations between broad groups of CoD and sex, age, education, socioeconomic position, place of residence and place of death were explored using chi-square tests and by fitting logistic regression models. Results - Of a total of 1488 deaths registered, 1009 verbal autopsies were performed and 798 of these were assigned a CoD based on the 10th revision of the International Classification of Diseases (ICD-10). Mortality was led by communicable diseases (61.0%), followed by indeterminate causes (18.3%), non-communicable diseases (11.6%) and injuries (9.1%). Intestinal infectious diseases, malnutrition and acute respiratory infections were the main contributors to under-five mortality (44.2%). Malaria was the most common CoD among children under 15 years old (38.6%). Tuberculosis, traffic accidents and malaria led the CoD among adults aged 15-49 (13.5%, 10.5% and 8.0%, respectively). Among adults aged 50 or more, diseases of the circulatory system (23.2%) were the major CoD, followed by tuberculosis (8.2%) and malaria (7.7%). Communicable diseases were a more frequent CoD among less educated people (adjusted odds ratio, 95% confidence interval for none vs. 5 or more years of schooling: 1.68, 1.04-2.72). Conclusion - Infectious diseases were the leading CoD in this region.
Verbal autopsies proved useful to identify the main CoD, being an important tool in settings where vital statistics are scarce and death registration systems have limitations.
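As an illustration of the effect measure reported above, a crude (unadjusted) odds ratio with a Wald confidence interval can be computed from a 2x2 table. The study itself reports adjusted estimates from logistic regression; the table in the example below is hypothetical:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Crude odds ratio and Wald confidence interval from a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases.
    SE of log(OR) is sqrt(1/a + 1/b + 1/c + 1/d)."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi
```

An adjusted odds ratio such as the 1.68 reported here is obtained instead from the exponentiated coefficient of a multivariable logistic model, which controls for the other covariates.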
Abstract:
The blast furnace is the world's main ironmaking production unit; it converts iron ore, together with coke and hot blast, into liquid iron (hot metal), which is used for steelmaking. The furnace acts as a counter-current reactor charged with layers of raw material of very different gas permeability. The arrangement of these layers, or burden distribution, is the most important factor influencing the gas flow conditions inside the furnace, which dictate the efficiency of the heat transfer and reduction processes. For proper control, the furnace operators should know the overall conditions in the furnace and be able to predict how control actions affect its state. However, due to high temperatures and pressure, a hostile atmosphere and mechanical wear, it is very difficult to measure internal variables. Instead, the operators have to rely extensively on measurements obtained at the boundaries of the furnace and make their decisions on the basis of heuristic rules and results from mathematical models. It is particularly difficult to understand the distribution of the burden materials because of the complex behavior of the particulate materials during charging. The aim of this doctoral thesis is to clarify some aspects of burden distribution and to develop tools that can aid the decision-making process in the control of the burden and gas distribution in the blast furnace. A relatively simple mathematical model was created for simulating the distribution of the burden material with a bell-less top charging system. The model developed is fast, and it can therefore be used by the operators to gain understanding of the formation of layers for different charging programs. The results were verified by findings from charging experiments using a small-scale charging rig in the laboratory. A basic gas flow model was developed which utilized the results of the burden distribution model to estimate the gas permeability of the upper part of the blast furnace.
This combined formulation for gas and burden distribution made it possible to implement a search for the best combination of charging parameters to achieve a target gas temperature distribution. As this mathematical task is discontinuous and non-differentiable, a genetic algorithm was applied to solve the optimization problem. It was demonstrated that the method was able to evolve optimal charging programs that fulfilled the target conditions. Even though the burden distribution model provides information about the layer structure, it neglects some effects which influence the results, such as mixed layer formation and coke collapse. A more accurate numerical method for studying particle mechanics, the Discrete Element Method (DEM), was used to study some aspects of the charging process more closely. Model charging programs were simulated using DEM and compared with the results from small-scale experiments. The mixed layer was defined and the voidage of mixed layers was estimated. The mixed layer was found to have about 12% less voidage than layers of the individual burden components. Finally, a model for predicting the extent of coke collapse when heavier pellets are charged over a layer of lighter coke particles was formulated based on slope stability theory, and was used to update the coke layer distribution after charging in the mathematical model. In designing this revision, results from DEM simulations and charging experiments for some charging programs were used. The findings from the coke collapse analysis can be used to design charging programs with more stable coke layers.
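The search over charging parameters described above can be illustrated with a minimal real-coded genetic algorithm (elitist truncation selection, uniform crossover, Gaussian mutation). The fitness function here is a placeholder for the burden/gas model's deviation from the target gas temperature distribution:

```python
import random

def genetic_search(fitness, dim, pop_size=40, gens=60, seed=2):
    """Minimal real-coded GA minimizing `fitness` over [0, 1]^dim.
    Keeps the best half of the population each generation and fills
    the rest with uniform-crossover children perturbed by Gaussian
    mutation. A toy stand-in for the charging-program optimization."""
    rng = random.Random(seed)
    pop = [[rng.random() for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        elite = sorted(pop, key=fitness)[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(elite):
            p1, p2 = rng.sample(elite, 2)
            child = [g1 if rng.random() < 0.5 else g2
                     for g1, g2 in zip(p1, p2)]
            # Gaussian mutation, clipped to the unit box
            child = [min(1.0, max(0.0, g + rng.gauss(0.0, 0.05)))
                     for g in child]
            children.append(child)
        pop = elite + children
    return min(pop, key=fitness)
```

Because the elite is carried over unchanged, the best solution never degrades, and no gradient of the objective is needed, which is exactly why a GA suits the discontinuous, non-differentiable charging problem.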
Abstract:
Authentication plays an important role in how we interact with computers, mobile devices, the web, etc. The idea of authentication is to uniquely identify a user before granting access to system privileges. For example, in recent years more corporate information and applications have become accessible via the Internet and intranets. Many employees work from remote locations and need access to secure corporate files. During this time, it is possible for malicious or unauthorized users to gain access to the system. For this reason, it is logical to have some mechanism in place to detect whether the logged-in user is the same user in control of the session, and therefore highly secure authentication methods must be used. We posit that each of us is unique in our use of computer systems. It is this uniqueness that is leveraged to "continuously authenticate users" while they use web software. To monitor user behavior, n-gram models are used to capture user interactions with web-based software. This statistical language model captures the sequences and sub-sequences of user actions, their orderings, and the temporal relationships that make each user unique, providing a model of how each user typically behaves. Users are then continuously monitored during software operation. Large deviations from "normal behavior" can indicate malicious or unintended behavior. This approach is implemented in a system called Intruder Detector (ID) that models user actions as embodied in the web logs generated in response to a user's actions. User identification through web logs is cost-effective and non-intrusive. We perform experiments on a large fielded system with web logs of approximately 4000 users. For these experiments, we use two classification techniques: binary and multi-class classification. We evaluate model-specific differences in user behavior based on coarse-grain (i.e., role) and fine-grain (i.e., individual) analysis.
A specific set of metrics is used to provide valuable insight into how each model performs. Intruder Detector achieves accurate results when identifying legitimate users and user types, and it is also able to detect outliers in role-based user behavior with optimal performance. In addition to web applications, this continuous monitoring technique can be used with other user-based systems such as mobile devices and the analysis of network traffic.
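The n-gram behavior modeling described above can be sketched as follows. This is a deliberately crude version that scores a session by the fraction of previously unseen n-grams; Intruder Detector's actual metrics and classifiers are richer:

```python
from collections import Counter

class NGramBehaviorModel:
    """Toy n-gram model of user action sequences: train on a user's
    action history, then score a new session by the fraction of its
    n-grams never seen in training (0.0 = fully familiar behavior,
    1.0 = entirely novel behavior)."""

    def __init__(self, n=2):
        self.n = n
        self.seen = Counter()

    def _ngrams(self, actions):
        return [tuple(actions[i:i + self.n])
                for i in range(len(actions) - self.n + 1)]

    def fit(self, actions):
        self.seen.update(self._ngrams(actions))

    def anomaly_score(self, actions):
        grams = self._ngrams(actions)
        if not grams:
            return 0.0
        unseen = sum(1 for g in grams if g not in self.seen)
        return unseen / len(grams)
```

Thresholding the anomaly score turns this into the binary decision (legitimate vs. anomalous) mentioned in the abstract; fitting one model per role rather than per user gives the coarse-grain variant.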
Abstract:
Background: Adolescent suicidal behaviors are a public health priority. Objectives: Suicidal behavior is an understudied field in the Azores, and the few existing research studies with Portuguese adolescents only include young people from mainland Portugal. This study aims to analyze the adolescent student population of this island region so as to describe the current situation and to plan community intervention projects that meet the identified needs. Methodology: This is a non-experimental, quantitative and descriptive-correlational study, with the purpose of describing phenomena and finding associations between variables. Results: The results showed that 17.9% of the 484 sampled adolescents reported self-harm behaviors, with 12.7% reporting self-cutting and 5.2% medication overdose or ingestion of toxic substances. Around 15.5% of the adolescents reported suicidal ideation. Additionally, they showed high levels of depressive symptoms (19.9%), ranging from moderate (12%) to severe (7.9%). Conclusion: Adolescents had more self-harm behaviors, more severe depressive symptoms, a lower self-concept and fewer coping strategies than similar populations in mainland Portugal.
Abstract:
Three projects in my dissertation focus on the termination of internal conflicts, based on three critical factors: a combatant's bargaining strategy, perceptions of relative capabilities, and reputation for toughness. My dissertation aims to provide a theoretical framework for understanding war termination beyond the simple two-party bargaining context. The first project focuses on the government's strategic use of peace agreements. It suggests that peace can also be designed strategically to create a better bargain in the near future by changing the current power balance, and thus that the timing and nature of peace is not solely a function of overcoming current barriers to successful bargaining. As long as the government lacks the overwhelming capability to defeat all rebel groups simultaneously, it needs to keep the multiple rebel groups as divided as possible. This strategic partial peace helps to deter multiple rebel groups from collaborating on the battlefield and increases the chances of victory against non-signatories. The second project deals with combatants' perceptions of relative capabilities. While bargaining theories of war suggest that war ends when combatants share a similar perception of their relative capabilities, combatants' perceptions are often not homogeneous. Focusing on information problems, this paper examines when a rebel group underestimates the government's supremacy in relative capabilities and how this heterogeneous perception of the power gap influences negotiated settlements. The third project deals with the tension between different types of reputation in the context of civil wars: 1) a reputation for resolve and 2) a reputation for keeping human rights standards. In the context of civil wars, the use of indiscriminate violence by the government is costly, and as such it signals the government's toughness (or resolve) to rebel groups.
I argue that the rebels are more likely to accept the government's offer when the government has recently engaged in indiscriminate violence against civilians during the conflict. This effect, however, is conditional on the government's international human rights reputation, suggesting that rebel groups interpret this violence as a signal particularly when the government does not have a penchant for attacking civilians in general.
Abstract:
The protein lysate array is an emerging technology for quantifying protein concentration ratios in multiple biological samples. It is gaining popularity, and has the potential to answer questions about post-translational modifications and protein pathway relationships. Statistical inference for a parametric quantification procedure has been inadequately addressed in the literature, mainly due to two challenges: the increasing dimension of the parameter space and the need to account for dependence in the data. Each chapter of this thesis addresses one of these issues. In Chapter 1, an introduction to protein lysate array quantification is presented, followed by the motivations and goals of this thesis work. In Chapter 2, we develop a multi-step procedure for sigmoidal models, ensuring consistent estimation of the concentration level with full asymptotic efficiency. The results obtained in this chapter justify inferential procedures based on large-sample approximations. Simulation studies and real data analysis are used to illustrate the performance of the proposed method in finite samples. The multi-step procedure is simpler in both theory and computation than the single-step least squares method used in current practice. In Chapter 3, we introduce a new model that accounts for the dependence structure of the errors through a nonlinear mixed effects model. We consider a method to approximate the maximum likelihood estimator of all the parameters. Using simulation studies on various error structures, we show that for data with non-i.i.d. errors the proposed method leads to more accurate estimates and better confidence intervals than the existing single-step least squares method.
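A common concrete choice of sigmoidal response in lysate-array work is the four-parameter logistic (4PL) curve. The sketch below shows the curve and its inverse, which reads an observed intensity back to a concentration; this is illustrative only, and the thesis's multi-step estimator is not reproduced here:

```python
def four_pl(x, a, b, c, d):
    """Four-parameter logistic response at concentration x > 0:
    a = lower asymptote, d = upper asymptote,
    c = inflection point (EC50), b = slope parameter."""
    return a + (d - a) / (1.0 + (x / c) ** b)

def invert_four_pl(y, a, b, c, d):
    """Concentration producing response y (a < y < d): the algebraic
    inverse of four_pl, used to map dilution-series intensities back
    to concentration levels."""
    return c * ((d - a) / (y - a) - 1.0) ** (1.0 / b)
```

Fitting (a, b, c, d) to a dilution series and then inverting the fitted curve is the basic parametric quantification step whose inference the thesis studies.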
Abstract:
Non-finite clauses are sentential constituents with a verbal head that lacks a morphological specification for tense and agreement. In this paper I contend that these clauses are defective not only morphologically but also syntactically, in the sense that they all lack some of the functional categories that make up a full sentence. In particular I argue that to-infinitive clauses, gerund(ive) clauses and participial clauses differ among themselves, and with respect to other subordinate clauses, in the degree of structural defectiveness they display, which goes from the almost complete functional structure of the infinitive to the maximal degree of syntactic truncation of participial clauses (analyzed here as verbal small clauses). I also show the significant parallelism that exists in this respect between English and Spanish non-finite clauses, pointing to the implication this may have for a cross-linguistic approach to the cartography of syntactic structures.
Abstract:
The social landscape is filled with an intricate web of species-specific desired objects and courses of action. Humans are highly social animals and, as they navigate this landscape, they need to produce adapted decision-making behaviour. Traditionally, the social and non-social neural mechanisms affecting choice have been investigated using different approaches. Recently, in an effort to unite these findings, two main theories have been proposed to explain how the brain might encode social and non-social motivational decision-making: the extended common currency and the social valuation specific schema (Ruff & Fehr 2014). One way to test these theories is to directly compare neural activity related to social and non-social decision outcomes within the same experimental setting. Here we address this issue by focusing on the neural substrates of social and non-social forms of uncertainty. Using functional magnetic resonance imaging (fMRI), we directly compared the neural representations of reward and risk prediction errors (RePE and RiPE) in social and non-social situations using gambling games. We used a trust betting game to vary uncertainty along a social dimension (trustworthiness), and a card game (Preuschoff et al. 2006) to vary uncertainty along a non-social dimension (pure risk). The trust game was designed to maintain the same structure as the card game. In a first study, we exposed a divide between subcortical and cortical regions when comparing the way these regions process social and non-social forms of uncertainty during outcome anticipation. Activity in subcortical regions reflected social and non-social RePE, while activity in cortical regions correlated with social RePE and non-social RiPE. The second study focused on outcome delivery and integrated the concept of RiPE in non-social settings with those of fairness and monetary utility maximisation in social settings.
In particular, these results corroborate recent models of anterior insula function (Singer et al. 2009; Seth 2013), and expose a possible neural mechanism that weights fairness and uncertainty but not monetary utility. The third study focused on functionally defined regions of the early visual cortex (V1), showing how activity in these areas, traditionally considered purely visual, might reflect motivational prediction errors in addition to known perceptual prediction mechanisms (den Ouden et al. 2012). On the whole, while our results do not unilaterally support one theory or the other in modeling the underlying neural dynamics of social and non-social forms of decision making, they provide a working framework in which both general mechanisms might coexist.
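For a two-outcome gamble, reward and risk prediction errors have simple closed forms based on the expected value and variance of the outcome. This is a didactic sketch of the standard definitions, not the fMRI regressors of the studies above:

```python
def prediction_errors(reward, p_win, win_amount, lose_amount):
    """Reward and risk prediction errors for a gamble paying
    win_amount with probability p_win, else lose_amount:
    RePE = r - E[r]  (outcome minus expected value)
    RiPE = RePE**2 - Var[r]  (squared surprise minus expected risk)."""
    e = p_win * win_amount + (1.0 - p_win) * lose_amount
    var = (p_win * (win_amount - e) ** 2
           + (1.0 - p_win) * (lose_amount - e) ** 2)
    repe = reward - e
    return repe, repe ** 2 - var
```

At maximal uncertainty (p_win = 0.5) the squared surprise of either outcome equals the variance, so the risk prediction error at delivery is zero, which is the signature property these regressors are built around.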
Abstract:
As rural communities experience rapid economic, demographic, and political change, program interventions that focus on the development of community leadership capacity could be valuable. Community leadership development programs have been deployed in rural U.S. communities for the past 30 years by university extension units, chambers of commerce, and other nonprofit foundations. Prior research on program outcomes has largely focused on trainees’ self-reported change in individual leadership knowledge, skills, and attitudes. However, postindustrial leadership theories suggest that leadership in the community relies not on individuals but on social relationships that develop across groups akin to social bridging. The purpose of this study is to extend and strengthen prior evaluative research on community leadership development programs by examining program effects on opportunities to develop bridging social capital using more rigorous methods. Data from a quasi-experimental study of rural community leaders (n = 768) in six states are used to isolate unique program effects on individual changes in both cognitive and behavioral community leadership outcomes. Regression modeling shows that participation in community leadership development programs is associated with increased leadership development in knowledge, skills, attitudes, and behaviors that are a catalyst for social bridging. The community capitals framework is used to show that program participants are significantly more likely to broaden their span of involvement across community capital asset areas over time compared to non-participants. Data on specific program structure elements show that skills training may be important for cognitive outcomes while community development learning and group projects are important for changes in organizational behavior. Suggestions for community leadership program practitioners are presented.