In Situ Characterization of Optical Absorption by Carbonaceous Aerosols: Calibration and Measurement
Abstract:
Light absorption by aerosols has a great impact on climate change. A photoacoustic spectrometer (PA) coupled with aerosol-based classification techniques represents an in situ method that can quantify light absorption by aerosols in real time, yet significant differences have been reported between this method and filter-based methods or the so-called difference method based upon light extinction and light scattering measurements. This dissertation focuses on developing calibration techniques for instruments used in measuring the light absorption cross section, including both particle diameter measurements by the differential mobility analyzer (DMA) and light absorption measurements by the PA. Appropriate reference materials were explored for the calibration/validation of both measurements. The light absorption of carbonaceous aerosols was also investigated to provide a fundamental understanding of the absorption mechanism. The first topic of interest in this dissertation is the development of calibration nanoparticles. In this study, bionanoparticles were confirmed to be a promising reference material for particle diameter as well as ion mobility. Experimentally, bionanoparticles demonstrated outstanding homogeneity in mobility compared to currently used calibration particles. A numerical method was developed to calculate the true distribution and to explain the broadening of the measured distribution. The high stability of bionanoparticles was also confirmed. For PA measurements, three aerosols with spherical or near-spherical shapes were investigated as possible candidates for a reference standard: C60, copper and silver. Comparisons were made between experimental photoacoustic absorption data and Mie theory calculations. This resulted in the identification of C60 particles with a mobility diameter of 150 nm to 400 nm as an absorbing standard at wavelengths of 405 nm and 660 nm.
Copper particles with a mobility diameter of 80 nm to 300 nm are also shown to be a promising reference candidate at a wavelength of 405 nm. The second topic of this dissertation focuses on the investigation of light absorption by carbonaceous particles using the PA. Optical absorption spectra of size- and mass-selected laboratory-generated aerosols consisting of black carbon (BC), BC with a non-absorbing coating (ammonium sulfate and sodium chloride) and BC with a weakly absorbing coating (brown carbon derived from humic acid) were measured across the visible to near-IR (500 nm to 840 nm). The manner in which BC mixed with each coating material was investigated. The absorption enhancement of BC was determined to be wavelength dependent. Optical absorption spectra were also taken for size- and mass-selected smoldering smoke produced from six common wood species in a laboratory-scale apparatus.
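As a rough illustration of the kind of calculation compared against the photoacoustic data, the sketch below evaluates the absorption cross section of a small absorbing sphere in the Rayleigh (small-particle) limit. The particle diameters above (150 nm to 400 nm) actually require full Mie theory, and the function name and refractive index used here are illustrative assumptions, not values from the dissertation.

```python
import math

def rayleigh_absorption_cross_section(d_nm, wavelength_nm, m):
    """Absorption cross section (nm^2) of a sphere of diameter d_nm
    in the Rayleigh limit (d << wavelength), with complex refractive
    index m.  Q_abs = 4 x Im[(m^2 - 1) / (m^2 + 2)], x = pi d / lambda."""
    x = math.pi * d_nm / wavelength_nm            # size parameter
    q_abs = 4.0 * x * ((m**2 - 1) / (m**2 + 2)).imag
    geometric = math.pi * (d_nm / 2.0) ** 2       # geometric cross section
    return q_abs * geometric

# Illustrative refractive index for an absorbing carbonaceous particle.
c_abs = rayleigh_absorption_cross_section(100.0, 405.0, 1.95 + 0.79j)
```

In this limit the cross section scales with the particle volume (d cubed), which is one reason size classification by the DMA matters for interpreting the absorption measurement.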
Abstract:
The goal of this project is to learn the necessary steps to create a finite element model that can accurately predict the dynamic response of a Kohler Engines Heavy Duty Air Cleaner (HDAC). This air cleaner is composed of three glass-reinforced plastic components and two air filters. Several uncertainties arose in the finite element (FE) model due to the HDAC’s component material properties and assembly conditions. To help understand and mitigate these uncertainties, analytical and experimental modal models were created concurrently to perform a model correlation and calibration. Over the course of the project, simple and practical methods were found for future FE model creation. Similarly, an experimental method for the optimal acquisition of experimental modal data was established. After the model correlation and calibration was performed, a validation experiment was used to confirm the FE model’s predictive capabilities.
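Correlation between analytical and experimental modal models is commonly quantified with the Modal Assurance Criterion (MAC). The abstract does not name the exact metric used, so the following minimal sketch shows one standard tool, not necessarily the project's method:

```python
import numpy as np

def mac(phi_a, phi_e):
    """Modal Assurance Criterion between an analytical mode shape
    phi_a and an experimental mode shape phi_e.
    1.0 = perfectly correlated shapes, 0.0 = uncorrelated."""
    num = abs(np.dot(phi_a, phi_e)) ** 2
    return num / (np.dot(phi_a, phi_a) * np.dot(phi_e, phi_e))
```

A MAC matrix over all mode pairs is typically inspected before calibrating FE material and joint parameters: well-correlated diagonal terms indicate the analytical modes match their measured counterparts.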
Abstract:
Detection canines represent the fastest and most versatile means of illicit material detection. This research endeavor, in its most simplistic form, is the improvement of detection canines through training, training aids, and calibration. This study focuses on developing a universal calibration compound with which all detection canines, regardless of detection substance, can be tested daily to ensure that they are working within acceptable parameters. Surrogate continuation aids (SCAs) were developed for peroxide-based explosives, along with the validation of the SCAs already developed within the International Forensic Research Institute (IFRI) prototype surrogate explosives kit. Storage parameters of the SCAs were evaluated to give recommendations to the detection canine community on the best possible training aid storage solution that minimizes the likelihood of contamination. Two commonly used and accepted detection canine imprinting methods were also evaluated for the speed at which the canine is trained and for their reliability. As a result of the completion of this study, SCAs have been developed for explosive detection canine use covering peroxide-based explosives, TNT-based explosives, nitroglycerin-based explosives, tagged explosives, plasticized explosives, and smokeless powders. Through the use of these surrogate continuation aids, a more uniform and reliable system of training can be implemented in the field than is currently used today. By examining the storage parameters of the SCAs, an ideal storage system has been developed using three levels of containment to reduce possible contamination. The developed calibration compound will ease the growing concerns over the legality and reliability of detection canine use by detailing the daily working parameters of the canine, allowing the Daubert rules of evidence admissibility to be applied.
Through canine field testing, it has been shown that the IFRI SCAs outperform other commercially available training aids on the market. Additionally, of the imprinting methods tested, no difference was found in the speed at which the canines are trained or in their reliability in detecting illicit materials. Therefore, if the recommendations of this study are followed, the detection canine community will benefit greatly from the use of scientifically validated training techniques and training aids.
Abstract:
After a crime has occurred, one of the most pressing objectives for investigators is to identify and interview any eyewitness who can provide information about the crime. Depending on his or her training, the investigative interviewer will use (to varying degrees) mostly yes/no questions, some cued and multiple-choice questions, and few open-ended questions. When the witness cannot generate any more details about the crime, one assumes the eyewitness’ memory for the critical event has been exhausted. However, given what we know about memory, is this a safe assumption? In line with the extant literature on human cognition, if one assumes (a) an eyewitness has more memories of the crime available than he or she can access and (b) only explicit probes have been used to elicit information, then one can argue this eyewitness may still be able to provide additional information via implicit memory tests. In accordance with these notions, the present study had two goals: to demonstrate that (1) eyewitnesses can reveal memory implicitly for a detail-rich event, and (2) particularly for brief crimes, eyewitnesses can implicitly reveal memory for event details that were inaccessible when probed for explicitly. Undergraduates (N = 227) participated in a psychological experiment in exchange for research credit. Participants were presented with one of three stimulus videos (brief crime vs. long crime vs. irrelevant video). Then, participants either completed a series of implicit memory tasks or worked on a puzzle for 5 minutes. Lastly, participants were interviewed explicitly about the previous video via free recall and recognition tasks. Findings indicated that participants who viewed the brief crime provided significantly more crime-related details implicitly than those who viewed the long crime. The data also showed that participants who viewed the long crime provided marginally more accurate details during free recall than participants who viewed the brief crime.
Furthermore, participants who completed the implicit memory tasks provided significantly less accurate information during the explicit interview than participants who were not given implicit memory tasks. This study was the first to investigate implicit memory in eyewitnesses of a crime. To determine its applied value, additional empirical work is required.
Abstract:
We survey articles covering how hedge fund returns are explained, using largely non-linear multifactor models that examine the non-linear pay-offs and exposures of hedge funds. We provide an integrated view of the implicit factor and statistical factor models that are largely able to explain the hedge fund return-generating process. We present their evolution through time by discussing pioneering studies that made a significant contribution to knowledge, and also recent innovative studies that examine hedge fund exposures using advanced econometric methods. This is the first review that analyzes very recent studies that explain a large part of hedge fund variation. We conclude by presenting some gaps for future research.
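The baseline case of the factor models surveyed here is a linear regression of fund returns on factor returns. The sketch below is a minimal ordinary-least-squares version with invented coefficients; the surveyed literature is largely about non-linear extensions of exactly this setup.

```python
import numpy as np

def factor_exposures(returns, factors):
    """OLS estimate of alpha and factor betas in the linear model
    r_t = alpha + sum_k beta_k * f_{k,t} + eps_t."""
    X = np.column_stack([np.ones(len(returns)), factors])
    coef, *_ = np.linalg.lstsq(X, returns, rcond=None)
    return coef[0], coef[1:]          # (alpha, betas)

# Hypothetical example: returns built from two factors with known betas.
rng = np.random.default_rng(42)
f = rng.standard_normal((60, 2))
r = 0.01 + f @ np.array([0.5, -0.2])
alpha, betas = factor_exposures(r, f)
```

Non-linear pay-offs are often handled by adding option-like transformed factors (e.g. max(f, 0)) as extra columns of the same design matrix.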
Abstract:
This thesis is focused on improving the calibration accuracy of sub-millimeter astronomical observations. The wavelength range covered by observational radio astronomy has been extended to the sub-millimeter and far infrared with the advancement of receiver technology in recent years. Sub-millimeter observations carried out with airborne and ground-based telescopes typically suffer from 10% to 90% attenuation of the astronomical source signals by the terrestrial atmosphere. The amount of attenuation can be derived from the measured brightness of the atmospheric emission. To do this, knowledge of the atmospheric temperature and chemical composition, as well as the frequency-dependent optical depth at each point along the line of sight, is required. The altitude-dependent air temperature and composition are estimated using a parametrized static atmospheric model, described in Chapter 2, because direct measurements are technically and financially infeasible. The frequency-dependent optical depth of the atmosphere is computed with a radiative transfer model based on the theories of quantum mechanics and, in addition, some empirical formulae. The choice, application, and improvement of third-party radiative transfer models are discussed in Chapter 3. The application of the calibration procedure, described in Chapter 4, to astronomical data observed with the SubMillimeter Array Receiver for Two Frequencies (SMART) and the German REceiver for Astronomy at Terahertz Frequencies (GREAT) is presented in Chapters 5 and 6. The brightnesses of atmospheric emission were fitted consistently to the simultaneous multi-band observation data from GREAT at 1.2 ∼ 1.4 and 1.8 ∼ 1.9 THz with a single set of parameters of the static atmospheric model. On the other hand, the cause of the inconsistency between the model parameters fitted from the 490 and 810 GHz data of SMART was found to be the lack of calibration of the effective cold load temperature.
Besides the correctness of the atmospheric modeling, the stability of the receiver is also important for achieving optimal calibration accuracy. The stabilities of SMART and GREAT are analyzed with a special calibration procedure, namely the “load calibration”. The effects of the drift and fluctuation of the receiver gain and noise temperature on calibration accuracy are discussed in Chapters 5 and 6. Alternative observing strategies are proposed to combat receiver instability. The methods and conclusions presented in this thesis are applicable to the atmospheric calibration of sub-millimeter astronomical observations up to at least 4.7 THz (the H channel frequency of GREAT) for observations carried out from ∼ 4 to 14 km altitude. The procedures for receiver gain calibration and stability tests are applicable to other instruments using the same calibration approach as SMART and GREAT. The structure of the high-performance, modular, and extensible calibration program used and further developed for this thesis work is presented in Appendix C.
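The single-layer version of the correction described above can be sketched as follows: the measured sky brightness temperature yields the zenith opacity, which then scales the attenuated source signal back to its above-atmosphere value. The function names and numbers are illustrative, not from the thesis, and the real procedure uses a full frequency-dependent radiative transfer model rather than one effective temperature.

```python
import math

def zenith_opacity(t_sky, t_atm):
    """Invert the single-layer emission relation
    T_sky = T_atm * (1 - exp(-tau)) for the zenith opacity tau."""
    return -math.log(1.0 - t_sky / t_atm)

def correct_attenuation(t_measured, tau, airmass=1.0):
    """Scale a measured source temperature back above the atmosphere,
    assuming attenuation exp(-tau * airmass) along the line of sight."""
    return t_measured * math.exp(tau * airmass)

tau = zenith_opacity(150.0, 250.0)   # illustrative brightness temperatures (K)
```

The dependence on an assumed effective atmospheric (or cold load) temperature is exactly why the mis-calibrated cold load of SMART shows up as inconsistent fitted model parameters.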
Abstract:
Interactions with mobile devices normally happen in an explicit manner, which means that they are initiated by the users. Yet users are typically unaware that they also interact implicitly with their devices. For instance, our hand pose changes naturally when we type text messages. While the touchscreen captures finger touches, the hand movements during this interaction are unused. If this implicit hand movement is observed, it can be used as additional information to support or to enhance the users’ text entry experience. This thesis investigates how implicit sensing can be used to improve the qualities of existing, standard interaction techniques. In particular, this thesis looks into enhancing front-of-device interaction through implicit sensing of back-of-device and hand movement. We propose to investigate this through machine learning techniques, examining how sensor data obtained via implicit sensing can be used to predict a certain aspect of an interaction. For instance, one of the questions this thesis attempts to answer is whether hand movement during a touch-targeting task correlates with the touch position. This is a complex relationship to understand, but it can best be explained through machine learning. Using machine learning as a tool, such a correlation can be measured, quantified, understood and used to make predictions of future touch positions. Furthermore, this thesis also evaluates the predictive power of the sensor data. We show this through a number of studies. In Chapter 5 we show that probabilistic modelling of sensor inputs and recorded touch locations can be used to predict the general area of future touches on the touchscreen. In Chapter 7, using SVM classifiers, we show that data from implicit sensing during general mobile interactions is user-specific and can be used to identify users implicitly. In Chapter 6, we also show that touch interaction errors can be detected from sensor data.
In our experiments, we show that there are sufficiently distinguishable patterns between normal interaction signals and signals that are strongly correlated with interaction errors. In all studies, we show that a performance gain can be achieved by combining sensor inputs.
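As a toy illustration of identifying users from implicit sensor features, the sketch below uses a nearest-centroid rule as a deliberately simple stand-in for the SVM classifiers used in Chapter 7; the feature vectors and user names are invented.

```python
import numpy as np

def train_centroids(X, y):
    """One mean feature vector per user.  A minimal stand-in for
    training an SVM on implicit-sensing features."""
    return {user: X[y == user].mean(axis=0) for user in np.unique(y)}

def predict_user(centroids, x):
    """Assign a new sensor sample to the user with the nearest centroid."""
    return min(centroids, key=lambda u: np.linalg.norm(x - centroids[u]))
```

An SVM replaces the centroid distance with a learned maximum-margin boundary, which handles overlapping user distributions far better; the pipeline shape (features in, user label out) is the same.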
Abstract:
Accurate assessment of standing pasture biomass in livestock production systems is a major factor in improving feed planning. Several tools are available to achieve this, including the GrassMaster II capacitance meter. This tool relies on an electrical signal, which is modified by the surrounding pasture. There is limited knowledge of how this capacitance meter performs in Mediterranean pastures. Therefore, we evaluated the GrassMaster II under Mediterranean conditions to determine (i) the effect of pasture moisture content (PMC) on the meter’s ability to estimate pasture green matter (GM) and dry matter (DM) yields, and (ii) the spatial variability and temporal stability of corrected meter readings (CMR) and DM in a bio-diverse pasture. Field tests were carried out with typical pastures of the southern region of Portugal (grasses, legumes, mixtures and volunteer annual species) and at different phenological stages (and different PMC). There were significant positive linear relations between CMR and GM (r2 = 0.60, P < 0.01) and between CMR and DM (r2 = 0.35, P < 0.05) for all locations (n = 347). Weak relationships were found for PMC (%) v. slope and coefficient of determination for both GM and DM. A significant linear relation existed for CMR v. GM and DM for PMC > 80% (r2 = 0.57, P < 0.01, RMSE = 2856.7 kg ha–1, CVRMSE = 17.1% for GM; and r2 = 0.51, P < 0.01, RMSE = 353.7 kg ha–1, CVRMSE = 14.3% for DM). Therefore, under the conditions of the current study there exists an optimum PMC (%) for estimating both GM and DM with the GrassMaster II. Repeated measurements taken at the same location on different dates and under different conditions in a bio-diverse pasture showed similar and stable patterns between CMR and DM (r2 = 0.67, P < 0.01, RMSE = 136.1 kg ha–1, CVRMSE = 6.5%). The results indicate that the GrassMaster II in-situ technique could play a crucial role in assessing pasture mass to improve feed planning under Mediterranean conditions.
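The goodness-of-fit measures quoted above (r2, RMSE, and CVRMSE, i.e. RMSE as a percentage of the mean observation) for a simple linear calibration can be computed as in this sketch; the data values in the test are invented.

```python
import numpy as np

def fit_stats(x, y):
    """Fit y = slope * x + intercept and return (r2, rmse, cvrmse),
    where cvrmse is the RMSE expressed as a percentage of mean(y)."""
    slope, intercept = np.polyfit(x, y, 1)
    pred = slope * x + intercept
    ss_res = np.sum((y - pred) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    rmse = np.sqrt(np.mean((y - pred) ** 2))
    cv_rmse = 100.0 * rmse / y.mean()
    return r2, rmse, cv_rmse
```

Applied to CMR (x) versus GM or DM (y), these are exactly the statistics used above to compare calibrations at different pasture moisture contents.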
Abstract:
A new semi-implicit stress integration algorithm for finite strain plasticity (compatible with hyperelasticity) is introduced. Its most distinctive feature is the use of different parameterizations of the equilibrium and reference configurations. Rotation terms (nonlinear trigonometric functions) are integrated explicitly and correspond to a change in the reference configuration. In contrast, relative Green–Lagrange strains (which are quadratic in terms of displacements) represent the equilibrium configuration implicitly. In addition, the adequacy of several objective stress rates in the semi-implicit context is studied. We parametrize both the reference and equilibrium configurations, in contrast with the so-called objective stress integration algorithms, which use coinciding configurations. A single constitutive framework provides the quantities needed by common discretization schemes. This is computationally convenient and robust, as all elements only need to provide pre-established quantities irrespective of the constitutive model. In this work, mixed strain/stress control is used, as well as our smoothing algorithm for the complementarity condition. Exceptional time-step robustness is achieved in elasto-plastic problems: often fewer than one-tenth of the typical number of time increments can be used with a quantifiable effect on accuracy. The proposed algorithm is general: all hyperelastic models and all classical elasto-plastic models can be employed. Plane-stress, shell and 3D examples are used to illustrate the new algorithm. Both isotropic and anisotropic behavior are presented in elasto-plastic and hyperelastic examples.
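One ingredient named above can be made concrete: the Green–Lagrange strain of a deformation gradient F is quadratic in the displacements and vanishes for rigid rotations, which is why the rotation terms can be split off and integrated explicitly. A minimal sketch of the definition (not of the paper's integration algorithm):

```python
import numpy as np

def green_lagrange(F):
    """Green-Lagrange strain E = 0.5 * (F^T F - I) for a
    deformation gradient F; objective, i.e. zero for pure rotations."""
    return 0.5 * (F.T @ F - np.eye(F.shape[0]))
```

For a uniaxial stretch F = diag(1.1, 1, 1) this gives E_11 = 0.5 * (1.1^2 - 1) = 0.105, while any rotation matrix gives exactly zero strain.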
Abstract:
Clouds are important in weather prediction, climate studies and aviation safety. Important parameters include cloud height, type and cover percentage. In this paper, recent improvements in the development of a low-cost cloud height measurement setup are described. It is based on stereo vision with consumer digital cameras. The positioning of the cameras is calibrated using the positions of stars in the night sky. An experimental uncertainty analysis of the calibration parameters is performed. Cloud height measurement results are presented and compared with LIDAR measurements.
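For a rectified, parallel-camera stereo pair, height follows from the textbook triangulation relation depth = baseline × focal length / disparity. The sketch below uses that generic formula with invented numbers; the paper's actual pipeline also depends on the star-based extrinsic calibration described above.

```python
def cloud_height(baseline_m, focal_px, disparity_px):
    """Depth from a rectified stereo pair: Z = B * f / d, with the
    baseline in metres and focal length / disparity in pixels."""
    return baseline_m * focal_px / disparity_px

# Illustrative: a 100 m baseline, 3000 px focal length, 150 px disparity.
height = cloud_height(100.0, 3000.0, 150.0)
```

The relation also shows why calibration uncertainty matters: an error in focal length or disparity propagates directly into the height estimate, motivating the uncertainty analysis mentioned in the abstract.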
Abstract:
The semiarid region of northeastern Brazil, the Caatinga, is extremely important due to its biodiversity and endemism. Measurements of plant physiology are crucial to the calibration of Dynamic Global Vegetation Models (DGVMs) that are currently used to simulate the responses of vegetation in the face of global changes. In field work carried out in an area of preserved Caatinga forest located in Petrolina, Pernambuco, measurements of carbon assimilation (in response to light and CO2) were performed on 11 individuals of Poincianella microphylla, a native species that is abundant in this region. These data were used to calibrate the maximum carboxylation velocity (Vcmax) used in the INLAND model. The calibration techniques used were Multiple Linear Regression (MLR) and data mining techniques such as the Classification And Regression Tree (CART) and K-MEANS. The results were compared to the uncalibrated model. It was found that simulated Gross Primary Productivity (GPP) reached 72% of observed GPP when using the calibrated Vcmax values, whereas the uncalibrated approach accounted for 42% of observed GPP. Thus, this work shows the benefits of calibrating DGVMs using field ecophysiological measurements, especially in areas where field data are scarce or non-existent, such as the Caatinga.
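One of the techniques named above, K-means, can be sketched in a few lines. Here it groups a one-dimensional set of invented Vcmax values into k classes; this is a generic illustration of the method, not the study's actual implementation.

```python
import numpy as np

def kmeans_1d(values, k, iters=20, seed=0):
    """Minimal 1-D K-means: returns (cluster centers, labels).
    Centers are initialized from k distinct sample values, then
    alternately re-assigned and re-averaged."""
    rng = np.random.default_rng(seed)
    centers = rng.choice(values, size=k, replace=False)
    labels = np.zeros(len(values), dtype=int)
    for _ in range(iters):
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = values[labels == j].mean()
    return centers, labels

# Hypothetical Vcmax measurements (umol m-2 s-1) falling into two groups.
vcmax = np.array([52.0, 55.0, 49.0, 101.0, 98.0, 103.0])
centers, labels = kmeans_1d(vcmax, k=2)
```

Each resulting class center can then serve as a candidate Vcmax parameter value for a model run, which is the spirit of using clustering as a calibration technique.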
Abstract:
The primary aim of the research activity presented in this PhD thesis was the development of an innovative hardware and software solution for creating a unique tool for kinematic and electromyographic analysis of the human body in an ecological setting. For this purpose, innovative algorithms have been proposed regarding different aspects of inertial and magnetic data processing: magnetometer calibration and magnetic field mapping (Chapter 2), data calibration (Chapter 3) and the sensor-fusion algorithm. Topics that may conflict with the confidentiality agreement between the University of Bologna and NCS Lab are not covered in this thesis. After developing and testing the wireless platform, research activities were focused on its clinical validation. The first clinical study aimed to evaluate the intra- and interobserver reproducibility of three-dimensional humero-scapulo-thoracic kinematics in an outpatient setting (Chapter 4). A second study aimed to evaluate the effect of Latissimus Dorsi tendon transfer on shoulder kinematics and Latissimus Dorsi activation in humerus intra/extra rotations (Chapter 5). Results from both clinical studies have demonstrated the ability of the developed platform to enter daily clinical practice, providing useful information for patients' rehabilitation.
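A first-order step in magnetometer calibration is hard-iron (constant bias) removal. The sketch below estimates the bias as the midpoint of the per-axis extremes recorded while the sensor is rotated through all orientations; this is a generic textbook step with invented data, not the thesis's (partly confidential) algorithm.

```python
import numpy as np

def hard_iron_offset(samples):
    """Crude hard-iron bias estimate from an (N, 3) array of
    magnetometer samples covering many orientations: the midpoint
    of the per-axis minima and maxima."""
    return 0.5 * (samples.max(axis=0) + samples.min(axis=0))

def apply_calibration(samples, offset):
    """Subtract the estimated bias so samples lie on a sphere
    centered at the origin (soft-iron effects are ignored here)."""
    return samples - offset
```

After bias removal, the corrected field vectors should have roughly constant magnitude regardless of orientation, which is the usual sanity check before sensor fusion.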
Comparison of Explicit and Implicit Methods of Cross-Cultural Learning in an International Classroom
Abstract:
The paper addresses a gap in the literature concerning the difference between enhanced and non-enhanced cross-cultural learning in an international classroom. The objective of the described research was to clarify whether the environment of an international classroom alone could enhance cross-cultural competences significantly, or whether an additional focus on cross-cultural learning as an explicit objective of learning activities would add substantially to the experience. The research question was defined as “how can a specific exercise focused on cross-cultural learning enhance the cross-cultural skills of university students in an international classroom?”. Surveys were conducted among international students in three leading Central European universities in Lithuania, Poland and Hungary to measure the increase in their cross-cultural competences. The Lithuanian and Polish classes were composed of international students and concentrated on International Management/Business topics (explicit method). The Hungarian survey was done in a general business class that simply happened to be international in its composition (implicit method). Overall, our findings show that the implicit method resulted in comparable, and in some respects even stronger, effectiveness than the explicit method. The study method included analyses of students’ individual increases in each study dimension and the construction of a compound measure to note the overall results. Our findings confirm the power of the international classroom as a stimulating environment for latent cross-cultural learning even without specific exercises focused on cross-cultural learning itself. However, the specific exercise did induce additional learning, especially related to cross-cultural awareness and communication with representatives of other cultures, even though the extent of that learning may be interpreted as underwhelming.
The main conclusion of the study is that the diversity of the students engaged in a project provided an environment that supported cross-cultural learning, even without specific culture-focused reflections or exercises.
Abstract:
The study analyses the calibration process of a newly developed high-performance plug-in hybrid electric passenger car powertrain. The complexity of modern powertrains and the increasingly restrictive regulations regarding pollutant emissions are the primary challenges for the calibration of a vehicle’s powertrain. In addition, OEM managers need to know as early as possible whether the vehicle under development will meet the target technical features (emissions included). This leads to the need for advanced calibration methodologies in order to keep the development of the powertrain robust, time-effective and cost-effective. The suggested solution is virtual calibration, which allows the control functions of a powertrain to be tuned before the powertrain is built. The aim of this study is to virtually calibrate the hybrid control unit functions in order to optimize pollutant emissions and fuel consumption. Starting from the model of the conventional vehicle, the powertrain is hybridized and integrated with emissions and aftertreatment models. After its validation, the hybrid control unit strategies are optimized using the Model-in-the-Loop testing methodology. The calibration activities will proceed with the implementation of a Hardware-in-the-Loop environment, which will allow the Engine and Transmission control units to be tested and calibrated effectively, in a time- and cost-saving manner.
Abstract:
In this project an optimal pose selection method for the calibration of an overconstrained cable-driven parallel robot is presented. This manipulator belongs to a subcategory of parallel robots in which the classic rigid "legs" are replaced by cables. Cables are flexible elements that bring both advantages and disadvantages to robot modeling. For this reason there are many open research issues, and the calibration of geometric parameters is one of them. The identification of the geometry of a robot, in particular, is usually called kinematic calibration. Many methods have been proposed in past years for the solution of this problem. Although these methods are based on calibration using different kinematic models, their robustness and reliability decrease as the robot's geometry becomes more complex. This makes the selection of the calibration poses more complicated: the position and orientation of the end-effector in the workspace become important selection criteria. Thus, in general, it is necessary to evaluate the robustness of the chosen calibration method by means of, for example, a parameter such as the observability index. In fact, it is known from theory that maximizing this index identifies the best choice of calibration poses, and consequently, using this pose set may improve the calibration process. The objective of this thesis is to analyze optimization algorithms which aim to calculate an optimal choice of poses in both quantitative and qualitative terms. Quantitatively, because it is of fundamental importance to understand how many poses are needed: a greater number of poses does not necessarily lead to a better result. Qualitatively, because it is useful to understand whether the selected combination of poses actually gives additional information in the process of identifying the parameters.
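The observability index mentioned above has several variants in the kinematic calibration literature; a common one (often called O1) is the geometric mean of the singular values of the identification Jacobian, normalized by the square root of the number of poses. The sketch below assumes that variant, which may differ from the index chosen in the thesis.

```python
import numpy as np

def observability_index(jacobian, n_poses):
    """O1-style index: geometric mean of the singular values of the
    identification Jacobian, divided by sqrt(n_poses).  Larger values
    indicate a better-conditioned calibration pose set.  Assumes all
    singular values are strictly positive (identifiable parameters)."""
    s = np.linalg.svd(jacobian, compute_uv=False)
    return float(np.exp(np.log(s).mean()) / np.sqrt(n_poses))
```

A pose-selection loop would stack the per-pose Jacobian blocks, evaluate this index for each candidate pose set, and keep the set that maximizes it.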