941 results for Differences-in-Differences method
Abstract:
This paper presents a kernel density correlation based nonrigid point set matching method and shows its application in statistical model based 2D/3D reconstruction of a scaled, patient-specific model from an uncalibrated X-ray radiograph. In this method, both the reference point set and the floating point set are first represented using kernel density estimates. A correlation measure between these two kernel density estimates is then optimized to find a displacement field such that the floating point set is moved to the reference point set. Regularizations based on the overall deformation energy and the motion smoothness energy are used to constrain the displacement field for a robust point set matching. Incorporating this nonrigid point set matching method into a statistical model based 2D/3D reconstruction framework, we can reconstruct a scaled, patient-specific model from noisy edge points that are extracted directly from the X-ray radiograph by an edge detector. Our experiment conducted on datasets of two patients and six cadavers demonstrates a mean reconstruction error of 1.9 mm.
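A minimal numerical sketch of the kernel density correlation idea (Gaussian kernels; the bandwidth and all names are illustrative, not taken from the paper): the correlation of two Gaussian KDEs reduces, up to constants, to a sum of Gaussians of pairwise point distances, and the measure grows as the floating set moves onto the reference set.

```python
import numpy as np

def kernel_correlation(ref, flo, sigma=1.0):
    """Gaussian-kernel correlation between two 2D point sets.

    Each set is represented by a KDE; the integral of the product of the
    two KDEs reduces to a sum of Gaussians of pairwise distances.
    """
    d2 = ((ref[:, None, :] - flo[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (4 * sigma ** 2)).sum() / (ref.shape[0] * flo.shape[0])

rng = np.random.default_rng(0)
pts = rng.normal(size=(50, 2))
aligned = kernel_correlation(pts, pts)        # floating set on the reference
shifted = kernel_correlation(pts, pts + 3.0)  # floating set displaced
# the measure is larger when the floating set coincides with the reference
```

An optimizer would move the floating points (subject to the deformation and smoothness regularizers described above) so as to increase this measure.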
Abstract:
Discussion of a new, innovative method for dating rocks, called laser ablation split stream (LASS) petrochronology, which is an in situ method that couples geochronological and geochemical data of minerals that remain in the rock matrix. The talk focuses on the application of this technique to U-Th-Pb dating of the phosphate minerals monazite and xenotime in metamorphic rocks. Examples from the Ruby Range in southwestern Montana and metamorphic core complexes in the northern Idaho panhandle will be explored.
Abstract:
An algorithm based on ‘vertex priority values’ has been proposed to uniquely sequence and represent the connectivity matrix of chemical structures of cyclic/acyclic functionalized achiral hydrocarbons and their derivatives. In this method, ‘vertex priority values’ have been assigned in terms of atomic weights, subgraph lengths, loops, and heteroatom contents. Subsequently, the terminal vertices have been considered upon completing the sequencing of the core vertices. This approach provides a multilayered connectivity graph, which can be put to use in comparing two or more structures or parts thereof for any given purpose. Furthermore, the basic vertex connection tables generated here are useful in the computation of characteristic matrices/topological indices and automorphism groups, and in storing, sorting and retrieving chemical structures from databases.
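A toy version of the vertex-ordering idea, for illustration only: rank the atoms of a small molecular graph by a priority tuple (here just atomic weight, then degree). The paper's actual priority values also use subgraph lengths, loops and heteroatom content, which this sketch omits.

```python
# atomic weights for a few common elements (g/mol)
WEIGHTS = {"C": 12.011, "N": 14.007, "O": 15.999, "H": 1.008}

def vertex_priority_order(atoms, bonds):
    """atoms: list of element symbols; bonds: list of (i, j) index pairs.
    Returns vertex indices sorted by decreasing priority
    (atomic weight first, then degree as a tie-breaker)."""
    degree = [0] * len(atoms)
    for i, j in bonds:
        degree[i] += 1
        degree[j] += 1
    return sorted(range(len(atoms)),
                  key=lambda v: (WEIGHTS[atoms[v]], degree[v]),
                  reverse=True)

# heavy-atom skeleton of ethanol: C-C-O
order = vertex_priority_order(["C", "C", "O"], [(0, 1), (1, 2)])
```

In this tiny example the oxygen outranks both carbons, and the two-bond carbon outranks the terminal one, which matches the "core vertices before terminal vertices" ordering described above.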
Abstract:
Background: Workplace Health Promotion (WHP) is becoming increasingly important. Individualized exercise counselling provides a person-oriented WHP measure that sets out to increase the level of sport activity. The aim of the present study was to test the efficacy of individualized exercise counselling in the workplace. Method: 86 employees received counselling in 60–90-min sessions. Their level of sport activity was ascertained both during the intervention and 6 weeks later. At T2, the perceived impulse to change their personal behaviour was also determined. The data were analysed by calculating Spearman's rank correlations and conducting t-tests. Results: Overall, the level of sport activity increased from 173 to 228 min/week (ES = 0.34). Particularly those who had been inactive (ES = 0.76), active < 90 min/week (ES = 0.63) or active 90–180 min/week (ES = 0.53) at T1 reported a higher level of sport activity at T2. Discussion: The findings speak in favour of integrating individualized exercise counselling into WHP, whereby the person-oriented measure should be supplemented by environmental strategies.
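The reported effect sizes read as standardized mean differences; a small sketch of that arithmetic, where the pooled standard deviation is inferred from the reported numbers rather than stated in the abstract:

```python
def cohens_d(mean_pre, mean_post, sd):
    """Standardized mean difference (Cohen's d)."""
    return (mean_post - mean_pre) / sd

# Back out the SD implied by the overall result
# (173 -> 228 min/week, ES = 0.34); this SD is an inference,
# not a figure reported in the abstract.
implied_sd = (228 - 173) / 0.34
es = cohens_d(173, 228, implied_sd)
```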
Abstract:
In this investigation, bromine-77 was produced with a medical cyclotron and imaged with gamma cameras. Br-77 emits a 240 keV photon with a half-life of 56 hours. The C-Br bond is stronger than the C-I bond, and bromine is not collected in the thyroid. Bromine can be used to label many organic molecules by methods analogous to radioiodination. The only North American source of Br-77 in the 1970s and 1980s was Los Alamos National Laboratory, but it discontinued production in 1989. In this method, a (p,3n) reaction on Br-79 produces Kr-77, which decays with a 1.2-hour half-life to Br-77. A cyclotron-generated 40 MeV proton beam is incident on a nearly saturated NaBr or LiBr solution contained in a copper or titanium target. A cooling chamber through which helium gas is flowed separates the solution from the cyclotron beam line. Helium gas is also flowed through the solution to extract Kr-77 gas. The mixture flows through a nitrogen trap where Kr-77 freezes and is allowed to decay to Br-77. Eight production runs were performed, three with a copper target and five with a titanium target, with yields of 40, 104, 180, 679, 1080, 685, 762 and 118 µCi, respectively. Gamma-ray spectroscopy has shown the product to be very pure; however, corrosion has been a major obstacle, causing the premature retirement of the copper target. Phantom and in-vivo rat nuclear images, and an autoradiograph in a rat, are presented. The quality of the nuclear scans is reasonable, and the autoradiograph reveals high isotope uptake in the renal parenchyma, a more moderate but uniform uptake in pulmonary and hepatic tissue, and low soft-tissue uptake. There is no isotope uptake in the brain or the gastric mucosa.
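The Kr-77 → Br-77 in-growth in the trap can be sketched with the Bateman equation, using the half-lives quoted above (the initial atom count is arbitrary):

```python
import math

T_KR, T_BR = 1.2, 56.0          # half-lives in hours (from the abstract)
lam1 = math.log(2) / T_KR       # Kr-77 decay constant (1/h)
lam2 = math.log(2) / T_BR       # Br-77 decay constant (1/h)

def br77_atoms(n_kr0, t):
    """Bateman solution: Br-77 atoms at time t (hours), starting from
    n_kr0 Kr-77 atoms and no Br-77."""
    return n_kr0 * lam1 / (lam2 - lam1) * (math.exp(-lam1 * t) - math.exp(-lam2 * t))

# time at which the Br-77 inventory peaks
t_peak = math.log(lam1 / lam2) / (lam1 - lam2)
```

With these half-lives the Br-77 inventory peaks roughly 7 hours after trapping, which is why the frozen Kr-77 is simply left to decay.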
Abstract:
Introduction. Erroneous answers in studies on the misinformation effect (ME) can be reduced in different ways. In some studies, the ME was reduced by source-monitoring (SM) questions, warnings, or a low credibility of the source of post-event information (PEI). Results are inconsistent, however. Of course, a participant can only deliberately decide to refrain from reporting a critical item when the difference between the original event and the PEI is distinguishable in principle. We were interested in the extent to which the influence of erroneous information on a central aspect of the original event can be reduced by different means applied singly or in combination. Method. With a 2 (credibility: high vs. low) x 2 (warning: present vs. absent) between-subjects design and an additional control group that received neither misinformation nor a warning (N = 116), we examined the above-mentioned factors' influence on the ME. Participants viewed a short video of a robbery. The critical item suggested in the PEI was that the victim was given a kick by the perpetrator (which in fact he was not). The memory test consisted of a two-alternative forced-choice recognition test followed by an SM test. Results. To our surprise, neither a main effect of erroneous PEI nor a main effect of credibility was found. The error rates for the critical item in the control group (50%) as well as in the high (65%) and low (52%) credibility conditions without warning did not differ significantly. A warning about possible misleading information in the PEI significantly reduced the influence of misinformation in both credibility conditions, by 32-37%. Using an SM question also significantly reduced the error rate, but only in the high-credibility no-warning condition. Conclusion and Future Research. Our results show that, contrary to a warning or the use of an SM question, low source credibility did not reduce the ME. The most striking finding, however, was the absence of a main effect of erroneous PEI.
Due to the high error rate in the control group, we suspect that the wrong answers might have been caused either by the response format (recognition test) or by autosuggestion possibly promoted by the high schema-consistency of the critical item. First results of a post-study in which we used open-ended questions before the recognition test support the former assumption. Results of a replication of this study using open-ended questions prior to the recognition test will be available by June.
Abstract:
Measured rates of intrinsic clearance determined using cryopreserved trout hepatocytes can be extrapolated to the whole animal as a means of improving modeled bioaccumulation predictions for fish. To date, however, the intra- and interlaboratory reliability of this procedure has not been determined. In the present study, three laboratories determined the in vitro intrinsic clearance of six reference compounds (benzo[a]pyrene, 4-nonylphenol, di-tert-butylphenol, fenthion, methoxychlor and o-terphenyl) by conducting substrate depletion experiments with cryopreserved trout hepatocytes from a single source. o-Terphenyl was excluded from the final analysis due to non-first-order depletion kinetics and significant loss from denatured controls. For the other five compounds, intralaboratory variability (% CV) in measured in vitro intrinsic clearance values ranged from 4.1 to 30%, while interlaboratory variability ranged from 27 to 61%. Predicted bioconcentration factors based on in vitro clearance values exhibited a reduced level of interlaboratory variability (5.3-38% CV). The results of this study demonstrate that cryopreserved trout hepatocytes can be used to reliably obtain the in vitro intrinsic clearance of xenobiotics, which supports the application of this in vitro method in a weight-of-evidence approach to chemical bioaccumulation assessment.
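The substrate depletion approach fits a first-order decay to measured concentrations and scales the rate constant by the cell density; a minimal sketch (the units and the synthetic data are illustrative, not from the study):

```python
import math

def intrinsic_clearance(times_h, concs, cells_per_ml):
    """First-order depletion rate from a log-linear least-squares fit,
    scaled to an in vitro intrinsic clearance (mL/h per cell)."""
    n = len(times_h)
    logs = [math.log(c) for c in concs]
    tbar = sum(times_h) / n
    ybar = sum(logs) / n
    k = -sum((t - tbar) * (y - ybar) for t, y in zip(times_h, logs)) / \
        sum((t - tbar) ** 2 for t in times_h)
    return k / cells_per_ml

# synthetic depletion data with rate constant k = 0.5 / h
times = [0, 0.5, 1, 2, 3]
concs = [100 * math.exp(-0.5 * t) for t in times]
cl = intrinsic_clearance(times, concs, 1e6)   # 1e6 cells/mL assumed
```

Non-first-order kinetics, as seen for o-terphenyl, would show up here as curvature in the log-concentration plot and invalidate the fit.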
Abstract:
The volume presents planktological and chemical data collected during cruise No. 51 of RV "Meteor" to the equatorial Atlantic (FGGE '79) from February to June 1979. A standard section along the meridian 22° W across the equator was sampled ten times between 2° S and 3° N. Together with a temperature and salinity profile, concentrations of oxygen, nutrients and chlorophyll a were analyzed in water samples down to a depth of 250 m. Solar radiation and light depths were measured for determination of primary productivity of the euphotic zone according to the simulated in situ method. Zooplankton biomass was estimated in 5 depth intervals down to 300 m by means of a multiple opening and closing net equipped with a mesh size of 100 µm.
Abstract:
Fractal and multifractal are concepts that have grown increasingly popular in soil analysis in recent years, along with the development of fractal models. One of the common steps is to calculate the slope of a linear fit, usually by the least-squares method. This should not be a special problem; however, in many situations with experimental data the researcher has to select the range of scales at which to work, neglecting the remaining points, in order to achieve the linearity that this type of analysis requires. Robust regression is a form of regression analysis designed to circumvent some limitations of traditional parametric and non-parametric methods. With a robust method we do not have to assume that an outlier is simply an extreme observation drawn from the tail of a normal distribution that does not compromise the validity of the regression results. In this work we have evaluated the capacity of robust regression to select the points used from the experimental data, trying to avoid subjective choices. Based on this analysis we have developed a new working methodology that involves two basic steps:
- Evaluation of the improvement of the linear fit when consecutive points are eliminated, based on the regression p-value; in this way we consider the implications of reducing the number of points.
- Evaluation of the significance of the slope difference between the fit using the two extreme points and the fit using the selected points.
We compare the results of applying this methodology and the commonly used least-squares one. The data selected for these comparisons come from experimental soil roughness transects and from simulations based on the midpoint displacement method with added trends and noise. The results are discussed, indicating the advantages and disadvantages of each methodology.
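The abstract does not name a specific robust estimator; as an illustration of why robust regression helps here, a Theil-Sen slope (median of all pairwise slopes) shrugs off a single corrupted point at the end of a scaling range that visibly biases ordinary least squares:

```python
import statistics

def theil_sen_slope(xs, ys):
    """Median of all pairwise slopes: a robust slope estimator that is
    insensitive to a minority of outlying points."""
    slopes = [(ys[j] - ys[i]) / (xs[j] - xs[i])
              for i in range(len(xs)) for j in range(i + 1, len(xs))
              if xs[j] != xs[i]]
    return statistics.median(slopes)

def ols_slope(xs, ys):
    """Ordinary least-squares slope, for comparison."""
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    num = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    den = sum((x - xbar) ** 2 for x in xs)
    return num / den

# a scaling line of slope 2 with one corrupted point at the upper end
# of the range (the situation described in the abstract)
xs = list(range(10))
ys = [2.0 * x for x in xs]
ys[-1] += 30.0                      # outlier
robust = theil_sen_slope(xs, ys)    # recovers the underlying slope
ordinary = ols_slope(xs, ys)        # pulled away by the outlier
```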
Abstract:
In this project an experimental test method is proposed to differentiate gypsum mortars from gypsum-lime mortars by establishing their percentage of Ca(OH)2, since currently there is no standard method to determine this difference. Three different test methods were tried; the most suitable was the method for determining calcium hydroxide in building limes covered by the Spanish standard UNE-EN 459-2:2010, although modifications were added to improve its applicability. Normative methods used on other construction products were also analysed to test their sensitivity for determining calcium hydroxide, evaluating their repeatability against the existing standards; the alteration of the calcium sulfate determination method by the addition of slaked lime, a key component in the hydration of gypsum, was also assessed. The proposed experimental test method makes it possible to calculate the calcium hydroxide content as a percentage. As a continuation of this work, a modification of the existing regulations on gypsum materials is expected; the proposed method must be validated by prescribers, manufacturers and end clients.
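Converting a titration result into a Ca(OH)2 percentage is plain stoichiometry; a generic illustration of that arithmetic (the actual UNE-EN 459-2 procedure, with its extraction step and exact reagents, has more detail than this sketch):

```python
# Illustrative acid-base stoichiometry for the calcium hydroxide
# content of a mortar sample; all input values are made up.
M_CAOH2 = 74.09          # molar mass of Ca(OH)2, g/mol

def caoh2_percent(v_hcl_ml, c_hcl_mol_l, sample_g):
    """Ca(OH)2 + 2 HCl -> CaCl2 + 2 H2O,
    so moles of Ca(OH)2 = moles of HCl consumed / 2."""
    mol_caoh2 = (v_hcl_ml / 1000.0) * c_hcl_mol_l / 2.0
    return 100.0 * mol_caoh2 * M_CAOH2 / sample_g

pct = caoh2_percent(v_hcl_ml=13.5, c_hcl_mol_l=0.5, sample_g=1.0)
```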
Abstract:
The purpose of this thesis is the implementation of efficient grid adaptation methods based on the adjoint equations within the framework of finite volume methods (FVM) for unstructured-grid solvers. The adjoint-based methodology adapts the grid to improve the accuracy of a functional output of interest, typically a scalar engineering quantity obtained by post-processing the solution, such as the aerodynamic drag or lift. The methodology rests on a posteriori functional error estimation using the dual-weighted residual (DWR) method, in which the error in a functional output is related to local residual errors of the primal solution through the adjoint variables; these are obtained by solving the corresponding adjoint problem for the chosen functional. The common approach to introducing the DWR method within the FVM framework involves an auxiliary embedded grid obtained by uniform refinement of the initial mesh. Storing this mesh demands substantial computational resources, e.g. over an order of magnitude more memory than the initial flow problem in 3D cases. In this thesis an alternative methodology is proposed: the functional error estimate is reformulated on a coarser mesh level, using the τ-estimation technique to approximate the truncation errors that enter the DWR estimate. An output-based adaptive algorithm is then designed in such a way that the basic ingredients of the standard adjoint method are retained while the associated computational cost is significantly reduced. Both the standard and the proposed adjoint-based adaptive methodologies have been incorporated into a finite volume flow solver commonly used in the European aeronautical industry. The influence of the different numerical parameters involved in the algorithm has been investigated. Finally, the proposed method is compared against other grid adaptation approaches, and its computational efficiency is demonstrated on a series of representative test cases of aeronautical interest.
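The DWR functional error estimate that underpins the adaptation can be written compactly in a generic form (the notation below is assumed for illustration, not taken from the thesis):

```latex
% Error in the output functional J for the discrete solution u_h,
% estimated by weighting the local primal residuals R_h with the
% discrete adjoint solution \psi_h over the cells \Omega_k:
J(u) - J(u_h) \;\approx\; \sum_{k} \left.\psi_h^{\top}\right|_{\Omega_k}
  \left. R_h(u_h)\right|_{\Omega_k},
\qquad \text{where } \psi \text{ solves }
\left(\frac{\partial R}{\partial u}\right)^{\!\top}\!\psi
  \;=\; \frac{\partial J}{\partial u}.
```

The standard approach evaluates the residuals entering this sum on an embedded refined grid; the method proposed in the thesis approximates them on a coarser mesh level via τ-estimation instead.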
Abstract:
In recent years a great number of high-speed railway bridges have been constructed within Spain. Due to the demanding geometrical requirements of high-speed train routes, these bridges frequently reach remarkable lengths, which is the main reason why railway bridges are overall longer than roadway bridges. Along the same lines, it is worth highlighting the importance of high-speed train braking forces compared to those of road vehicles. While vehicle braking forces can be handled easily, railway braking forces demand the existence of a fixed point. It is generally located at an abutment, where the no-displacement requirement can be achieved more easily; in some other cases the fixed point is placed at one of the interior columns. As a consequence of these bridges' length and the need for a fixed point, temperature, creep and shrinkage strains lead to fairly significant deck displacements, which grow with the distance to the fixed point. These displacements need to be accommodated by the deformation of the piers and bearings. Regular elastomeric bearings cannot allow such displacements and are therefore not suitable for this task. For this reason, the use of sliding PTFE POT bearings has been extensive practice, mainly because they permit sliding with low friction. This is not the only reason for the widespread use of these bearings in high-speed railway bridges: the vertical loads at each bent are significantly higher than in roadway bridges, mainly because the live loads due to train traffic are much greater than those of vehicles, and the ballasted track represents a far from negligible permanent load. All this together increases the vertical loads to be withstood. This high vertical load demand rules out conventional bearings because of excessive compression, whereas the design of PTFE POT bearings allows them to accommodate this level of compression.
The high-speed railway bridge configuration explained above leads to a key fact regarding longitudinal horizontal loads (such as braking forces): these loads are transmitted entirely to the fixed point alone. The piers do not receive longitudinal horizontal loads, since the PTFE POT bearings installed on them are longitudinally free-sliding. This means that the longitudinal horizontal actions on top of the piers are not forces but imposed displacements. This feature requires these piers to be designed in a different manner than when piers are elastically linked to the superstructure, as is the case with elastomeric bearings. In response to the above, the main goal of this Thesis is to present a Design Method for columns fitted with either longitudinally fixed POT bearings or longitudinally free PTFE POT bearings within bridges with a fixed-point deck configuration, applicable to railway and road bridges. The method was developed with the intention of accounting for all major parameters that play a role in the behavior of these columns. The long process that has finally led to the method's formulation is rooted in the understanding of this behavior, and all the assumptions made in elaborating the formulations have been made in favor of conservative results. The singularity of the analysis of columns with this configuration is due to a combination of different aspects. One of the first steps of this work was to study these design aspects and understand the role each plays in the column's response. Among them, special attention was dedicated to the column's own creep under permanent actions such as rheological deck displacements, and to the implications of longitudinally guided PTFE POT bearings for the design of the column. The result of this study is the Design Method presented in this Thesis, which allows a compliant vertical reinforcement distribution along the column to be worked out.
The design of horizontal reinforcement for shear forces is not addressed in this Thesis. The method's formulations are meant to be applicable to the greatest number of cases, leaving many of the parameter values to the engineer's judgement; in this regard, the method is a helpful tool for a wide range of cases. The widespread use of European standards in recent years, in particular the so-called Eurocodes, is one of the reasons why this Thesis has been developed in accordance with them. The same approach has been followed for the bearing design implications, which are covered by the relatively recent European standard EN 1337. One of the most relevant aspects that this work has taken from the Eurocodes is the security format for non-linear calculations. The simplified biaxial bending approach featured in the Design Method presented in this work also relies on Eurocode recommendations. The columns under analysis are governed by a set of dimensionless parameters that are presented in this work. The identification of these parameters is helpful for design purposes, since two columns with identical dimensionless parameters may be designed together. The first group of these parameters has to do with the cross-sectional behavior, represented in the bending-curvature diagrams; a second group defines the column's response. Thanks to this identification of the governing dimensionless parameters, it has been possible to produce what have been named Dimensionless Design Curves, which allow a preliminary vertical reinforcement distribution for a column to be obtained in a reduced time. These curves are nevertheless of limited practical use, firstly because each family of curves refers to specific values of many different parameters, and secondly because the use of computers allows extremely quick and accurate calculations.
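The imposed-displacement character of the pier loading can be illustrated with first-order arithmetic: thermal, creep and shrinkage strains act over the distance from the pier to the fixed point. The strain values below are generic illustrative figures, not taken from the thesis.

```python
# Imposed longitudinal displacement at a pier head in a fixed-point
# deck configuration; all numerical values are assumptions.
ALPHA = 1.0e-5          # concrete thermal expansion coefficient (1/degC)
DELTA_T = 30.0          # uniform temperature variation (degC), assumed
EPS_CS = 40.0e-5        # combined creep + shrinkage strain, assumed

def imposed_displacement(distance_to_fixed_point_m):
    """Deck displacement (m) that the pier and its free-sliding
    bearing must accommodate, proportional to the distance
    from the fixed point."""
    return (ALPHA * DELTA_T + EPS_CS) * distance_to_fixed_point_m

d_near = imposed_displacement(20.0)    # pier close to the fixed point
d_far = imposed_displacement(200.0)    # pier far from the fixed point
```

The ten-fold growth of the displacement with distance is what makes long bridges with a single fixed point sensitive to this design situation.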
Abstract:
In an early paper Herbert Mohring (J. Polit. Econ., 69 (1961)) presented a model for land rent distribution yielding the well-known result that the price of land must fall with the distance from the city center to offset transportation costs. Our paper is an extension of Mohring's model in which we relax some of his drastic simplifying assumptions. This extended model has been incorporated in a method for economic evaluation of city master plans which has been applied to a Swedish city. In this method the interdependence among housing, heating, and transportation, the durability of urban structures, and the uncertainty of future demand are explicitly considered within a cost-benefit approach. Some empirical results from this pilot study concerning land rent distributions are also presented here.
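The rent-distance result can be illustrated in its simplest linear form: at spatial equilibrium, rent plus transport cost is constant across locations, so rent falls linearly with distance. The numerical values here are illustrative, not drawn from either paper.

```python
# Minimal linear version of the rent-distance trade-off; values assumed.
R0 = 100.0     # rent at the city center (per unit of land)
T = 4.0        # transport cost per unit distance

def land_rent(x):
    """Rent at distance x from the center: falls at exactly the rate
    needed to offset transportation costs."""
    return R0 - T * x

# total locational cost (rent + transport) is the same everywhere
cost_near = land_rent(2.0) + T * 2.0
cost_far = land_rent(15.0) + T * 15.0
```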
Abstract:
In the current technological era countless advances have been made. In particular, interest in satellite communications and in increasingly demanding mobile terminals has opened new lines of research in the field of telecommunications; specifically, the study of horn antennas used as feeders in satellite systems has generated great interest in the academic community and in industry. This Final Degree Project studies the Modal Analysis Method, by which the behaviour of the fields in closed regions with discontinuities can be analysed. The discontinuities studied are cylindrical geometries with an abrupt increase in the output radius; the inverse case, cylindrical geometries with smaller output radii, is also addressed, which makes the formation of corrugations possible. The project is a continuation of an earlier one focused on the optimization of smooth-walled conical horns. Although the method can be applied to any type of geometry, in this project it is applied only to cylindrical geometries, since a cylindrical horn feed with corrugated walls is designed. The different mathematical formulations are studied and implemented in the MATLAB computing environment, which makes it possible to generate results such as the radiation pattern of the designed antenna; these results are contrasted with a commercial analysis program. The modal analysis method proves to be a robust and consistent computational tool, which saves computation time and yields results similar to those of other commercial electromagnetic analysis tools.
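The core of the modal analysis (mode matching) approach can be sketched in generic form (notation assumed for illustration, not taken from the project): the transverse fields on both sides of a step discontinuity are expanded in the modal bases of each guide and matched over the common aperture.

```latex
% Continuity of the transverse electric and magnetic fields over the
% common aperture S at the junction between guides I and II, with
% forward/backward modal amplitudes a_n, b_n and modal fields e_n, h_n:
\sum_{n}\,(a_n^{\mathrm{I}} + b_n^{\mathrm{I}})\,\mathbf{e}_n^{\mathrm{I}}
  \;=\; \sum_{m}\,(a_m^{\mathrm{II}} + b_m^{\mathrm{II}})\,\mathbf{e}_m^{\mathrm{II}},
\qquad
\sum_{n}\,(a_n^{\mathrm{I}} - b_n^{\mathrm{I}})\,\mathbf{h}_n^{\mathrm{I}}
  \;=\; \sum_{m}\,(a_m^{\mathrm{II}} - b_m^{\mathrm{II}})\,\mathbf{h}_m^{\mathrm{II}}
\quad \text{on } S .
```

Projecting these conditions onto each modal basis yields the coupling integrals and, from them, the generalized scattering matrix of the step; cascading the matrices of successive steps models the corrugated profile.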
Abstract:
This Master's Thesis is aimed at modeling active faults for estimating the seismic hazard in Haiti. A zoned probabilistic method has been used, in both its classical and hybrid forms, considering the incorporation of active faults as independent units in the seismic hazard calculation. In this case, the seismic moment rate is divided between the faults and the seismogenic zone of the same region. The faults included in this study are the Septentrional, Matheux and Enriquillo faults. The results obtained by both methods are compared to determine the importance of considering the faults in the calculation. In the first instance, it was necessary to update, homogenize, analyse for completeness and clean the seismic catalog in order to obtain a catalog ready for the hazard estimation. With the seismogenic zoning defined in previous studies and the updated seismic catalog, Gutenberg-Richter recurrence relations are obtained for the shallow and deep seismicity in each zone. The attenuation models selected are those used by Benito et al. (2011), since the tectonic setting of the study area is very similar to that of Central America. They have been implemented through a logic tree in which each branch is weighted by an index based on the relevance of each combination of models. Results are presented as seismic hazard maps for return periods of 475, 975 and 2475 years, for PGA and spectral acceleration (SA) at the structural periods 0.1, 0.2, 0.5, 1.0 and 2.0 s, together with the acceleration differences between the maps obtained by the classical method and the hybrid method. The maps make clear the importance of including faults as separate units in the hazard calculation. The zoned maps present higher values in the area where the shallow and deep zones overlap. The results show that the minimum values of the zoned approach exceed those of the hybrid method, especially in areas where there are no faults.
The highest values correspond to those obtained in the fault zones by the hybrid method, showing that the contribution of the faults in this method is very important. The maximum PGA obtained is 963 gal, close to the Septentrional fault; near Matheux it is about 460 gal; and along the Enriquillo fault the PGA reaches 760 gal in the eastern segment and 730 gal in the western segment. This compares with the 240 gal obtained for this area with the zoned approach. These values are compared with those obtained by Frankel et al. (2011), with which they show great similarity in both values and morphology, in contrast to those presented by Benito et al. (2012) and by the seismic standard of the Dominican Republic.
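The Gutenberg-Richter recurrence relations mentioned above take the form log10 N = a - b·M; a minimal sketch of fitting the (a, b) parameters to cumulative earthquake counts (the catalog below is synthetic, for illustration only):

```python
import math

def gutenberg_richter_fit(mags, counts):
    """Least-squares fit of log10 N = a - b*M to cumulative counts N(M),
    returning the recurrence parameters (a, b)."""
    ys = [math.log10(n) for n in counts]
    npts = len(mags)
    xbar = sum(mags) / npts
    ybar = sum(ys) / npts
    b = -sum((x - xbar) * (y - ybar) for x, y in zip(mags, ys)) / \
        sum((x - xbar) ** 2 for x in mags)
    a = ybar + b * xbar
    return a, b

# synthetic catalog obeying log10 N = 5.0 - 1.0*M exactly
mags = [4.0, 4.5, 5.0, 5.5, 6.0]
counts = [10 ** (5.0 - 1.0 * m) for m in mags]
a, b = gutenberg_richter_fit(mags, counts)
```

In the hybrid method described above, the seismic moment rate implied by relations like this one is apportioned between the faults and the remaining zone seismicity.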