910 results for forensics behavior model
Abstract:
This research examined the extent to which the Health Belief Model (HBM) and socioeconomic variables were useful in explaining whether or not more effective contraceptive methods were used among married fecund women intending no additional births. The source of the data was the 1976 National Survey of Family Growth, conducted under the auspices of the National Center for Health Statistics. Using the HBM as a framework for multivariate analyses, limited support was found (using available measures) that the HBM components of motivation and perceived efficacy influence the likelihood of more effective contraceptive method use. Support was also found that modifying variables suggested by the HBM can influence the effects of HBM components on the likelihood of more effective method use. Socioeconomic variables were found, using all cases and some subgroups, to have a significant additional influence on the likelihood of use of more effective methods. Limited support was found for the concept that the greater the opportunity costs of an unwanted birth, the greater the likelihood of use of more effective contraceptive methods. This research supports the use of the HBM and socioeconomic variables to explain the likelihood of a protective health behavior: use of more effective contraception when no additional births are intended.
Abstract:
Tuberous sclerosis complex (TSC) is a dominant tumor suppressor disorder caused by mutations in either TSC1 or TSC2. The proteins encoded by these genes form a complex that inhibits the mammalian target of rapamycin complex 1 (mTORC1), which controls protein translation and cell growth. TSC causes substantial neuropathology, leading to autism spectrum disorders (ASDs) in up to 60% of patients. The anatomic and neurophysiologic links between these two disorders are not well understood; however, both disorders share cerebellar abnormalities. Therefore, we characterized a novel mouse model in which the Tsc2 gene was selectively deleted from cerebellar Purkinje cells (Tsc2f/-;Cre). These mice exhibit progressive Purkinje cell degeneration. Since loss of Purkinje cells is a well-reported postmortem finding in patients with ASD, we conducted a series of behavior tests to assess whether Tsc2f/-;Cre mice displayed autistic-like deficits. Using the three-chambered social choice assay, we found that Tsc2f/-;Cre mice showed behavioral deficits, exhibiting no preference between a stranger mouse and an inanimate object, or between a novel and a familiar mouse. Tsc2f/-;Cre mice also demonstrated increased repetitive behavior as assessed by marble-burying activity. Altogether, these results demonstrate that loss of Tsc2 in Purkinje cells on a haploinsufficient background leads to behavioral deficits characteristic of human autism. Therefore, Purkinje cell loss and/or dysfunction may be an important link between TSC and ASD. Additionally, we examined some of the cellular mechanisms by which mutations in Tsc2 lead to Purkinje cell death. Loss of Tsc2 led to upregulation of mTORC1 and increased cell size. As a consequence of increased protein synthesis, several cellular stress pathways were upregulated, principally altered calcium signaling, oxidative stress, and ER stress. Likely as a consequence of ER stress, ubiquitin and autophagy were also upregulated. Excitingly, treatment with an mTORC1 inhibitor, rapamycin, attenuated mTORC1 activity and prevented Purkinje cell death by reducing calcium signaling, the ER stress response, and ubiquitin. Remarkably, rapamycin treatment also reversed the social behavior deficits, thus providing a promising potential therapy for TSC-associated ASD.
Abstract:
The shape and morphology of the northern Barbados Ridge complex are largely controlled by sediment yield and failure behavior in response to the high lateral loads imposed by convergence. Loads in excess of the sediment yield strength result in nonrecoverable deformations within the wedge, and failure strength acts as an upper limit beyond which stresses are released through thrust faults. Relatively high loading rates lead to delayed consolidation and in-situ pore pressures greater than hydrostatic. The sediment yield and failure behavior is described for any stress path by a generalized constitutive model. A yield locus delineates the onset of plastic (nonrecoverable) deformation, as defined from the isotropic and anisotropic consolidation responses of high-quality 38-mm triaxial specimens; a failure envelope was obtained by shearing the same specimens in both triaxial compression and extension. The yield locus is shown to be rotated into extension space and is centered about a K-line greater than unity, suggesting that the in-situ major principal stress has rotated into the horizontal plane and that the sediment wedge is being subjected to extensional effective stress paths.
Abstract:
Social desirability and the fear of sanctions can deter survey respondents from responding truthfully to sensitive questions. Self-reports on norm-breaking behavior such as shoplifting, non-voting, or tax evasion may therefore be subject to considerable misreporting. To mitigate such misreporting, various indirect techniques for asking sensitive questions, such as the randomized response technique (RRT), have been proposed in the literature. In our study, we evaluate the viability of several variants of the RRT, including the recently proposed crosswise-model RRT, by comparing respondents’ self-reports on cheating in dice games to actual cheating behavior, thereby distinguishing between false negatives (underreporting) and false positives (overreporting). The study has been implemented as an online survey on Amazon Mechanical Turk (N = 6,505). Our results indicate that the forced-response RRT and the unrelated-question RRT, as implemented in our survey, fail to reduce the level of misreporting compared to conventional direct questioning. For the crosswise-model RRT, we do observe a reduction of false negatives (that is, an increase in the proportion of cheaters who admit having cheated). At the same time, however, there is an increase in false positives (that is, an increase in non-cheaters who falsely admit having cheated). Overall, our findings suggest that none of the implemented sensitive question techniques substantially outperforms direct questioning. Furthermore, our study demonstrates the importance of distinguishing false negatives and false positives when evaluating the validity of sensitive question techniques.
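For readers unfamiliar with the mechanics being validated, a minimal sketch of the forced-response RRT and its prevalence estimator (the forcing probabilities and the true prevalence below are illustrative, not the study's):

```python
import numpy as np

# Forced-response RRT: with probability P_YES the die forces a "yes", with
# probability P_NO it forces a "no"; otherwise the respondent answers
# truthfully. A "yes" is therefore never self-incriminating.
P_YES, P_NO = 1 / 6, 1 / 6
P_TRUTH = 1 - P_YES - P_NO

rng = np.random.default_rng(0)
n, true_prevalence = 6_505, 0.30   # sample size from the study; prevalence made up

cheated = rng.random(n) < true_prevalence
u = rng.random(n)
answers = np.where(u < P_YES, True,
                   np.where(u < P_YES + P_NO, False, cheated))

# The observed "yes" share satisfies lambda = P_YES + P_TRUTH * pi; invert for pi.
lam = answers.mean()
pi_hat = (lam - P_YES) / P_TRUTH
print(f"estimated prevalence: {pi_hat:.3f}")
```

The privacy protection comes precisely from the fact that a "yes" is not incriminating; the price is the inflated variance of the estimator, and the study can benchmark it because the actual cheating behavior in the dice games is known.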
Abstract:
In Montiel Olea and Strzalecki (2014), the authors axiomatically developed an algorithm to infer the parameters of the beta-delta model of cognitive bias (present and future bias). While this is extremely useful, it allows the implied beta to become very large when the respondent is impatient in future choices relative to present choices, i.e., when there is a strong future bias. I modify the model by further exponentiating the functional form to obtain more reasonable beta values.
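For context, the standard beta-delta (quasi-hyperbolic) discount function whose parameters the inference targets is the following; the exponentiated variant proposed in the abstract is not reproduced here:

```latex
% Quasi-hyperbolic (beta-delta) discounting: the weight on period-t utility
% is 1 at t = 0 and beta * delta^t afterwards, with beta < 1 capturing
% present bias and beta > 1 a future bias.
\[
  D(t) =
  \begin{cases}
    1, & t = 0,\\
    \beta\,\delta^{t}, & t \geq 1,
  \end{cases}
  \qquad
  U = u(c_0) + \beta \sum_{t=1}^{T} \delta^{t}\, u(c_t).
\]
```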
Abstract:
A two-dimensional finite element model of current flow in the front surface of a PV cell is presented. To validate this model, we perform an experimental test. Particular attention is then paid to the effects of non-uniform illumination in the finger direction, which is typical of a linear concentrator system. Fill factor, open-circuit voltage, and efficiency are shown to decrease with an increasing degree of non-uniform illumination. It is shown that these detrimental effects can be mitigated significantly by reoptimizing the number of front surface metallization fingers to suit the degree of non-uniformity. The behavior of current flow in the front surface of a cell operating at open-circuit voltage under non-uniform illumination is discussed in detail.
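As a reminder of the figures of merit involved, a minimal sketch of extracting fill factor, open-circuit voltage, and efficiency from an I-V curve; the ideal single-diode form and all parameter values below are hypothetical stand-ins, not the paper's finite element model:

```python
import numpy as np

# Hypothetical I-V sweep of an illuminated cell (ideal single-diode form).
I_SC, I_0, V_T = 0.9, 1e-9, 0.026        # short-circuit current [A], saturation current [A], thermal voltage [V]
V = np.linspace(0.0, 0.60, 500)
I = I_SC - I_0 * np.expm1(V / V_T)       # photocurrent minus diode current

V_OC = V[np.argmin(np.abs(I))]           # open-circuit voltage: I crosses zero
P = V * I
V_MP, I_MP = V[P.argmax()], I[P.argmax()]  # maximum power point

FF = (V_MP * I_MP) / (V_OC * I_SC)       # fill factor
area, irradiance = 25e-4, 1000.0         # hypothetical 25 cm^2 cell under 1 sun [m^2, W/m^2]
efficiency = P.max() / (irradiance * area)
print(f"Voc={V_OC:.3f} V  FF={FF:.3f}  eta={efficiency:.1%}")
```

Under non-uniform illumination, the local photocurrent and series resistance losses vary along the fingers, which is why these three quantities degrade together.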
Abstract:
Received signal strength (RSS)-based localization systems usually rely on a calibration process that aims at characterizing the propagation channel. However, due to changing environmental dynamics, the behavior of the channel may change after some time; thus, recalibration processes are necessary to maintain the positioning accuracy. This paper proposes a dynamic calibration method to initially calibrate and subsequently update the parameters of the propagation channel model using a Least Mean Squares (LMS) approach. The method assumes that each anchor node in the localization infrastructure is characterized by its own propagation channel model. In practice, a set of sniffers is used to collect RSS samples, which are used to automatically calibrate each channel model by iteratively minimizing the positioning error. The proposed method is validated through numerical simulation, showing that the positioning error of the mobile nodes is effectively reduced. Furthermore, the method has a very low computational cost; therefore, it can be used in real-time operation on wireless resource-constrained nodes.
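A minimal sketch of the LMS mechanics, assuming a standard log-distance path-loss model per anchor; note that the paper updates the parameters by minimizing positioning error, whereas this simpler illustrative variant fits the channel parameters directly to sniffer-collected RSS samples:

```python
import numpy as np

def lms_calibrate(d, rss, p0=-40.0, n=2.0, mu=0.002, epochs=200):
    """Fit the model RSS(d) = p0 - 10*n*log10(d) by least mean squares."""
    x = 10.0 * np.log10(d)             # regressor multiplying the exponent n
    for _ in range(epochs):
        for xi, yi in zip(x, rss):
            err = yi - (p0 - n * xi)   # instantaneous prediction error
            p0 += mu * err             # stochastic-gradient step on the offset
            n -= mu * err * xi         # stochastic-gradient step on the exponent
    return p0, n

# Synthetic sniffer measurements around one anchor (true p0=-38 dBm, n=2.4).
rng = np.random.default_rng(1)
d = rng.uniform(1.0, 30.0, 200)
rss = -38.0 - 10 * 2.4 * np.log10(d) + rng.normal(0.0, 2.0, d.size)
print(lms_calibrate(d, rss))           # recovered (p0, n) estimates
```

Because each sample triggers only two multiply-accumulate updates, this kind of filter is cheap enough for the resource-constrained nodes the paper targets.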
Abstract:
Membrane systems (P systems) are parallel, bio-inspired computing systems that simulate the behavior of membranes when processing information. As a branch of unconventional computing, P systems have proven effective at solving complex problems. A software technique is presented here that obtains good results when dealing with such problems: the rules application phase is studied and updated accordingly, and certain rules become candidates for elimination, which can improve the model's execution time.
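A minimal sketch of the kind of optimization described, assuming multiset-rewriting rules over a single membrane; the representation, the fixed rule order, and the elimination criterion are illustrative simplifications, not the technique of the abstract:

```python
from collections import Counter

# Each rule consumes a multiset of objects (lhs) and produces another (rhs).
rules = [
    (Counter({"a": 2}), Counter({"b": 1})),          # a a -> b
    (Counter({"b": 1, "c": 1}), Counter({"d": 1})),  # b c -> d
    (Counter({"e": 3}), Counter({"a": 1})),          # e e e -> a
]

def applicable(rule, region):
    lhs, _ = rule
    return all(region[obj] >= k for obj, k in lhs.items())

def apply_phase(region, rules):
    # Rules that cannot fire in the current configuration are eliminated up
    # front, so the application loop scans fewer candidates. Rules are tried
    # in a fixed order here; a real maximally parallel P-system step is
    # nondeterministic among competing rules.
    active = [r for r in rules if applicable(r, region)]
    produced = Counter()
    for lhs, rhs in active:
        while all(region[o] >= k for o, k in lhs.items()):  # maximal application
            region.subtract(lhs)
            produced.update(rhs)        # products only become visible next step
    region.update(produced)
    return region

print(apply_phase(Counter({"a": 5, "c": 1}), rules))
```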
Abstract:
Numerous damage models have been developed to analyze seismic behavior. Among the different possibilities in the literature, it is clear that models developed along the lines of continuum damage mechanics are more consistent with the definition of damage as a phenomenon with mechanical consequences, because they explicitly include the coupling between damage and mechanical behavior. On the other hand, in seismic processes, phenomena such as low-cycle fatigue may have a pronounced effect on the overall behavior of the frames, and their consideration therefore turns out to be very important. However, most existing models evaluate damage only as a function of the maximum amplitude of cyclic deformation, without considering the number of cycles. In this paper, a generalization of the simplified model proposed by Cipollina et al. [Cipollina A, López-Hinojosa A, Flórez-López J. Comput Struct 1995;54:1113–26] is made in order to include low-cycle fatigue. The model employs irreversible thermodynamics and internal state variable theory in its formulation.
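To make the cycle-count dependence concrete, the classical low-cycle fatigue relations take the following form; this is the generic Coffin-Manson/Miner picture, not the specific damage law of the generalized model:

```latex
% Coffin-Manson life curve and Miner-type accumulation: damage grows with
% the number of cycles n_i spent at each plastic strain amplitude, not only
% with the peak amplitude of the cyclic deformation.
\[
  \frac{\Delta\varepsilon_p}{2} \;=\; \varepsilon_f'\,(2N_f)^{c},
  \qquad
  D \;=\; \sum_i \frac{n_i}{N_{f,i}} \;\le\; 1 .
\]
```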
Abstract:
Corrosion of reinforcing steel in concrete due to chloride ingress is one of the main causes of the deterioration of reinforced concrete structures. The structures most affected by such corrosion are buildings in marine zones and structures exposed to de-icing salts, such as highways and bridges. This process is accompanied by an increase in the volume of the corrosion products at the rebar-concrete interface. Depending on the level of oxidation, iron can expand to as much as six times its original volume. This increase in volume exerts tensile stresses in the surrounding concrete, which result in cracking and spalling of the concrete cover once the concrete tensile strength is exceeded. The mechanism by which steel embedded in concrete corrodes in the presence of chloride is the local breakdown of the passive layer formed under the highly alkaline conditions of the concrete. It is assumed that corrosion initiates when a critical chloride content reaches the rebar surface. The mathematical formulation idealizes the corrosion sequence as a two-stage process: an initiation stage, during which chloride ions penetrate to the reinforcing steel surface and depassivate it, and a propagation stage, in which active corrosion takes place until cracking of the concrete cover occurs. The aim of this research is to develop computer tools to evaluate the duration of the service life of reinforced concrete structures, considering both the initiation and propagation periods. Such tools must offer a friendly interface that facilitates their use by researchers whose background is not in numerical simulation. For the evaluation of the initiation period, different tools have been developed:
• Program TavProbabilidade: provides the means to carry out a probability analysis of a chloride ingress model. Such a tool is necessary due to the lack of data and the general uncertainties associated with the phenomenon of chloride diffusion. It differs from the deterministic approach in that it computes not just a chloride profile at a certain age, but a range of chloride profiles, each with a probability of occurrence.
• Program TavProbabilidade_Fiabilidade: carries out reliability analyses of the initiation period. It takes into account the critical value of the chloride concentration at the steel that causes breakdown of the passive layer and the beginning of the propagation stage. It differs from a deterministic analysis in that it does not predict whether or not corrosion is going to begin, but quantifies the probability of corrosion initiation.
• Program TavDif_1D: was created to perform a one-dimensional deterministic analysis of the chloride diffusion process by the finite element method (FEM), which numerically solves Fick's second law. Despite the various 1D FEM solvers already available, the decision to create a new code (TavDif_1D) was taken because of the need for a solver with a friendly interface for pre- and post-processing, according to the needs of IETCC.
An innovative tool was also developed, with a systematic method devised to compare the ability of the different 1D models to predict the actual evolution of chloride ingress based on experimental measurements, and also to quantify the degree of agreement of the models with each other. For the evaluation of the entire service life of the structure, a computer program has been developed using the finite element method to couple both service life periods, initiation and propagation.
The 2D program (TavDif_2D) allows the complementary use of two external programs through a single friendly interface:
• GMSH – a finite element mesh generator and post-processing viewer
• OOFEM – a finite element solver
TavDif_2D is responsible for deciding, at each time step, when and where to start applying the boundary conditions of the fracture mechanics module, as a function of the chloride concentration and the corrosion parameters (Icorr, etc.). It is also responsible for verifying the presence and degree of fracture in each element, in order to pass on the variation of the diffusion coefficient with crack width. The advantages of the FEM with the interface provided by the tool are:
• the flexibility to input data, such as material properties and boundary conditions, as time-dependent functions;
• the flexibility to predict the chloride concentration profile for different geometries;
• the possibility to couple chloride diffusion (initiation stage) with chemical and mechanical behavior (propagation stage).
The OOFEM code had to be modified to accept temperature, humidity, and time-dependent values for the material properties, which is necessary to adequately describe the environmental variations. A 3D simulation was performed to simulate the behavior of a beam under both the action of the external load and the internal load caused by the corrosion products, using embedded-fracture elements, in order to plot the deflection of the central region of the beam versus the external load and compare it with the experimental data.
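A minimal sketch of what a 1D solver of this kind computes, assuming a constant diffusion coefficient and a constant surface chloride concentration; an explicit finite-difference scheme stands in for the FEM used by TavDif_1D, and all parameter values are illustrative:

```python
import numpy as np

# Fick's second law, dC/dt = D * d2C/dx2, for chloride ingress into the cover.
D = 1e-12            # diffusion coefficient [m^2/s], illustrative
C_S = 0.5            # surface chloride concentration [% binder mass], illustrative
L, nx = 0.10, 101    # 10 cm domain, number of grid points
dx = L / (nx - 1)
dt = 0.4 * dx**2 / D           # within the explicit stability limit dt <= dx^2/(2D)

C = np.zeros(nx)
C[0] = C_S                     # Dirichlet boundary: chloride source at the exposed face
years = 10
steps = int(years * 365.25 * 86400 / dt)
for _ in range(steps):
    C[1:-1] += D * dt / dx**2 * (C[2:] - 2 * C[1:-1] + C[:-2])

# Depth down to which the critical chloride content (illustrative) is reached.
C_CRIT = 0.05
depth = dx * np.argmax(C < C_CRIT)
print(f"after {years} y, C >= {C_CRIT} down to ~{depth * 100:.1f} cm")
```

Corrosion is taken to initiate once the profile at the rebar depth exceeds the critical content, which is exactly the handover point from the initiation tools to the propagation module described above.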
Abstract:
We consider non-negative solutions of a chemotaxis system with a non-constant chemotaxis sensitivity function χ. This system appears as a limit case of a model for morphogenesis proposed by Bollenbach et al. (Phys. Rev. E 75, 2007). Under suitable boundary conditions, modeling the presence of a morphogen source at x = 0, we prove the existence of a global and bounded weak solution using an approximation by problems in which diffusion is introduced into the ordinary differential equation. Moreover, we prove convergence of the solution to the unique steady state provided that ? is small and ? is large enough. Numerical simulations both illustrate these results and give rise to further conjectures on the solution behavior that go beyond the rigorously proved statements.
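The abstract does not reproduce the equations; a prototypical parabolic-ODE chemotaxis system of the type described (one space dimension, sensitivity χ, no diffusion in the second equation) would read as follows, where f is an illustrative reaction term rather than the paper's:

```latex
% A prototypical parabolic-ODE chemotaxis system on (0, L): u diffuses and
% drifts along gradients of v; v evolves without diffusion, which is why the
% existence proof approximates it by adding a small diffusion term.
\[
  u_t = \bigl(u_x - u\,\chi(v)\,v_x\bigr)_x, \qquad
  v_t = f(u, v), \qquad x \in (0, L),\ t > 0,
\]
with a flux condition at $x = 0$ modeling the morphogen source.
```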
Abstract:
Probabilistic modeling is the defining characteristic of estimation of distribution algorithms (EDAs), determining their behavior and performance in optimization. Regularization is a well-known statistical technique used for obtaining an improved model by reducing the generalization error of estimation, especially in high-dimensional problems. ℓ1-regularization is a type of this technique with the appealing variable selection property, which results in sparse model estimations. In this thesis, we study the use of regularization techniques for model learning in EDAs. Several methods for regularized model estimation in continuous domains based on a Gaussian distribution assumption are presented, and analyzed from different aspects when used for optimization in a high-dimensional setting, where the population size of the EDA scales logarithmically with the number of variables. The optimization results obtained for a number of continuous problems with an increasing number of variables show that the proposed EDA based on regularized model estimation performs more robust optimization, and is able to achieve significantly better results for larger dimensions than other Gaussian-based EDAs. We also propose a method for learning a marginally factorized Gaussian Markov random field model using regularization techniques and a clustering algorithm. The experimental results show notable optimization performance on continuous additively decomposable problems when using this model estimation method. Our study also covers multi-objective optimization, and we propose joint probabilistic modeling of variables and objectives in EDAs based on Bayesian networks, specifically models inspired by multi-dimensional Bayesian network classifiers. It is shown that with this approach to modeling, two new types of relationships are encoded in the estimated models in addition to the variable relationships captured in other EDAs: objective-variable and objective-objective relationships. An extensive experimental study shows the effectiveness of this approach for multi- and many-objective optimization. With the proposed joint variable-objective modeling, in addition to the Pareto set approximation, the algorithm is also able to obtain an estimation of the multi-objective problem structure. Finally, the study of multi-objective optimization based on joint probabilistic modeling is extended to noisy domains, where the noise in objective values is represented by intervals. A new version of the Pareto dominance relation for ordering the solutions in these problems, namely α-degree Pareto dominance, is introduced and its properties are analyzed. We show that ranking methods based on this dominance relation can result in competitive performance of EDAs with respect to the quality of the approximated Pareto sets. This dominance relation is then used together with a method for joint probabilistic modeling based on ℓ1-regularization for multi-objective feature subset selection in classification, where six different measures of accuracy are considered as objectives with interval values. The individual assessment of the proposed joint probabilistic modeling and solution ranking methods on datasets with small-medium dimensionality, when using two different Bayesian classifiers, shows that comparable or better Pareto sets of feature subsets are approximated in comparison to standard methods.
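A minimal sketch of the core loop in the continuous case, using scikit-learn's GraphicalLasso as a stand-in ℓ1-regularized Gaussian estimator; the thesis's own estimation methods, population sizing, and benchmarks are not reproduced:

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

def sphere(x):                         # illustrative objective to minimize
    return np.sum(x**2, axis=1)

d, pop, elite = 20, 80, 40             # illustrative sizes, not the thesis setup
rng = np.random.default_rng(0)
X = rng.uniform(-5.0, 5.0, (pop, d))   # initial population

for gen in range(50):
    top = X[np.argsort(sphere(X))[:elite]]       # truncation selection
    model = GraphicalLasso(alpha=0.1).fit(top)   # sparse (l1-penalized) Gaussian
    X = rng.multivariate_normal(model.location_, model.covariance_, size=pop)

print("best fitness:", sphere(X).min())
```

The ℓ1 penalty zeroes out spurious entries of the precision matrix, which is what keeps the model estimable when the selected sample is small relative to the number of variables.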
Abstract:
During the last years, cities around the world have invested substantial sums in measures for reducing congestion and car trips, investments which are potential solutions to the well-known urban sprawl phenomenon, also called the “development trap,” that leads to further congestion and a higher proportion of our time spent in slow-moving cars. Along this search for solutions, the complex relationship between the urban environment and travel behaviour has been studied in a number of cases. The main question under discussion is: how to encourage multi-stop tours? The objective of this paper is thus to verify whether unobserved factors influence tour complexity. For this purpose, we use a database from a survey conducted in 2006-2007 in Madrid, a suitable case study for analyzing urban sprawl due to new urban developments and substantial changes in mobility patterns in recent years. A total of 943 individuals were interviewed from 3 selected neighbourhoods (CBD, urban and suburban). We study the effect of unobserved factors on trip frequency. This paper presents the estimation of a hybrid model where the latent variable is called propensity to travel and the discrete choice model comprises 5 alternatives of tour type. The results show that characteristics of the neighbourhoods in Madrid are important in explaining trip frequency. The influence of land use variables on trip generation is clear, in particular the presence of commercial retail. Through the estimation of elasticities and forecasting, we determine to what extent land-use policy measures modify travel demand. Comparing aggregate elasticities with percentage variations, it can be seen that percentage variations could lead to inconsistent results. The results show that hybrid models explain travel behavior better than traditional discrete choice models.
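A minimal sketch of how a hybrid choice model of this shape evaluates choice probabilities, assuming a single latent "propensity to travel" entering the utilities of 5 tour-type alternatives; all variable names and coefficient values are illustrative, not the paper's estimates:

```python
import numpy as np

def mnl_probs(V):
    """Multinomial logit choice probabilities from utilities V (n x J)."""
    e = np.exp(V - V.max(axis=1, keepdims=True))   # numerically stable softmax
    return e / e.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
n, J = 943, 5                      # respondents and tour-type alternatives
x = rng.normal(size=(n, 2))        # observed covariates (e.g. land use, retail)

# Structural equation: latent propensity to travel as a linear function of
# the covariates plus a normal disturbance.
gamma = np.array([0.8, -0.3])
eta = x @ gamma + rng.normal(size=n)

# Measurement/choice side: alternative-specific constants plus a loading of
# each tour type on the latent propensity.
asc = np.array([0.0, 0.2, -0.1, 0.5, -0.4])
lam = np.array([0.0, 0.3, 0.6, 0.9, 1.2])
V = asc + np.outer(eta, lam)

P = mnl_probs(V)
print(P[0], P[0].sum())            # choice probabilities for respondent 1
```

In actual estimation the latent propensity is not observed: it is integrated out by simulation, and the structural and choice equations are estimated jointly, which is what lets the hybrid model capture the unobserved factors the paper investigates.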