918 results for Software testing. Problem-oriented programming. Teaching methodology


Relevance:

30.00%

Publisher:

Abstract:

We develop a model where a free genetic test reveals whether the individual tested has a low or high probability of developing a disease. A costly prevention effort allows high-risk agents to decrease the probability of developing the disease. Agents are not obliged to take the test, but must disclose its results to insurers. Insurers offer separating contracts which take into account the individual risk, so that taking the test is associated with a discrimination risk. We study the individual decisions to take the test and to undertake the prevention effort as a function of the effort cost and of its efficiency. We obtain that, if effort is observable by insurers, agents undertake the test only if the effort cost is neither too large nor too low. If the effort cost is not observable by insurers, they face a moral hazard problem which induces them to under-provide insurance. We obtain the counterintuitive result that moral hazard increases the value of the test if the effort cost is low enough. Also, agents may perform the test for lower levels of prevention efficiency when effort is not observable.
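
A minimal illustrative formalization of the two decisions described above (the notation and functional form are ours, not the paper's): a high-risk agent chooses a prevention effort and then compares expected utility with and without taking the test.

```latex
% Illustrative sketch only; notation is ours and the paper's model may differ.
% p_H: disease probability for a high-risk agent without prevention
% \Delta(e): reduction achieved by effort e, at cost c(e)
% W_s, W_h: wealth in the sick / healthy state under the relevant insurance contract
\[
  e^{*} \;\in\; \arg\max_{e \ge 0}\;
  \big(p_H - \Delta(e)\big)\,U(W_{s})
  + \big(1 - p_H + \Delta(e)\big)\,U(W_{h}) - c(e),
\]
\[
  \text{take the test} \iff
  \mathbb{E}\!\left[U \mid \text{test, disclosure}\right] \;\ge\;
  \mathbb{E}\!\left[U \mid \text{no test}\right].
\]
```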

Relevance:

30.00%

Publisher:

Abstract:

Geographic Information Systems (GIS) are a valid tool for the study of ancient landscapes. GIS can be configured as a set of analytical means useful for understanding the spatial dimension of social formations and their historical dynamics. In other words, GIS allow a valid approach to the rationality of the spatial behaviour of a community and to the global patterns of a society that are imprinted in the morphology of a landscape. Given the abundant and growing supply of software packages that process and analyse spatial information, we focus on the advantages of adopting free and open-source solutions for the archaeological investigation of landscapes. As an example we present cost-distance modelling applied to an archaeological locational problem: the evaluation of the location of settlements with respect to the resources available in their surroundings. The experimental approach has been applied to the hillfort settlement of the La Cabrera district (León). A detailed description will be presented of how to create isochrone bands based on the calculation of the anisotropic costs inherent to pedestrian locomotion, as well as the advantage of adopting GRASS GIS for the implementation of the analysis.
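
As a hedged sketch of how such an analysis might be scripted in GRASS GIS (the modules r.walk and r.recode exist in current GRASS releases, but the raster names, coordinates and band widths below are assumptions, not taken from the paper):

```python
# Sketch: anisotropic walking-cost surface and isochrone bands in GRASS GIS.
# Assumes an active GRASS session with an elevation raster ('dem'), a friction
# raster ('friction') and a hillfort site at the given (hypothetical) coordinates.
import grass.script as gs

# Cumulative walking time (seconds) from the site; r.walk treats slope
# anisotropically (uphill and downhill costs differ), modelling pedestrian locomotion.
gs.run_command(
    "r.walk",
    elevation="dem",
    friction="friction",
    output="walk_cost",
    start_coordinates=(712500, 4690000),  # hypothetical site location
    overwrite=True,
)

# Recode the (floating-point) cumulative cost into 30-minute isochrone bands.
rules = "\n".join(
    f"{lo}:{lo + 1800}:{i + 1}" for i, lo in enumerate(range(0, 7200, 1800))
)
gs.write_command(
    "r.recode", input="walk_cost", output="isochrones",
    rules="-", stdin=rules, overwrite=True,
)
```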

Relevance:

30.00%

Publisher:

Abstract:

"Student’s Watcher” is a small Web application which wants to show in a visual, simple and fast way, the evolution of the students. The main project table displays such things as marks and comments about students. We can add a comment for each mark to explain why this mark. The objective is to be able to know if some student has a problem, how is going his year, marks in other courses, or even, to know if he has a bad week in a different subjects. We can see the evolution of students in past years to do an objective comparison. It also allows inserting global comments of student, we have a list of these, and all professors can add new ones, where we can see more general valuations. “Student’s Watcher” was begun in ASP.net, but finally my project would be developed in PHP, HTML and CSS. This project wants to be a comparison between two of most important languages used nowadays, ASPX and PHP

Relevance:

30.00%

Publisher:

Abstract:

Network management is a very broad field and includes many different aspects. This doctoral thesis focuses on resource management in broadband networks that provide mechanisms for resource reservation, such as Asynchronous Transfer Mode (ATM) or Multi-Protocol Label Switching (MPLS). Logical networks can be established using ATM Virtual Paths (VP) or MPLS Label Switched Paths (LSP), which we generically call logical paths. Network users then use these logical paths, which can have resources assigned to them, to establish their communications. Moreover, logical paths are very flexible and their characteristics can be changed dynamically. This work focuses, in particular, on the dynamic management of this logical network in order to maximise its performance and adapt it to the offered connections. In this scenario, there are several mechanisms that can affect and modify the characteristics of the logical paths (bandwidth, route, etc.). These mechanisms include load balancing (bandwidth reallocation and rerouting) and fault restoration (the use of backup logical paths). Both mechanisms can modify the logical network and manage the resources (bandwidth) of the physical links, so there is a need to coordinate them in order to avoid possible interferences. Conventional resource management based on the logical network recalculates the whole logical network periodically (for instance every hour or every day) in a centralised way. This introduces the problem that readjustments of the logical network are not performed at the moment when the problems actually occur; it also introduces the need to maintain a centralised view of the whole network. In this thesis, a distributed architecture based on a multi-agent system is proposed. The main objective of this architecture is to perform, jointly and in a coordinated way, resource management at the logical network level, integrating the bandwidth readjustment mechanisms with the pre-planned restoration mechanisms, including the management of the bandwidth reserved for restoration. It is proposed that this management be carried out continuously rather than periodically, acting when a problem is detected (when a logical path is congested, i.e. when it is rejecting user connection requests because it is saturated) and in a completely distributed way, i.e. without maintaining a global view of the network. Thus, the proposed architecture performs small rearrangements of the logical network, adapting it continuously to user demand. The proposed architecture also takes into consideration other objectives such as scalability, modularity, robustness, flexibility and simplicity. The proposed multi-agent system is structured in two layers of agents: monitoring (M) agents and performance (P) agents. These agents are located at the different nodes of the network: there is one P agent and several M agents at each node, the latter subordinated to the P agent. The proposed architecture can therefore be seen as a hierarchy of agents. Each agent is responsible for monitoring and controlling the resources to which it is assigned. Different experiments have been carried out using a connection-level distributed simulator that we developed ourselves.
The results show that the proposed architecture is able to perform its assigned tasks of congestion detection, dynamic bandwidth reallocation and rerouting in a way that is coordinated with the pre-planned restoration mechanisms and with the management of the bandwidth reserved for restoration. The distributed architecture offers acceptable scalability and robustness thanks to its flexibility and modularity.
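
A minimal, hypothetical sketch of the monitoring/performance agent split described above (class names, thresholds and the reaction rule are illustrative assumptions, not the thesis implementation):

```python
# Monitoring agents watch individual logical paths; the performance agent at the
# same node reacts locally when congestion is detected. Illustrative sketch only.
from dataclasses import dataclass, field


@dataclass
class MonitoringAgent:
    """Watches one logical path (LSP/VP) and reports congestion."""
    path_id: str
    capacity_mbps: float
    rejected: int = 0
    offered: int = 0

    def observe(self, accepted: bool) -> None:
        self.offered += 1
        if not accepted:
            self.rejected += 1

    def congested(self, reject_ratio: float = 0.05) -> bool:
        return self.offered > 0 and self.rejected / self.offered > reject_ratio


@dataclass
class PerformanceAgent:
    """One per node; coordinates the local reactions of its monitoring agents."""
    node_id: str
    monitors: list = field(default_factory=list)
    spare_mbps: float = 0.0   # unreserved bandwidth on the node's outgoing link

    def step(self) -> None:
        for m in self.monitors:
            if not m.congested():
                continue
            # First try a local bandwidth increase; only if that is not possible
            # would rerouting (not sketched here) be attempted.
            if self.spare_mbps >= 10.0:
                m.capacity_mbps += 10.0
                self.spare_mbps -= 10.0
                m.rejected = m.offered = 0


m = MonitoringAgent("LSP-1", capacity_mbps=100.0)
p = PerformanceAgent("node-A", monitors=[m], spare_mbps=50.0)
for accepted in [True, False, False, True, False]:   # simulate rejected requests
    m.observe(accepted)
p.step()
print(m.capacity_mbps, p.spare_mbps)   # 110.0 40.0
```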

Relevance:

30.00%

Publisher:

Abstract:

The aim of this thesis is to narrow the gap between two different control techniques: continuous control and discrete event control (DES). This gap can be reduced by the study of hybrid systems, since the majority of large-scale systems can be interpreted as hybrid systems. In particular, when looking deeply into a process, it is often possible to identify interaction between discrete and continuous signals. Hybrid systems are systems that have both continuous and discrete signals. Continuous signals are generally assumed to be continuous and differentiable in time, whereas discrete signals are neither continuous nor differentiable in time due to their abrupt changes. Continuous signals often represent the measurement of natural physical magnitudes such as temperature or pressure. Discrete signals are normally artificial signals operated through human artefacts, such as current, voltage or light. Typical processes modelled as hybrid systems are production systems, chemical processes, or continuous production in which time and continuous measures interact with transport and stock inventory systems. Complex systems such as manufacturing lines are hybrid in a global sense; they can be decomposed into several subsystems and their links. Another motivation for the study of hybrid systems is the set of tools developed in other research domains. These tools benefit from the use of temporal logic for the analysis of several properties of hybrid system models, and use it to design systems and controllers which satisfy physical or imposed restrictions. This thesis is focused on particular types of systems with discrete and continuous signals in interaction, which can model hard non-linearities, such as hysteresis, jumps in the state, limit cycles, etc., and their possible non-deterministic future behaviour, expressed by an interpretable model description. The hybrid systems treated in this work are systems with several discrete states, always fewer than thirty (beyond this the problem can become NP-hard), and continuous dynamics evolving according to a linear expression with constant vectors or matrices Ki ∈ Rn acting on the state vector X; in several states the continuous evolution can have Ki = 0. In this formulation, the mathematics can express time-invariant linear systems. By using this expression for a local part, the combination of several local linear models makes it possible to represent non-linear systems, and through the interaction with the discrete events of the system the model can compose non-linear hybrid systems. Multistage processes with fast continuous dynamics in particular are well represented by the proposed methodology. State vectors with more than two components, such as third-order models or higher, are well approximated by the proposed approach. Flexible belt transmissions, chemical reactions with initial start-up and mobile robots with significant friction are examples of physical systems which profit from the benefits (accuracy) of the proposed methodology. The motivation of this thesis is to obtain a solution that can control and drive a hybrid system from its origin or starting point to a goal. How to obtain this solution, and which is the best solution in terms of a cost function subject to the physical restrictions and control actions, is analysed. Hybrid systems that have several possible states, different ways to drive the system to the goal and different continuous control signals are the problems that motivate this research.
The requirements for the systems on which we work are: a model that can represent the behaviour of non-linear systems and that enables the prediction of the possible future behaviour of the model, in order to apply a supervisor which decides the optimal and safe action to drive the system toward the goal. Specific problems that can be addressed by the use of this kind of hybrid model are: the unity of the order of the models; controlling the system along a reachable path; controlling the system along a safe path; optimising the cost function; and modularity of the control. The proposed model solves the specified problems regarding the switching between models, the calculation of initial conditions and the unity of the order of the models. Continuous and discrete phenomena are represented in linear hybrid models, defined by an eight-tuple of parameters so as to model different types of hybrid phenomena. Applying a transformation over the state vector, for an LTI system we obtain from a two-dimensional state space a single parameter, alpha, which still maintains the dynamical information. Combining this parameter with the system output, a complete description of the system is obtained in the form of a graph in polar representation. A Takagi-Sugeno type III fuzzy model, which includes a linear time-invariant (LTI) model for each local model, is used; the fuzzification of the different LTI local models gives as a result a non-linear time-invariant model. In our case the output and the alpha measure govern the membership functions. Hybrid systems control is a huge task: the process needs to be guided from the starting point to the desired end point, passing through different specific states and points of the trajectory. The system can be structured in different levels of abstraction, and the control of hybrid systems, from planning the process to producing the actions, can be structured in three layers: the planning, process and control layers. In this case the algorithms will be applied to robotics (a domain where improvements are well accepted), where it is expected to find simple repetitive processes for which the extra effort in complexity can be compensated by some cost reductions. It may also be interesting to apply some control optimisation to processes such as fuel injection, DC-DC converters, etc. In order to apply the Ramadge-Wonham (RW) theory of discrete event systems to a hybrid system, we must abstract the continuous signals and project the events generated by these signals, to obtain new sets of observable and controllable events. Ramadge and Wonham's theory, along with the TCT software, gives a controllable sublanguage of the legal language generated by a Discrete Event System (DES). Continuous abstraction transforms predicates over continuous variables into controllable or uncontrollable events, and modifies the sets of uncontrollable, controllable, observable and unobservable events. Continuous signals produce virtual events in the system when they cross the bound limits. If such an event is deterministic, it can be projected. It is necessary to determine the controllability of this event in order to assign it to the corresponding set of controllable, uncontrollable, observable or unobservable events. Finding optimal trajectories that minimise some cost function is the goal of the modelling procedure. A mathematical model of the system allows the user to apply mathematical techniques to it: to minimise a specific cost function, to obtain optimal controllers, and to approximate a specific trajectory.
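
For reference, a standard Takagi-Sugeno blending of LTI local models takes the following form (this is the textbook formulation, shown only to make the idea of fuzzified local linear models concrete; the thesis's exact membership functions over the output and alpha are not reproduced here):

```latex
% Standard Takagi-Sugeno combination of r LTI local models (textbook form).
\[
  \dot{x}(t) \;=\; \sum_{i=1}^{r} h_i\big(z(t)\big)\,\big(A_i\,x(t) + B_i\,u(t)\big),
  \qquad
  h_i\big(z(t)\big) \;=\; \frac{\mu_i\big(z(t)\big)}{\sum_{j=1}^{r}\mu_j\big(z(t)\big)},
\]
where $z(t)$ collects the premise variables (here, the system output and the
$\alpha$ measure), $\mu_i$ are the membership functions, and $h_i \ge 0$ with
$\sum_i h_i = 1$.
```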
The combination of dynamic programming with Bellman's principle of optimality gives us a procedure to solve the minimum-time trajectory problem for hybrid systems. The problem is harder when there is interaction between adjacent states. In hybrid systems the problem is to determine the partial set points to be applied to the local models. An optimal controller can be implemented in each local model in order to ensure the minimisation of the local costs. The solution of this problem needs to give us the trajectory the system should follow: a trajectory marked by a set of set points which force the system to pass over them. Several ways are possible to drive the system from the starting point Xi to the end point Xf. Different ways are interesting from different viewpoints: dynamics, minimum number of states, approximation of the set points, etc. These ways need to be safe, viable and reachable, and only one of them must be applied, normally the best one, i.e. the one which minimises the proposed cost function. A reachable way, meaning a way that is both controllable and safe, will be evaluated in order to find the one that minimises the cost function. The contribution of this work is a complete framework for working with the majority of hybrid systems; the procedures to model, control and supervise them are defined and explained, and their use is demonstrated. The procedure to model the systems to be analysed for automatic verification is also explained. Great improvements were obtained by using this methodology in comparison with other piecewise linear approximations, and it is demonstrated that in particular cases this methodology provides the best approximation. The most important contribution of this work is the alpha approximation for non-linear systems with fast dynamics; while this kind of process is not typical, in such cases the alpha approximation is the best linear approximation to use and gives a compact representation.
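
As a hedged illustration of the dynamic-programming step, the sketch below runs a generic Bellman-style value iteration over an abstracted graph of discrete states; the state names and transit times are placeholders, and this is not the thesis's implementation.

```python
# Generic minimum-time dynamic programming over an abstracted hybrid automaton:
# nodes are discrete states, edge weights are (assumed known) minimum transit
# times under the local continuous dynamics. Illustrative sketch only.
import math

def min_time_to_goal(transitions: dict, goal: str) -> dict:
    """transitions[s] = list of (next_state, transit_time). Returns the value function."""
    states = set(transitions) | {goal}
    value = {s: math.inf for s in states}
    value[goal] = 0.0
    # Bellman sweeps (no negative times, so it converges in <= |states| passes).
    for _ in range(len(states)):
        for s, edges in transitions.items():
            for nxt, t in edges:
                value[s] = min(value[s], t + value[nxt])
    return value

# Example with placeholder states and times.
graph = {
    "start": [("heat", 2.0), ("fill", 3.5)],
    "heat":  [("react", 4.0)],
    "fill":  [("react", 1.5)],
    "react": [("goal", 2.0)],
}
print(min_time_to_goal(graph, "goal"))   # {'start': 7.0, 'heat': 6.0, ...}
```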

Relevance:

30.00%

Publisher:

Abstract:

The main contribution of this thesis is the proposal of a graded BDI agent model (g-BDI) that allows the specification of an agent architecture able to represent and reason with graded mental attitudes. We consider that a more flexible BDI architecture will make it possible to develop agents that achieve better performance in uncertain and dynamic environments, in the service of other agents (human or not) that may have a set of graded motivations. In the g-BDI model, the agent's graded attitudes have an explicit and adequate representation. Degrees in beliefs represent the extent to which the agent believes that a formula is true; degrees in positive or negative desires allow the agent to set, respectively, different levels of preference or rejection. Graduation in intentions also gives a measure of preference, but in this case it models the cost/benefit that achieving a goal brings to the agent. Then, from the representation and interaction of these graded attitudes, agents showing different types of behaviour can be modelled. The formalisation of the g-BDI model is based on multi-context systems. Different many-valued modal logics have been proposed to represent and reason about beliefs, desires and intentions, presenting in each case a complete and consistent axiomatisation. To deal with the operational semantics of the agent model, a calculus for the execution of multi-context systems, called Multi-context calculus, was first defined. Then, by means of this calculus, the g-BDI model has been given a computational semantics. Furthermore, a methodology for the engineering of g-BDI agents in a multi-agent scenario has been presented. The purpose of this proposal is to guide the design of multi-agent systems starting from a real-world problem. Through the development of a tourism recommender system as a case study, in which the recommender agent has a g-BDI architecture, it has been shown that this model is valuable for designing and implementing concrete agents. Finally, using this case study, experiments have been carried out on the flexibility and performance of the g-BDI agent model, showing that it is useful for developing agents that exhibit diverse behaviours. It has also been shown that the results obtained with these recommender agents modelled with graded attitudes are better than those achieved by agents with non-graded attitudes.
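
A purely illustrative sketch of what graded attitudes might look like in code (the names, the [0, 1] scales and the selection rule below are our assumptions; the actual g-BDI model is formalised with many-valued modal logics over multi-context systems, not with this code):

```python
# Illustrative sketch only: graded beliefs/desires/intentions as numbers in [0, 1]
# and a naive intention-selection rule, loosely inspired by the description above.
from dataclasses import dataclass

@dataclass
class GradedAttitudes:
    belief: dict       # formula -> degree of belief in [0, 1]
    desire_pos: dict   # goal -> degree of positive desire
    desire_neg: dict   # goal -> degree of rejection
    cost: dict         # goal -> normalised cost of achieving it

    def intention_degree(self, goal: str, enabling_formula: str) -> float:
        """Benefit weighted by how strongly the agent believes the goal is reachable,
        discounted by rejection and cost (one possible, assumed trade-off)."""
        benefit = self.desire_pos.get(goal, 0.0) - self.desire_neg.get(goal, 0.0)
        reachability = self.belief.get(enabling_formula, 0.0)
        return max(0.0, benefit * reachability - self.cost.get(goal, 0.0))

attitudes = GradedAttitudes(
    belief={"good_weather_in(Girona)": 0.8},
    desire_pos={"visit(Girona)": 0.9},
    desire_neg={"visit(Girona)": 0.1},
    cost={"visit(Girona)": 0.3},
)
print(attitudes.intention_degree("visit(Girona)", "good_weather_in(Girona)"))  # ~0.34
```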

Relevance:

30.00%

Publisher:

Abstract:

A recent area for investigation into the development of adaptable robot control is the use of living neuronal networks to control a mobile robot. The so-called Animat paradigm comprises a neuronal network (the ‘brain’) connected to an external embodiment (in this case a mobile robot), facilitating potentially robust, adaptable robot control and increased understanding of neural processes. Sensory input from the robot is provided to the neuronal network via stimulation on a number of electrodes embedded in a specialist Petri dish (Multi Electrode Array (MEA)); accurate control of this stimulation is vital. We present software tools allowing precise, near real-time control of electrical stimulation on MEAs, with fast switching between electrodes and the application of custom stimulus waveforms. These Linux-based tools are compatible with the widely used MEABench data acquisition system. Benefits include rapid stimulus modulation in response to neuronal activity (closed loop) and batch processing of stimulation protocols.
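
A purely illustrative, simulated sketch of the closed-loop idea described above (none of the class or function names below correspond to the actual tools or to the MEABench API; the recording/stimulation backend is replaced by a random stand-in so the snippet runs on its own):

```python
# Illustrative closed-loop sketch with a simulated backend. The real tools talk
# to MEA hardware and MEABench, which are not modelled here.
import random
import time

class SimulatedMEA:
    """Stand-in for a stimulation/recording interface (invented for illustration)."""
    def read_firing_rates(self, n_electrodes: int = 60) -> list:
        return [random.expovariate(1 / 2.0) for _ in range(n_electrodes)]  # spikes/s
    def stimulate(self, electrode: int) -> None:
        print(f"stimulating electrode {electrode}")

mea = SimulatedMEA()
THRESHOLD_HZ = 5.0        # assumed activity threshold
RESPONSE_ELECTRODE = 42   # assumed stimulation site

for _ in range(100):                       # closed-loop iterations
    rates = mea.read_firing_rates()
    if max(rates) > THRESHOLD_HZ:          # modulate stimulation in response to activity
        mea.stimulate(RESPONSE_ELECTRODE)
    time.sleep(0.01)
```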

Relevance:

30.00%

Publisher:

Abstract:

Airborne scanning laser altimetry (LiDAR) is an important new data source for river flood modelling. LiDAR can give dense and accurate DTMs of floodplains for use as model bathymetry. Spatial resolutions of 0.5m or less are possible, with a height accuracy of 0.15m. LiDAR gives a Digital Surface Model (DSM), so vegetation removal software (e.g. TERRASCAN) must be used to obtain a DTM. An example used to illustrate the current state of the art will be the LiDAR data provided by the EA, which has been processed by their in-house software to convert the raw data to a ground DTM and a separate vegetation height map. Their method distinguishes trees from buildings on the basis of object size. EA data products include the DTM with or without buildings removed, a vegetation height map, a DTM with bridges removed, etc. Most vegetation removal software ignores short vegetation, less than say 1m high. We have attempted to extend vegetation height measurement to short vegetation using local height texture. Typically most of a floodplain may be covered in such vegetation. The idea is to assign friction coefficients depending on local vegetation height, so that friction is spatially varying. This obviates the need to calibrate a global floodplain friction coefficient. It is not clear at present if the method is useful, but it is worth testing further. The LiDAR DTM is usually determined by looking for local minima in the raw data, then interpolating between these to form a space-filling height surface. This is a low-pass filtering operation, in which objects of high spatial frequency such as buildings, river embankments and walls may be incorrectly classed as vegetation. The problem is particularly acute in urban areas. A solution may be to apply pattern recognition techniques to LiDAR height data fused with other data types such as LiDAR intensity or multispectral CASI data. We are attempting to use digital map data (Mastermap structured topography data) to help distinguish buildings from trees, and roads from areas of short vegetation. The problems involved in doing this will be discussed. A related problem of how best to merge historic river cross-section data with a LiDAR DTM will also be considered. LiDAR data may also be used to help generate a finite element mesh. In rural areas we have decomposed a floodplain mesh according to taller vegetation features such as hedges and trees, so that e.g. hedge elements can be assigned higher friction coefficients than those in adjacent fields. We are attempting to extend this approach to urban areas, so that the mesh is decomposed in the vicinity of buildings, roads, etc. as well as trees and hedges. A dominant points algorithm is used to identify points of high curvature on a building or road, which act as initial nodes in the meshing process. A difficulty is that the resulting mesh may contain a very large number of nodes. However, the mesh generated may be useful to allow a high-resolution FE model to act as a benchmark for a more practical lower-resolution model. A further problem discussed will be how best to exploit data redundancy due to the high resolution of the LiDAR compared to that of a typical flood model. Problems occur if features have dimensions smaller than the model cell size: e.g. for a 5m-wide embankment within a raster grid model with 15m cell size, the maximum height of the embankment locally could be assigned to each cell covering the embankment. But how could a 5m-wide ditch be represented?
Again, this redundancy has been exploited to improve wetting/drying algorithms using the sub-grid-scale LiDAR heights within finite elements at the waterline.
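
A hedged sketch of the spatially varying friction idea mentioned above (the height classes and Manning's n values below are an invented illustration, not the calibrated relationship from this work):

```python
# Sketch: derive a spatially varying floodplain friction map from a LiDAR
# vegetation-height raster. Height classes and Manning's n values are
# illustrative assumptions only.
import numpy as np

def friction_from_vegetation(height_m: np.ndarray) -> np.ndarray:
    """Map per-cell vegetation height (m) to a Manning's n roughness raster."""
    n = np.full(height_m.shape, 0.03)   # bare soil / very short grass
    n[height_m > 0.1] = 0.05            # short vegetation
    n[height_m > 1.0] = 0.08            # shrubs, hedges
    n[height_m > 5.0] = 0.12            # trees
    return n

veg_height = np.array([[0.05, 0.4], [2.0, 8.0]])
print(friction_from_vegetation(veg_height))
# [[0.03 0.05]
#  [0.08 0.12]]
```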

Relevance:

30.00%

Publisher:

Abstract:

An eddy current testing system consists of a multi-sensor probe, a computer and a special expansion card and software for data-collection and analysis. The probe incorporates an excitation coil, and sensor coils; at least one sensor coil is a lateral current-normal coil and at least one is a current perturbation coil.

Relevance:

30.00%

Publisher:

Abstract:

An eddy current testing system consists of a multi-sensor probe, computer and a special expansion card and software for data collection and analysis. The probe incorporates an excitation coil, and sensor coils; at least one sensor coil is a lateral current-normal coil and at least one is a current perturbation coil.

Relevance:

30.00%

Publisher:

Abstract:

In Central Brazil, the long-term sustainability of beef cattle systems is under threat over vast tracts of farming areas, as more than half of the 50 million hectares of sown pastures are suffering from degradation. Overgrazing practised to maintain high stocking rates is regarded as one of the main causes. High stocking rates are deliberate and crucial decisions taken by the farmers, which appear paradoxical, even irrational given the state of knowledge regarding the consequences of overgrazing. The phenomenon however appears inextricably linked with the objectives that farmers hold. In this research those objectives were elicited first and from their ranking two, ‘asset value of cattle (representing cattle ownership)' and ‘present value of economic returns', were chosen to develop an original bi-criteria Compromise Programming model to test various hypotheses postulated to explain the overgrazing behaviour. As part of the model a pasture productivity index is derived to estimate the pasture recovery cost. Different scenarios based on farmers' attitudes towards overgrazing, pasture costs and capital availability were analysed. The results of the model runs show that benefits from holding more cattle can outweigh the increased pasture recovery and maintenance costs. This result undermines the hypothesis that farmers practise overgrazing because they are unaware or uncaring about overgrazing costs. An appropriate approach to the problem of pasture degradation requires information on the economics, and its interplay with farmers' objectives, for a wide range of pasture recovery and maintenance methods. Seen within the context of farmers' objectives, some level of overgrazing appears rational. Advocacy of the simple ‘no overgrazing' rule is an insufficient strategy to maintain the long-term sustainability of the beef production systems in Central Brazil.
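
For context, a bi-criteria Compromise Programming model of the kind described typically minimises a distance to the ideal point of the two objectives. The block below is the standard textbook formulation, shown only to make the approach concrete; the paper's weights and normalisation are not reproduced here.

```latex
% Standard bi-criteria Compromise Programming distance (textbook form), with
% f_1 = asset value of cattle, f_2 = present value of economic returns,
% f_j^* the ideal and f_{j*} the anti-ideal value of each objective.
\[
  \min_{x \in X}\; L_p(x) \;=\;
  \left[\; \sum_{j=1}^{2} w_j^{\,p}
  \left( \frac{f_j^{*} - f_j(x)}{f_j^{*} - f_{j*}} \right)^{p} \right]^{1/p},
  \qquad p \in \{1, 2, \infty\}.
\]
```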

Relevance:

30.00%

Publisher:

Abstract:

In Central Brazil, the long-term sustainability of beef cattle systems is under threat over vast tracts of farming areas, as more than half of the 50 million hectares of sown pastures are suffering from degradation. Overgrazing practised to maintain high stocking rates is regarded as one of the main causes. High stocking rates are deliberate and crucial decisions taken by the farmers, which appear paradoxical, even irrational given the state of knowledge regarding the consequences of overgrazing. The phenomenon however appears inextricably linked with the objectives that farmers hold. In this research those objectives were elicited first and from their ranking two, 'asset value of cattle (representing cattle ownership)' and 'present value of economic returns', were chosen to develop an original bi-criteria Compromise Programming model to test various hypotheses postulated to explain the overgrazing behaviour. As part of the model a pasture productivity index is derived to estimate the pasture recovery cost. Different scenarios based on farmers' attitudes towards overgrazing, pasture costs and capital availability were analysed. The results of the model runs show that benefits from holding more cattle can outweigh the increased pasture recovery and maintenance costs. This result undermines the hypothesis that farmers practise overgrazing because they are unaware or uncaring about overgrazing costs. An appropriate approach to the problem of pasture degradation requires information on the economics, and its interplay with farmers' objectives, for a wide range of pasture recovery and maintenance methods. Seen within the context of farmers' objectives, some level of overgrazing appears rational. Advocacy of the simple 'no overgrazing' rule is an insufficient strategy to maintain the long-term sustainability of the beef production systems in Central Brazil. (C) 2004 Elsevier Ltd. All rights reserved.

Relevance:

30.00%

Publisher:

Abstract:

This paper considers methods for testing for superiority or non-inferiority in active-control trials with binary data, when the relative treatment effect is expressed as an odds ratio. Three asymptotic tests for the log-odds ratio based on the unconditional binary likelihood are presented, namely the likelihood ratio, Wald and score tests. All three tests can be implemented straightforwardly in standard statistical software packages, as can the corresponding confidence intervals. Simulations indicate that the three alternatives are similar in terms of the Type I error, with values close to the nominal level. However, when the non-inferiority margin becomes large, the score test slightly exceeds the nominal level. In general, the highest power is obtained from the score test, although all three tests are similar and the observed differences in power are not of practical importance. Copyright (C) 2007 John Wiley & Sons, Ltd.
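
As a hedged illustration of the simplest of the three approaches, a Wald test of non-inferiority on the log-odds-ratio scale can be computed directly from the 2x2 table using the standard asymptotic variance formula; the margin and counts below are made up, and the paper's score and likelihood-ratio tests are not reproduced here.

```python
# Wald test for non-inferiority on the log-odds-ratio scale (standard asymptotic
# formula). Illustrative only; the margin and counts are hypothetical.
import math
from scipy.stats import norm

def wald_noninferiority(a, b, c, d, or_margin):
    """a/b: successes/failures on the test arm; c/d: on the active control.
    Tests H0: OR <= or_margin (inferior) against H1: OR > or_margin."""
    log_or = math.log((a * d) / (b * c))
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    z = (log_or - math.log(or_margin)) / se
    return z, 1 - norm.cdf(z)   # one-sided p-value

# Hypothetical trial: 80/20 successes/failures on test, 85/15 on active control,
# non-inferiority margin of 0.5 on the odds-ratio scale.
z, p = wald_noninferiority(80, 20, 85, 15, 0.5)
print(f"z = {z:.2f}, one-sided p = {p:.4f}")
```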

Relevance:

30.00%

Publisher:

Abstract:

This paper tackles the path planning problem for oriented vehicles travelling in a non-Euclidean 3-dimensional space, the spherical space S3. For such a problem, the orientation of the vehicle is naturally represented by the orthonormal frame bundle, i.e. the rotation group SO(4). Orthonormal frame bundles of space forms coincide with their isometry groups, and therefore the focus shifts to control systems defined on Lie groups. The oriented vehicles, in this case, are constrained to travel at constant speed in a forward direction and their angular velocities are directly controlled. In this paper we identify controls that induce steady motions of these oriented vehicles and yield closed-form parametric expressions for these motions. The paths these vehicles trace are defined explicitly in terms of the controls and are therefore invariant with respect to the coordinate system used to describe the motion.
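
Such problems are typically posed as left-invariant control systems on the frame group; the generic form below is standard in the literature and is shown only for orientation (the paper's specific basis for so(4) and its steady-motion controls are not reproduced here).

```latex
% Generic left-invariant control system on the frame group G = SO(4):
% a drift term A_0 for the constant forward speed and directly controlled
% angular velocities u_i. Standard form, not the paper's exact equations.
\[
  \dot{g}(t) \;=\; g(t)\left( A_0 + \sum_{i=1}^{m} u_i(t)\,A_i \right),
  \qquad g(t) \in SO(4),\quad A_0, A_i \in \mathfrak{so}(4),
\]
where $A_0$ encodes the constant-speed forward motion and the controls $u_i(t)$
are the angular velocities about the body axes.
```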

Relevance:

30.00%

Publisher:

Abstract:

Life-history theories of the early programming of human reproductive strategy stipulate that early rearing experience, including that reflected in infant-parent attachment security, regulates psychological, behavioral, and reproductive development. We tested the hypothesis that infant attachment insecurity, compared with infant attachment security, at the age of 15 months predicts earlier pubertal maturation. Focusing on 373 White females enrolled in the National Institute of Child Health and Human Development Study of Early Child Care and Youth Development, we gathered data from annual physical exams from the ages of 9½ years to 15½ years and from self-reported age of menarche. Results revealed that individuals who had been insecure infants initiated and completed pubertal development earlier and had an earlier age of menarche compared with individuals who had been secure infants, even after accounting for age of menarche in the infants’ mothers. These results support a conditional-adaptational view of individual differences in attachment security and raise questions about the biological mechanisms responsible for the attachment effects we discerned.