888 results for Quantitative Dynamic General Equilibrium
Abstract:
In this Rapid Communication we demonstrate the applicability of an augmented Gibbs ensemble Monte Carlo approach for the phase behavior determination of model colloidal systems with short-ranged depletion attraction and long-ranged repulsion. This technique allows for a quantitative determination of the phase boundaries and ground states in such systems. We demonstrate that gelation may occur in systems of this type as the result of arrested microphase separation, even when the equilibrium state of the system is characterized by compact microphase structures.
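As a rough illustration of the method named above, the sketch below shows the particle-transfer acceptance step at the heart of a Gibbs ensemble Monte Carlo simulation, together with a generic short-range attraction / long-range repulsion pair potential. The potential form, its parameters, box sizes and energies are illustrative assumptions, not the augmented scheme or the colloid model used in the paper.

```python
import math
import random

def salr_potential(r, eps=1.0, sigma=1.0, A=0.5, kappa=2.0):
    """Generic short-range attraction / long-range repulsion (SALR) pair
    potential: Lennard-Jones attraction plus a screened-Coulomb (Yukawa)
    repulsion. Parameters are illustrative, not fitted to any real colloid."""
    lj = 4.0 * eps * ((sigma / r) ** 12 - (sigma / r) ** 6)
    yukawa = A * math.exp(-kappa * (r - sigma)) / r
    return lj + yukawa

def transfer_acceptance(dU, n_src, n_dst, v_src, v_dst, beta=1.0):
    """Standard Metropolis acceptance probability for moving one particle
    from the source box to the destination box in a Gibbs ensemble step.
    dU is the total energy change of the two boxes caused by the move."""
    ratio = (n_src * v_dst) / ((n_dst + 1) * v_src)
    return min(1.0, ratio * math.exp(-beta * dU))

# Toy usage: decide whether to transfer a particle given a hypothetical
# energy change of +0.3 kT between two equal-volume boxes.
if random.random() < transfer_acceptance(dU=0.3, n_src=120, n_dst=80,
                                         v_src=1000.0, v_dst=1000.0):
    print("transfer accepted")
else:
    print("transfer rejected")
```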
Abstract:
How do infants learn word meanings? Research has established the impact of both parent and child behaviors on vocabulary development; however, the processes and mechanisms underlying these relationships are still not fully understood. Much of the existing literature focuses on direct paths to word learning, demonstrating that parent speech and child gesture use are powerful predictors of later vocabulary. However, an additional body of research indicates that these relationships do not always replicate, particularly when assessed in different populations, contexts, or developmental periods.
The current study examines the relationships between infant gesture, parent speech, and infant vocabulary over the course of the second year (10-22 months of age). Through the use of detailed coding of dyadic mother-child play interactions and a combination of quantitative and qualitative data analytic methods, the process of communicative development was explored. Findings reveal non-linear patterns of growth in both parent speech content and child gesture use. Analyses of contingency in dyadic interactions reveal that children are active contributors to communicative engagement through their use of gestures, shaping the type of input they receive from parents, which in turn influences child vocabulary acquisition. Recommendations for future studies and the use of nuanced methodologies to assess changes in the dynamic system of dyadic communication are discussed.
Abstract:
Background: The attitudes, and the cultural and religious beliefs, that general nursing students hold towards individuals with mental health problems are key factors that contribute to the quality of care provided. Negative attitudes towards mental illness and towards individuals with mental health problems are held by the general public as well as by health professionals, and have been reported to be associated with low quality of care, poor access to health care services and feelings of exclusion. Furthermore, culture has been reported to play a significant role in shaping people's attitudes, values, beliefs, and behaviours, but has been poorly investigated. Research has also found that religious beliefs and practices are associated with better recovery and enhanced coping strategies for individuals with mental illness, and provide more meaning and purpose to thinking and actions. The literature indicated that both Ireland and Jordan lack baseline data on general nurses' and general nursing students' attitudes towards mental illness and the associated cultural and religious beliefs. Aims: To measure general nursing students' attitudes towards individuals with mental illness and their relationships to socio-demographic variables and cultural and religious beliefs. Method: A quantitative descriptive study was conducted (n=470): 185 students in Jordan and 285 students in Ireland participated, with response rates of 86% and 73%, respectively. Data were collected using the Community Attitudes towards the Mentally Ill instrument and a Cultural and Religious Beliefs Scale to People with Mental Illness constructed by the author. Results: Irish students reported more positive attitudes, yet held weaker cultural and religious beliefs, than students from Jordan. Country of origin, considering a career in mental health nursing, knowing somebody with mental illness, and cultural and religious beliefs were the variables most significantly associated with students' attitudes towards people with mental illness. In addition, students living in urban areas reported more positive attitudes to people with mental illness than those living in rural areas.
Abstract:
Key life history traits such as breeding time and clutch size are frequently both heritable and under directional selection, yet many studies fail to document micro-evolutionary responses. One general explanation is that selection estimates are biased by the omission of correlated traits that have causal effects on fitness, but few valid tests of this exist. Here we show, using a quantitative genetic framework and six decades of life-history data on two free-living populations of great tits Parus major, that selection estimates for egg-laying date and clutch size are relatively unbiased. Predicted responses to selection based on the Robertson-Price Identity were similar to those based on the multivariate breeder’s equation, indicating that unmeasured covarying traits were not missing from the analysis. Changing patterns of phenotypic selection on these traits (for laying date, linked to climate change) therefore reflect changing selection on breeding values, and genetic constraints appear not to limit their independent evolution. Quantitative genetic analysis of correlational data from pedigreed populations can be a valuable complement to experimental approaches to help identify whether apparent associations between traits and fitness are biased by missing traits, and to parse the roles of direct versus indirect selection across a range of environments.
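For reference, the two predictors of evolutionary response compared in this abstract can be written in standard quantitative-genetic notation (not reproduced from the paper itself): the Robertson-Price identity predicts the response of a trait from its additive genetic covariance with relative fitness, while the multivariate breeder's equation predicts it from the genetic (co)variance matrix and the selection gradients.

```latex
% Robertson--Price identity (secondary theorem of selection):
% the response of trait z equals its additive genetic covariance
% with relative fitness w.
\Delta \bar{z} = \sigma_A(w, z)

% Multivariate breeder's equation: responses follow from the additive
% genetic (co)variance matrix G and the selection gradients
% \beta = P^{-1} s, where P is the phenotypic covariance matrix and
% s the vector of selection differentials.
\Delta \bar{\mathbf{z}} = \mathbf{G}\,\boldsymbol{\beta}
                        = \mathbf{G}\,\mathbf{P}^{-1}\mathbf{s}
```

Agreement between the two predictions is what the abstract uses to argue that no causally important, covarying trait is missing from the analysis.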
Abstract:
This thesis investigates the design of optimal tax systems in dynamic environments. The first essay characterizes the optimal tax system when wages depend on stochastic shocks and work experience. In addition to redistributive and efficiency motives, the taxation of inexperienced workers depends on a second-best requirement that encourages work experience, a social insurance motive and incentive effects. Calibrations using U.S. data yield expected optimal marginal income tax rates that are higher for experienced workers than for most inexperienced workers. They confirm that the average marginal income tax rate increases (decreases) with age when shocks and work experience are substitutes (complements). Finally, more variability in experienced workers' earnings prospects leads to increasing tax rates, since income taxation acts as a social insurance mechanism. In the second essay, the properties of an optimal tax system are investigated in a dynamic private information economy where labor market frictions create unemployment that destroys workers' human capital. A two-skill-type model is considered in which wages and employment are endogenous. I find that the optimal tax system distorts the first-period wages of all workers below their efficient levels, which leads to more employment. The standard no-distortion-at-the-top result no longer holds due to the combination of private information and the destruction of human capital. I show this result analytically under the Maximin social welfare function and confirm it numerically for a general social welfare function. I also investigate the use of a training program and job creation subsidies. The final essay analyzes the optimal linear tax system when there is a population of individuals whose perceptions of savings are linked to their disposable income and their family background through family cultural transmission. Aside from the standard equity/efficiency trade-off, taxes account for the endogeneity of perceptions through two channels. First, taxing labor decreases income, which decreases the perception of savings over time. Second, taxation of savings corrects for workers' misperceptions and thus for their savings and labor decisions. Numerical simulations confirm that behavioral issues push labor income taxes upward to finance saving subsidies. Government transfers to individuals are also decreased to finance those same subsidies.
Abstract:
Bridges are a critical part of North America's transportation network that need to be assessed frequently to inform bridge management decision making. Visual inspections are usually implemented for this purpose, during which inspectors must observe and report any excess displacements or vibrations. Unfortunately, these visual inspections are subjective and often highly variable, so a monitoring technology that can provide quantitative measurements to supplement inspections is needed. Digital Image Correlation (DIC) is a novel monitoring technology that uses digital images to measure displacement fields without any contact with the bridge. In this research, DIC and accelerometers were used to investigate the dynamic response of a railway bridge reported to experience large lateral displacements. Displacements were estimated from accelerometer measurements and compared to DIC measurements, showing that accelerometers can provide reasonable estimates of zero-mean lateral displacements. By comparing measurements on the girder and on the piers, it was shown that, for the bridge monitored, the large lateral displacements originated in the steel casting bearings positioned above the piers, and not in the piers themselves. The use of DIC for evaluating the effectiveness of rehabilitation of the LaSalle Causeway lift bridge in Kingston, Ontario was also investigated. Vertical displacements were measured at midspan and at the lifting end of the bridge during a static test and under dynamic live loading. The bridge displacements were well within the operating limits; however, a gap at the lifting end of the bridge was identified. Rehabilitation of the bridge was conducted, and by comparing measurements before and after rehabilitation it was shown that the gap was successfully closed. Finally, DIC was used to monitor midspan vertical and lateral displacements in a monitoring campaign of five steel rail bridges, and to evaluate the effectiveness of structural rehabilitation of the lateral bracing of one bridge. Simple finite element models were developed using DIC measurements of displacement. Several lessons learned throughout this monitoring campaign are discussed in the hope of aiding future researchers.
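A minimal sketch, not the thesis' processing chain, of how zero-mean displacements can be estimated from an accelerometer record: the acceleration is high-pass filtered to suppress integration drift and then numerically integrated twice. The sampling rate, cut-off frequency and synthetic test signal are illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from scipy.integrate import cumulative_trapezoid

def acceleration_to_displacement(acc, fs, fc=0.5):
    """Estimate zero-mean displacement from an acceleration record (m/s^2).

    acc : 1-D acceleration array, fs : sampling rate (Hz),
    fc  : high-pass cut-off (Hz) applied before each integration to limit drift.
    """
    b, a = butter(4, fc / (fs / 2.0), btype="highpass")
    vel = cumulative_trapezoid(filtfilt(b, a, acc), dx=1.0 / fs, initial=0.0)
    disp = cumulative_trapezoid(filtfilt(b, a, vel), dx=1.0 / fs, initial=0.0)
    return filtfilt(b, a, disp)

# Synthetic check: a 2 Hz, 1 mm amplitude lateral sway sampled at 200 Hz.
fs, f, amp = 200.0, 2.0, 0.001
t = np.arange(0.0, 10.0, 1.0 / fs)
true_disp = amp * np.sin(2.0 * np.pi * f * t)
acc = -(2.0 * np.pi * f) ** 2 * true_disp  # exact second derivative of the sway
est_disp = acceleration_to_displacement(acc, fs)
err = np.max(np.abs(est_disp - true_disp)[int(fs):])  # skip filter transients
print(f"peak error after 1 s: {1e3 * err:.3f} mm")
```

This kind of double integration only recovers the zero-mean (dynamic) part of the motion, which is consistent with the abstract's observation that accelerometers gave reasonable estimates for zero-mean lateral displacements but cannot replace a displacement sensor such as DIC for quasi-static components.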
Abstract:
In this work we apply to the social network Twitter a model of political and media discourse analysis developed in previous publications, which makes the study of discursive data compatible with explanatory proposals arising from political communication (neurocommunication) and digital communication (the network as a fifth estate, convergence, collective intelligence). We assume that there are categories of discursive framing (frame) that can be treated as indicators of cognitive and communicative abilities. We analyse these categories by grouping them into three fundamental dimensions: intentional (illocutionary force of the tweet, interpretive framing of the hashtags), referential (topics, protagonists), and interactive (structural alignment, predictability; markers of intertextuality and dialogism; party affiliation). The corpus consists of 4,116 tweets: 3,000 tweets from the programmes Al Rojo Vivo (La Sexta: A3 Media), Las Mañanas Cuatro (Cuatro: Mediaset) and Los Desayunos de TVE (RTVE), and 1,116 tweets from followers of the programmes, corresponding to 45 tweets from each programme. The results confirm that the model makes it possible to establish different profiles of political subjectivity in the Twitter accounts.
Abstract:
Major food adulteration and contamination events occur with alarming regularity and are known to be episodic; the question is not if but when another large-scale food safety/integrity incident will occur. Indeed, the challenges of maintaining food security are now internationally recognised. The ever-increasing scale and complexity of food supply networks can make them significantly more vulnerable to fraud and contamination, and potentially dysfunctional. This makes the task of deciding which analytical methods are most suitable for collecting and analysing (bio)chemical data within complex food supply chains, at targeted points of vulnerability, that much more challenging. It is evident that those working within and associated with the food industry are seeking rapid, user-friendly methods to detect food fraud and contamination, and rapid/high-throughput screening methods for the analysis of food in general. In addition to being robust and reproducible, these methods should be portable, ideally handheld and/or remote sensor devices, that can be taken to, or positioned on- or at-line at, points of vulnerability along complex food supply networks, and should require a minimum amount of background training to acquire information-rich data rapidly (ergo point-and-shoot). Here we briefly discuss a range of spectrometry- and spectroscopy-based approaches, many of which are commercially available, as well as other methods currently under development. We discuss a future perspective of how this range of detection methods in the growing sensor portfolio, along with developments in computational and information sciences such as predictive computing and the Internet of Things, will together form systems- and technology-based approaches that significantly reduce the areas of vulnerability to food crime within food supply chains; food fraud is a problem of systems and therefore requires systems-level solutions and thinking.
Abstract:
Background: Potentially inappropriate prescribing (PIP) is common in older people in primary care, as evidenced by a significant body of quantitative research. However, relatively few qualitative studies have investigated the phenomenon of PIP and its underlying processes from the perspective of general practitioners (GPs). The aim of this paper is to qualitatively explore GP perspectives regarding prescribing and PIP in older primary care patients.
Method: Semi-structured qualitative interviews were conducted with GPs participating in a randomised controlled trial (RCT) of an intervention to decrease PIP in older patients (≥70 years) in Ireland. Participants (from both intervention and control arms) were drawn from the OPTI-SCRIPT cluster RCT, and interviews took place between January and July 2013 as part of the trial process evaluation. Interviews were conducted by one interviewer, audio recorded, transcribed verbatim, and subjected to thematic analysis.
Results: Seventeen semi-structured interviews were conducted (13 male; 4 female). Three main, inter-related themes emerged: complex prescribing environment, paternalistic doctor-patient relationship, and relevance of the PIP concept. Patient complexity (e.g. polypharmacy, multimorbidity) and prescriber complexity (e.g. multiple prescribers, poor communication, restricted autonomy) were identified as factors contributing to a complex prescribing environment in which PIP could occur, as was a paternalistic doctor-patient relationship. The concept of PIP was perceived to be of variable usefulness to GPs, and the criteria used to measure it may be at odds with the complex processes of prescribing for this patient population.
Conclusions: Several inter-related factors contributing to the occurrence of PIP were identified, some of which may be amenable to intervention. Strategies focused on better management of polypharmacy and multimorbidity, and on communication across primary and secondary care, could result in substantial reductions in PIP.
Abstract:
Although microturbines (MTs) are among the most successfully commercialized distributed energy resources, their long-term effects on the distribution network have not been fully investigated, owing to the complex thermo-fluid-mechanical energy conversion processes involved. This is further complicated by the fact that the parameters and internal data of MTs are not always available to the electric utility, due to different ownerships and confidentiality concerns. To address this issue, a general modeling approach for MTs is proposed in this paper, which allows for the long-term simulation of the distribution network with multiple MTs. First, the feasibility of deriving a simplified MT model for long-term dynamic analysis of the distribution network is discussed, based on a physical understanding of the dynamic processes that occur within MTs. Then a three-stage identification method is developed to obtain a piecewise MT model and predict electro-mechanical system behaviors with saturation. Next, assisted by an electric power flow calculation tool, a fast simulation methodology is proposed to evaluate the long-term impact of multiple MTs on the distribution network. Finally, the model is verified using Capstone C30 microturbine experiments and further applied to the dynamic simulation of a modified IEEE 37-node test feeder, with promising results.
Abstract:
With the development of the Internet of Things, more and more IoT platforms emerge, with different structures and characteristics. Weighing their advantages and disadvantages, we should choose the platform best suited to each scenario. For this project, I compare a cloud-based centralized platform, Microsoft Azure IoT Hub, and a fully distributed platform, SensibleThings. A quantitative performance comparison is made in two scenarios: increasing message sending speed, and devices located in different locations. A general comparison is made for security, utilization and storage. Finally, I conclude that SensibleThings performs more stably when many messages are pushed to the platform, while Microsoft Azure offers better geographic expansion. In the general comparison, Microsoft Azure IoT Hub has better security, and its requirements on the local device are lower than those of SensibleThings. SensibleThings is open source and free, while Microsoft Azure follows the "pay as you go" concept, with many throttling limitations across its different editions. Microsoft Azure is also more user-friendly.
Abstract:
Encryption of personal data is widely regarded as a privacy-preserving technology which could potentially play a key role in making innovative IT technology compliant with the European data protection law framework. Therefore, in this paper, we examine the new EU General Data Protection Regulation's relevant provisions regarding encryption – such as those on anonymisation and pseudonymisation – and assess whether encryption can serve as an anonymisation technique, which could lead to the non-applicability of the GDPR. However, the provisions of the GDPR regarding the material scope of the Regulation still leave room for legal uncertainty when determining whether a data subject is identifiable or not. We therefore assess, inter alia, the Opinion of the Advocate General of the European Court of Justice (ECJ) on a preliminary ruling concerning whether a dynamic IP address can be considered personal data, which may put an end to the dispute over whether an absolute or a relative approach has to be used when assessing the identifiability of data subjects. Furthermore, we outline the issue of whether the anonymisation process itself constitutes a further processing of personal data which needs to have a legal basis in the GDPR. Finally, we give an overview of relevant encryption techniques and examine their impact upon the GDPR's material scope.
Abstract:
The objective of this paper is to perform a quantitative comparison of Dweet.io and SensibleThings from different aspects. With the fast development of the Internet of Things, IoT platforms face ever bigger challenges. This paper evaluates both systems in four parts. The first part presents a general comparison of the input methods and output functions provided by the platforms. The second part presents the security comparison, which focuses on the protocol types of the packets and the stability of the communication. The third part presents the scalability comparison as the transmitted values grow larger. The fourth part presents the scalability comparison as the sending process is sped up. After these comparisons, I conclude that Dweet.io is easier to use on devices and supports more programming languages; it also provides visualization that can be shared, and it is safer and more stable than SensibleThings. SensibleThings offers more openness and has better scalability in handling large values and high sending speeds.
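A minimal sketch of the kind of timing harness such a scalability comparison could use: it posts JSON payloads of increasing size to an HTTP endpoint and records the response time. The endpoint follows dweet.io's public "dweet for a thing" REST pattern, but the URL, thing name and payload sizes are assumptions for illustration, not the measurement setup used in the paper.

```python
import time
import requests  # third-party: pip install requests

# Assumed endpoint pattern for dweet.io's public REST API; adjust the thing
# name (and the URL, if the API differs) to match the actual experiment.
THING_NAME = "example-scalability-test"
URL = f"https://dweet.io/dweet/for/{THING_NAME}"

def time_post(payload_bytes: int) -> float:
    """Send one JSON message carrying roughly `payload_bytes` of data and
    return the observed request latency in seconds."""
    payload = {"data": "x" * payload_bytes, "sent_at": time.time()}
    start = time.perf_counter()
    response = requests.post(URL, json=payload, timeout=10)
    response.raise_for_status()
    return time.perf_counter() - start

if __name__ == "__main__":
    # Latency as the transmitted value grows larger (illustrative sizes).
    for size in (10, 100, 1_000, 10_000):
        print(f"{size:>6} bytes -> {time_post(size):.3f} s")
```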
Abstract:
PURPOSE: To quantitatively evaluate visual function 12 months after bilateral implantation of the Physiol FineVision® trifocal intraocular lens (IOL) and to compare these results with those obtained in the first postoperative month. METHODS: In this prospective case series, 20 eyes of 10 consecutive patients were included. Monocular and binocular, uncorrected and corrected visual acuities (distance, near, and intermediate) were measured. Metrovision® was used to test contrast sensitivity under static and dynamic conditions, both in photopic and low-mesopic settings. The same software was used for pupillometry and glare evaluation. Motion, achromatic, and chromatic contrast discrimination were tested using 2 innovative psychophysical tests. A complete ophthalmologic examination was performed preoperatively and at 1, 3, 6, and 12 months postoperatively. Psychophysical tests were performed 1 month after surgery and repeated 12 months postoperatively. RESULTS: Final distance uncorrected visual acuity (VA) was 0.00 ± 0.08 and distance corrected VA was 0.00 ± 0.05 logMAR. Distance corrected near VA was 0.00 ± 0.09 and distance corrected intermediate VA was 0.00 ± 0.06 logMAR. Glare testing, pupillometry, contrast sensitivity, motion, and chromatic and achromatic contrast discrimination did not differ significantly between the first and last visit (p>0.05) or when compared to an age-matched control group (p>0.05). CONCLUSIONS: The Physiol FineVision® trifocal IOL provided satisfactory full range of vision and quality of vision parameters 12 months after surgery. Visual acuity and psychophysical tests did not vary significantly between the first and last visit.
Abstract:
A history of specialties in economics since the late 1950s is constructed on the basis of a large corpus of documents from economics journals. The production of this history relies on a combination of algorithmic methods that avoid subjective assessments of the boundaries of specialties: bibliographic coupling, automated community detection in dynamic networks, and text mining. These methods uncover a structuring of economics around recognizable specialties with some significant changes over the period covered (1956–2014). Among our results, especially noteworthy are (1) the clear-cut existence of ten families of specialties, (2) the disappearance in the late 1970s of a specialty focused on general economic theory, (3) the dispersal of the econometrics-centered specialty in the early 1990s and the ensuing importance of specific econometric methods for the identity of many specialties since the 1990s, and (4) the low level of specialization of individual economists throughout the period in contrast to physicists as early as the late 1960s.
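A minimal sketch of the bibliographic-coupling step from which such a corpus analysis starts, assuming each document's reference list is already available as a Python set. The tiny example corpus and the use of networkx's greedy modularity routine (rather than the dynamic community-detection method of the paper) are illustrative assumptions.

```python
from itertools import combinations
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Toy corpus: each document is mapped to the set of references it cites.
references = {
    "doc_A": {"ref1", "ref2", "ref3"},
    "doc_B": {"ref2", "ref3", "ref4"},
    "doc_C": {"ref5", "ref6"},
    "doc_D": {"ref5", "ref6", "ref7"},
}

# Bibliographic coupling: two documents are linked when they cite common
# references; the edge weight is the number of shared references.
G = nx.Graph()
G.add_nodes_from(references)
for a, b in combinations(references, 2):
    shared = len(references[a] & references[b])
    if shared > 0:
        G.add_edge(a, b, weight=shared)

# Group coupled documents into candidate "specialties" by modularity maximisation.
communities = greedy_modularity_communities(G, weight="weight")
for i, community in enumerate(communities, start=1):
    print(f"specialty {i}: {sorted(community)}")
```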