918 results for Information Models
Abstract:
We present studies of 9 modern (up to 400-yr-old) peat sections from Slovenia, Switzerland, Austria, Italy, and Finland. Precise radiocarbon dating of modern samples is possible due to the large bomb peak of atmospheric 14C concentration in 1963 and the following rapid decline in the 14C level. All the analyzed 14C profiles appeared concordant with the shape of the bomb peak of atmospheric 14C concentration, integrated over some time interval with a length specific to the peat section. In the peat layers covered by the bomb peak, calendar ages of individual peat samples could be determined almost immediately, with an accuracy of 2–3 yr. In the pre-bomb sections, the calendar ages of individual dated samples are determined in the form of multi-modal probability distributions about 300 yr wide (about AD 1650–1950). However, simultaneous use of the post-bomb and pre-bomb 14C dates, and lithological information, enabled the rejection of most modes of probability distributions in the pre-bomb section. In effect, precise age-depth models of the post-bomb sections have been extended back in time, into the wiggly part of the 14C calibration curve.
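As a rough illustration of how a post-bomb measurement is turned into candidate calendar ages, one can match the measured 14C content (fraction modern, F14C) against the atmospheric bomb curve and read off the years where the curve crosses that value; stratigraphic order then selects among the candidates. The sketch below uses a hypothetical, heavily simplified curve; the values and function names are placeholders, not data from the study.

```python
# Minimal sketch (not the authors' code): matching a measured post-bomb 14C
# value against the atmospheric bomb curve to obtain candidate calendar ages.
# The curve values below are illustrative placeholders, not real data.

import numpy as np

# Hypothetical atmospheric F14C (fraction modern) for the Northern Hemisphere.
years = np.array([1955, 1960, 1963, 1965, 1970, 1980, 1990, 2000])
f14c  = np.array([1.00, 1.20, 1.90, 1.80, 1.55, 1.25, 1.15, 1.08])

def candidate_years(measured, years=years, curve=f14c):
    """Return calendar years where the bomb curve crosses the measured F14C.

    A single measurement usually intersects the curve twice (rising and
    falling limb); stratigraphic order is needed to pick the right one.
    """
    hits = []
    for i in range(len(years) - 1):
        lo, hi = sorted((curve[i], curve[i + 1]))
        if lo <= measured <= hi:
            # Linear interpolation within the segment.
            t = (measured - curve[i]) / (curve[i + 1] - curve[i])
            hits.append(years[i] + t * (years[i + 1] - years[i]))
    return hits

print(candidate_years(1.30))  # one candidate age on each limb of the bomb peak
```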
Abstract:
Bargaining is the building block of many economic interactions, ranging from bilateral to multilateral encounters and from situations in which the actors are individuals to negotiations between firms or countries. In all these settings, economists have been intrigued for a long time by the fact that some projects, trades or agreements are not realized even though they are mutually beneficial. On the one hand, this has been explained by incomplete information. A firm may not be willing to offer a wage that is acceptable to a qualified worker, because it knows that there are also unqualified workers and cannot distinguish between the two types. This phenomenon is known as adverse selection. On the other hand, it has been argued that even with complete information, the presence of externalities may impede efficient outcomes. To see this, consider the example of climate change. If a subset of countries agrees to curb emissions, non-participant regions benefit from the signatories’ efforts without incurring costs. These free-riding opportunities give rise to incentives to strategically improve one’s bargaining power that work against the formation of a global agreement. This thesis is concerned with extending our understanding of both factors: adverse selection and externalities. The findings are based on empirical evidence from original laboratory experiments as well as game theoretic modeling. On a very general note, it is demonstrated that the institutions through which agents interact matter to a large extent. Insights are provided about which institutions we should expect to perform better than others, at least in terms of aggregate welfare. Chapters 1 and 2 focus on the problem of adverse selection. Effective operation of markets and other institutions often depends on good information transmission properties. In terms of the example introduced above, a firm is only willing to offer high wages if it receives enough positive signals about the worker’s quality during the application and wage bargaining process. In Chapter 1, it will be shown that repeated interaction coupled with time costs facilitates information transmission. By making the wage bargaining process costly for the worker, the firm is able to obtain more accurate information about the worker’s type. The cost could be pure time cost from delaying agreement or cost of effort arising from a multi-step interviewing process. In Chapter 2, I abstract from time cost and show that communication can play a similar role. The simple fact that a worker claims to be of high quality may be informative. In Chapter 3, the focus is on a different source of inefficiency. Agents strive for bargaining power and thus may be motivated by incentives that are at odds with the socially efficient outcome. I have already mentioned the example of climate change. Other examples are coalitions within committees that are formed to secure voting power to block outcomes, or groups that commit to different technological standards although a single standard would be optimal (e.g., the format war between HD DVD and Blu-ray). It will be shown that such inefficiencies are directly linked to the presence of externalities and a certain degree of irreversibility in actions. I now discuss the three articles in more detail. In Chapter 1, Olivier Bochet and I study a simple bilateral bargaining institution that eliminates trade failures arising from incomplete information. In this setting, a buyer makes offers to a seller in order to acquire a good.
Whenever an offer is rejected by the seller, the buyer may submit a further offer. Bargaining is costly because both parties suffer a (small) time cost after any rejection. The difficulties arise because the good can be of low or high quality and the quality of the good is only known to the seller. Indeed, without the possibility of making repeated offers, it is too risky for the buyer to offer prices that allow for trade of high quality goods. When allowing for repeated offers, however, at equilibrium both types of goods trade with probability one. We provide an experimental test of these predictions. Buyers gather information about sellers using specific price offers, and rates of trade are high, much in line with the model’s qualitative predictions. We also observe a persistent over-delay before trade occurs, and this reduces efficiency substantially. Possible channels for over-delay are identified in the form of two behavioral assumptions missing from the standard model: loss aversion (buyers) and haggling (sellers), which reconcile the data with the theoretical predictions. Chapter 2 also studies adverse selection, but interaction between buyers and sellers now takes place within a market rather than in isolated pairs. Remarkably, in a market it suffices to let agents communicate in a very simple manner to mitigate trade failures. The key insight is that better informed agents (sellers) are willing to truthfully reveal their private information, because by doing so they are able to reduce search frictions and attract more buyers. Behavior observed in the experimental sessions closely follows the theoretical predictions. As a consequence, costless and non-binding communication (cheap talk) significantly raises rates of trade and welfare. Previous experiments have documented that cheap talk alleviates inefficiencies due to asymmetric information. These findings are explained by pro-social preferences and lie aversion. I use appropriate control treatments to show that such considerations play only a minor role in our market. Instead, the experiment highlights the ability to organize markets as a new channel through which communication can facilitate trade in the presence of private information. In Chapter 3, I theoretically explore coalition formation via multilateral bargaining under complete information. The environment studied is extremely rich in the sense that the model allows for all kinds of externalities. This is achieved by using so-called partition functions, which pin down a coalitional worth for each possible coalition in each possible coalition structure. It is found that although binding agreements can be written, efficiency is not guaranteed, because the negotiation process is inherently non-cooperative. The prospects of cooperation are shown to crucially depend on i) the degree to which players can renegotiate and gradually build up agreements and ii) the absence of a certain type of externalities that can loosely be described as incentives to free ride. Moreover, the willingness to concede bargaining power is identified as a novel reason for gradualism. Another key contribution of the study is that it identifies a strong connection between the Core, one of the most important concepts in cooperative game theory, and the set of environments for which efficiency is attained even without renegotiation.
Abstract:
The goal of the present thesis was to investigate the production of code-switched utterances in bilinguals’ speech. This study investigates the availability of grammatical-category information during bilingual language processing. The specific aim is to examine the processes involved in the production of Persian-English bilingual compound verbs (BCVs). A bilingual compound verb is formed when the nominal constituent of a compound verb is replaced by an item from the other language. In the BCVs examined here, the nominal constituent is replaced by a verb from the other language. The main question addressed is how a lexical element corresponding to a verb node can be placed in a slot that corresponds to a noun lemma. This study also investigates how the production of BCVs might be captured within a model of BCVs and how such a model may be integrated within incremental network models of speech production. In the present study, both naturalistic and experimental data were used to investigate the processes involved in the production of BCVs. In the first part of the present study, I collected 2298 minutes of a popular Iranian TV program and found 962 code-switched utterances. In 83 (8%) of the switched cases, insertions occurred within the Persian compound verb structure, hence resulting in BCVs. In the second part of my work, a picture-word interference experiment was conducted. This experiment addressed whether, in the production of Persian-English BCVs, English verbs compete with the corresponding Persian compound verbs as a whole, or whether English verbs compete only with the nominal constituents of Persian compound verbs. Persian-English bilinguals named pictures depicting actions in 4 conditions in Persian (L1). In condition 1, participants named pictures of actions using the whole Persian compound verb in the context of its English equivalent distractor verb. In condition 2, only the nominal constituent was produced in the presence of the light verb of the target Persian compound verb and in the context of a semantically closely related English distractor verb. In condition 3, the whole Persian compound verb was produced in the context of a semantically unrelated English distractor verb. In condition 4, only the nominal constituent was produced in the presence of the light verb of the target Persian compound verb and in the context of a semantically unrelated English distractor verb. The main effect of linguistic unit was significant by participants and items. Naming latencies were longer in the nominal linguistic unit compared to the compound verb (CV) linguistic unit. That is, participants were slower to produce the nominal constituent of compound verbs in the context of a semantically closely related English distractor verb compared to producing the whole compound verbs in the context of a semantically closely related English distractor verb. The three-way interaction between version of the experiment (CV and nominal versions), linguistic unit (nominal and CV linguistic units), and relation (semantically related and unrelated distractor words) was significant by participants. In both versions, naming latencies were longer in the semantically related nominal linguistic unit compared to the response latencies in the semantically related CV linguistic unit. In both versions, naming latencies were longer in the semantically related nominal linguistic unit compared to response latencies in the semantically unrelated nominal linguistic unit.
Both the analysis of the naturalistic data and the results of the experiment revealed that in the case of the production of the nominal constituent of BCVs, a verb from the other language may compete with a noun from the base language, suggesting that grammatical category does not necessarily provide a constraint on lexical access during the production of the nominal constituent of BCVs. There was a minimal context in condition 2 (the nominal linguistic unit) in which the nominal constituent was produced in the presence of its corresponding light verb. The results suggest that generating words within a context may not guarantee that the effect of grammatical class becomes available. A model is proposed in order to characterize the processes involved in the production of BCVs. Implications for models of bilingual language production are discussed.
Abstract:
The study of operations on representations of objects is well documented in the realm of spatial engineering. However, the mathematical structure and formal proof of these operational phenomena are not thoroughly explored. Other works have often focused on query-based models that seek to order classes and instances of objects in the form of semantic hierarchies or graphs. In some models, nodes of graphs represent objects and are connected by edges that represent different types of coarsening operators. This work, however, studies how the coarsening operator "simplification" can manipulate partitions of finite sets, independently of objects and their attributes. Partitions that are "simplified" first have a collection of elements filtered (removed), and then the remaining partition is amalgamated (some sub-collections are unified). Simplification has many interesting mathematical properties. A finite composition of simplifications can also be accomplished with some single simplification. Also, if one partition is a simplification of the other, the simplified partition is defined to be less than the other partition according to the simp relation. This relation is shown to be a partial-order relation based on simplification. Collections of partitions can be proven not only to have a partial-order structure, but also to form a complete lattice. In regard to a geographic information system (GIS), partitions related to subsets of attribute domains for objects are called views. Objects belong to different views based on whether or not their attribute values lie in the underlying view domain. Given a particular view, objects with their attribute n-tuple codings contained in the view are part of the actualization set on views, and objects are labeled according to the particular subset of the view in which their coding lies. Though the scope of the work does not mainly focus on queries related directly to geographic objects, it provides verification for the existence of particular views in a system with this underlying structure. Given a finite attribute domain, one can say with mathematical certainty that different views of objects are partially ordered by simplification, and every collection of views has a greatest lower bound and least upper bound, which provides the validity for exploring queries in this regard.
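Taken at face value, simplification acts on a partition in two steps: filter out some blocks, then amalgamate (union) sub-collections of the surviving blocks; the induced simp relation then orders partitions. The following minimal Python sketch is one possible reading of that description; the helper names and the example partition are illustrative, not taken from the thesis.

```python
# A partition is modeled as a frozenset of disjoint, non-empty frozensets (blocks).

def simplify(partition, drop, merge_groups):
    """Filter: remove the blocks listed in `drop`.
    Amalgamate: union each group of surviving blocks in `merge_groups`.
    Blocks not mentioned in any group are kept as they are."""
    kept = set(partition) - set(drop)
    merged = set()
    for group in merge_groups:
        merged.add(frozenset().union(*group))
        kept -= set(group)
    return frozenset(kept | merged)

def is_simplification(p, q):
    """The 'simp' relation: p <= q iff every block of p is exactly a union
    of blocks of q (i.e., p arises from q by filtering and amalgamating)."""
    return all(frozenset(x for b in q if b <= block for x in b) == block
               for block in p)

# Illustrative example on the set {1, ..., 6}.
Q = frozenset({frozenset({1}), frozenset({2}), frozenset({3}),
               frozenset({4}), frozenset({5, 6})})
P = simplify(Q,
             drop=[frozenset({4})],                            # filter
             merge_groups=[[frozenset({1}), frozenset({2})]])  # amalgamate
print(P)                        # {{1, 2}, {3}, {5, 6}}
print(is_simplification(P, Q))  # True
```

Using frozensets keeps blocks hashable, so partitions themselves can be collected into sets and compared under the simp relation.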
Abstract:
Early Employee Assistance Programs (EAPs) had their origin in humanitarian motives, and there was little concern for their cost/benefit ratios; however, as some programs began accumulating data and analyzing it over time, even with single variables such as absenteeism, it became apparent that the humanitarian reasons for a program could be reinforced by cost savings, particularly when the existence of the program was subject to justification. Today there is general agreement that cost/benefit analyses of EAPs are desirable, but the specific models for such analyses, particularly those making use of sophisticated but simple computer-based data management systems, are few. The purpose of this research and development project was to develop a method, a design, and a prototype for gathering, managing, and presenting information about EAPs. This scheme provides information retrieval and analyses relevant to such aspects of EAP operations as: (1) EAP personnel activities, (2) supervisory training effectiveness, (3) client population demographics, (4) assessment and referral effectiveness, (5) treatment network efficacy, and (6) economic worth of the EAP. This scheme has been implemented and made operational at The University of Texas Employee Assistance Programs for more than three years. Application of the scheme in the various programs has defined certain variables which remained necessary in all programs. Depending on how aggressively program personnel pursue data acquisition, other program-specific variables are also defined.
Abstract:
Microarray technology is a high-throughput method for genotyping and gene expression profiling. Limited sensitivity and specificity are among the essential problems of this technology. Most existing methods of microarray data analysis have an apparent limitation in that they merely deal with the numerical part of microarray data and have made little use of gene sequence information. Because it is the gene sequences that precisely define the physical objects being measured by a microarray, it is natural to make the gene sequences an essential part of the data analysis. This dissertation focused on the development of free energy models to integrate sequence information in microarray data analysis. The models were used to characterize the mechanism of hybridization on microarrays and enhance sensitivity and specificity of microarray measurements. Cross-hybridization is a major obstacle to the sensitivity and specificity of microarray measurements. In this dissertation, we evaluated the scope of the cross-hybridization problem on short-oligo microarrays. The results showed that cross-hybridization on arrays is mostly caused by oligo fragments with a run of 10 to 16 nucleotides complementary to the probes. Furthermore, a free-energy-based model was proposed to quantify the amount of cross-hybridization signal on each probe. This model treats cross-hybridization as an integral effect of the interactions between a probe and various off-target oligo fragments. Using public spike-in datasets, the model showed high accuracy in predicting the cross-hybridization signals on those probes whose intended targets are absent in the sample. Several prospective models were proposed to improve the Positional-Dependent Nearest-Neighbor (PDNN) model for better quantification of gene expression and cross-hybridization. The problem addressed in this dissertation is fundamental to microarray technology. We expect that this study will help us to understand the detailed mechanism that determines sensitivity and specificity on the microarrays. Consequently, this research will have a wide impact on how microarrays are designed and how the data are interpreted.
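The abstract does not reproduce the model equations. As a loose, illustrative sketch of the positional-dependent nearest-neighbor idea, one can score a probe by a position-weighted sum of nearest-neighbor stacking terms and map that score to a predicted intensity through a saturating response. Every parameter value below is a placeholder, not a fitted value from the dissertation.

```python
import math

# Placeholder nearest-neighbor stacking "energies" (arbitrary units, not fitted values).
NN_ENERGY = {"AA": -1.0, "AT": -0.9, "TA": -0.6, "CA": -1.5, "GT": -1.3,
             "CT": -1.3, "GA": -1.3, "CG": -2.1, "GC": -2.2, "GG": -1.8}

def nn_pair_energy(pair):
    # Use the value for the dinucleotide or its reverse complement; mild default penalty.
    comp = {"A": "T", "T": "A", "C": "G", "G": "C"}
    rc = comp[pair[1]] + comp[pair[0]]
    return NN_ENERGY.get(pair, NN_ENERGY.get(rc, -0.5))

def duplex_energy(probe, weights=None):
    """Position-weighted sum of nearest-neighbor terms along the probe."""
    pairs = [probe[i:i + 2] for i in range(len(probe) - 1)]
    if weights is None:
        # Placeholder weighting: stacks near the middle of the probe count more.
        n = len(pairs)
        weights = [1.0 - abs(i - n / 2) / n for i in range(n)]
    return sum(w * nn_pair_energy(p) for w, p in zip(weights, pairs))

def predicted_signal(probe, target_conc=1.0, scale=0.3, background=50.0):
    """Map binding energy to intensity with a saturating (Langmuir-like) response."""
    e = duplex_energy(probe)
    return target_conc * 1e4 / (1.0 + math.exp(scale * e)) + background

probe = "ATCGGCATTACGGTAGCCATAGCTT"
print(round(duplex_energy(probe), 2))
print(round(predicted_signal(probe), 1))
```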
Abstract:
Background. Childhood immunization programs have dramatically reduced the morbidity and mortality associated with vaccine-preventable diseases. Proper documentation of immunizations that have been administered is essential to prevent duplicate immunization of children. To help improve documentation, immunization information systems (IISs) have been developed. IISs are comprehensive repositories of immunization information for children residing within a geographic region. The two models for participation in an IIS are voluntary inclusion, or "opt-in," and voluntary exclusion, or "opt-out." In an opt-in system, consent must be obtained for each participant; conversely, in an opt-out IIS, all children are included unless procedures to exclude the child are completed. Consent requirements for participation vary by state; the Texas IIS, ImmTrac, is an opt-in system. Objectives. The specific objectives are to: (1) Evaluate the variance among the time and costs associated with collecting ImmTrac consent at public and private birthing hospitals in the Greater Houston area; (2) Estimate the total costs associated with collecting ImmTrac consent at selected public and private birthing hospitals in the Greater Houston area; (3) Describe the alternative opt-out process for collecting ImmTrac consent at birth and discuss the associated cost savings relative to an opt-in system. Methods. Existing time-motion studies (n=281) conducted between October 2006 and August 2007 at 8 birthing hospitals in the Greater Houston area were used to assess the time and costs associated with obtaining ImmTrac consent at birth. All data analyzed are deidentified and contain no personal information. Variations in time and costs at each location were assessed, and total costs per child and costs per year were estimated. The cost of an alternative opt-out system was also calculated. Results. The median time required by birth registrars to complete consent procedures varied from 72 to 285 seconds per child. The annual costs associated with obtaining consent for 388,285 newborns in ImmTrac's opt-in consent process were estimated at $702,000. The corresponding costs of the proposed opt-out system were estimated to total $194,000 per year. Conclusions. Substantial variation in the time and costs associated with completion of ImmTrac consent procedures was observed. Changing to an opt-out system for participation could represent significant cost savings.
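A quick back-of-the-envelope check of the reported figures is straightforward; the hourly labor rate below is a placeholder assumption used only to relate the reported seconds-per-child range to a dollar figure.

```python
# Back-of-the-envelope check of the opt-in consent cost figures reported above.
newborns_per_year = 388_285
opt_in_annual_cost = 702_000    # USD, reported
opt_out_annual_cost = 194_000   # USD, reported

cost_per_child = opt_in_annual_cost / newborns_per_year
print(f"Implied opt-in cost per child: ${cost_per_child:.2f}")  # ~ $1.81

# Cross-check against the reported time range, assuming a hypothetical
# fully loaded birth-registrar labor rate (placeholder value).
labor_rate_per_hour = 25.0  # USD, assumed for illustration only
for seconds in (72, 285):
    print(f"{seconds:>3} s per child -> ${seconds / 3600 * labor_rate_per_hour:.2f}")

annual_savings = opt_in_annual_cost - opt_out_annual_cost
print(f"Estimated annual savings of switching to opt-out: ${annual_savings:,}")
```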
Abstract:
The three articles that comprise this dissertation describe how small area estimation and geographic information systems (GIS) technologies can be integrated to provide useful information about the number of uninsured and where they are located. Comprehensive data about the numbers and characteristics of the uninsured are typically only available from surveys. Utilization and administrative data are poor proxies from which to develop this information. Those who cannot access services are unlikely to be fully captured, either by health care provider utilization data or by state and local administrative data. In the absence of direct measures, a well-developed estimation of the local uninsured count or rate can prove valuable when assessing the unmet health service needs of this population. However, the fact that these are “estimates” increases the chances that results will be rejected or, at best, treated with suspicion. The visual impact and spatial analysis capabilities afforded by geographic information systems (GIS) technology can strengthen the likelihood of acceptance of area estimates by those most likely to benefit from the information, including health planners and policy makers. The first article describes how uninsured estimates are currently being performed in the Houston metropolitan region. It details the synthetic model used to calculate numbers and percentages of uninsured, and how the resulting estimates are integrated into a GIS. The second article compares the estimation method of the first article with one currently used by the Texas State Data Center to estimate numbers of uninsured for all Texas counties. Estimates are developed for census tracts in Harris County, using both models with the same data sets. The results are statistically compared. The third article describes a new, revised synthetic method that is being tested to provide uninsured estimates at sub-county levels for eight counties in the Houston metropolitan area. It is being designed to replicate the same categorical results provided by a current U.S. Census Bureau estimation method. The estimates calculated by this revised model are compared to the most recent U.S. Census Bureau estimates, using the same areas and population categories.
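The abstract does not spell out the synthetic model, but the basic synthetic-estimation step it alludes to can be illustrated simply: apply survey-derived uninsured rates for demographic groups to small-area population counts for the same groups, then sum. The groups, rates, and counts below are invented for illustration.

```python
# Illustrative synthetic estimate of the uninsured for one census tract:
# regional survey-derived uninsured rates by demographic group applied to
# tract-level population counts for the same groups. All numbers are placeholders.

uninsured_rate = {            # regional survey estimates by group
    ("18-34", "low_income"): 0.38,
    ("18-34", "higher_income"): 0.15,
    ("35-64", "low_income"): 0.29,
    ("35-64", "higher_income"): 0.09,
}

tract_population = {          # tract counts by the same groups (e.g., from the census)
    ("18-34", "low_income"): 620,
    ("18-34", "higher_income"): 480,
    ("35-64", "low_income"): 540,
    ("35-64", "higher_income"): 910,
}

estimated_uninsured = sum(uninsured_rate[g] * tract_population[g]
                          for g in tract_population)
estimated_rate = estimated_uninsured / sum(tract_population.values())

print(f"Estimated uninsured in tract: {estimated_uninsured:.0f}")
print(f"Estimated uninsured rate:     {estimated_rate:.1%}")
```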
Abstract:
Additive and multiplicative models of relative risk were used to measure the effect of cancer misclassification and DS86 random errors on lifetime risk projections in the Life Span Study (LSS) of Hiroshima and Nagasaki atomic bomb survivors. The true number of cancer deaths in each stratum of the cancer mortality cross-classification was estimated using sufficient statistics from the EM algorithm. Average survivor doses in the strata were corrected for DS86 random error (σ = 0.45) by use of reduction factors. Poisson regression was used to model the corrected and uncorrected mortality rates with covariates for age at time of bombing, age at time of death, and gender. Excess risks were in good agreement with risks in RERF Report 11 (Part 2) and the BEIR-V report. Bias due to DS86 random error typically ranged from −15% to −30% for both sexes and all sites and models. The total bias, including diagnostic misclassification, of excess risk of nonleukemia for exposure to 1 Sv from age 18 to 65 under the non-constant relative projection model was −37.1% for males and −23.3% for females. Total excess risks of leukemia under the relative projection model were biased −27.1% for males and −43.4% for females. Thus, nonleukemia risks for 1 Sv from ages 18 to 85 (DRREF = 2) increased from 1.91%/Sv to 2.68%/Sv among males and from 3.23%/Sv to 4.02%/Sv among females. Leukemia excess risks increased from 0.87%/Sv to 1.10%/Sv among males and from 0.73%/Sv to 1.04%/Sv among females. Bias was dependent on the gender, site, correction method, exposure profile and projection model considered. Future studies that use LSS data for U.S. nuclear workers may be downwardly biased if lifetime risk projections are not adjusted for random and systematic errors. (Supported by U.S. NRC Grant NRC-04-091-02.)
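The study's actual machinery (EM-based misclassification correction, reduction factors, and Poisson regression) is more involved than can be shown here; the toy Monte Carlo below only illustrates the direction of the dose-error effect: multiplicative random error in dose attenuates the fitted risk per unit dose toward the null, which is why error-corrected risks come out higher. All distributions and parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 200_000
true_dose = rng.gamma(shape=1.2, scale=0.4, size=n)  # illustrative dose distribution (Sv)
sigma = 0.45
# Multiplicative lognormal error with mean 1 and log-scale sd ~ sigma.
obs_dose = true_dose * rng.lognormal(mean=-sigma**2 / 2, sigma=sigma, size=n)

# Illustrative linear excess-risk outcome: risk = baseline * (1 + beta * dose) + noise.
baseline, beta = 1.0, 0.5
risk = baseline * (1.0 + beta * true_dose) + rng.normal(0.0, 0.2, size=n)

slope_true = np.polyfit(true_dose, risk, 1)[0]
slope_obs = np.polyfit(obs_dose, risk, 1)[0]

print(f"slope vs. true dose:     {slope_true:.3f}")  # ~ baseline * beta
print(f"slope vs. observed dose: {slope_obs:.3f}")   # attenuated toward zero
print(f"attenuation factor:      {slope_obs / slope_true:.2f}")
```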
Abstract:
In light of the new healthcare regulations, hospitals are increasingly reevaluating their IT integration strategies to meet expanded healthcare information exchange requirements. Nevertheless, hospital executives do not have all the information they need to differentiate between the available strategies and recognize what may better fit their organizational needs. In the interest of providing the desired information, this study explored the relationships between hospital financial performance, integration strategy selection, and strategy change. The integration strategies examined – applied as binary logistic regression dependent variables and in the order from most to least integrated – were Single-Vendor (SV), Best-of-Suite (BoS), and Best-of-Breed (BoB). In addition, the financial measurements adopted as independent variables for the models were two administrative labor efficiency ratios and six industry-standard financial ratios designed to provide a broad proxy of hospital financial performance. Furthermore, descriptive statistical analyses were carried out to evaluate recent trends in hospital integration strategy change. Overall, six research questions were proposed for this study. The first research question sought to answer whether financial performance was related to the selection of integration strategies. The next questions explored whether hospitals were more likely to change strategies or remain the same when there was no external stimulus to change, and whether, if they did change, they preferred strategies closer to their existing ones. These were followed by a question that inquired whether financial performance was also related to strategy change. Finally, the last two questions probed whether the new Health Information Technology for Economic and Clinical Health (HITECH) Act had any impact on the frequency and direction of strategy change. The results confirmed that financial performance is related to both IT integration strategy selection and strategy change, and concurred with prior studies that suggested hospital and environmental characteristics are associated factors as well. Specifically, this study noted that the most integrated SV strategy is related to increased administrative labor efficiency and the hybrid BoS strategy is associated with improved financial health (based on operating margin and equity financing ratios). On the other hand, no financial indicators were found to be related to the least integrated BoB strategy, except for short-term liquidity (current ratio) when involving strategy change. Ultimately, this study concluded that when making IT integration strategy decisions, hospitals closely follow the resource dependence view of minimizing uncertainty. As each integration strategy may favor certain organizational characteristics, hospitals traditionally preferred not to make strategy changes, and when they did, they selected strategies that were more closely related to the existing ones. However, as new regulations further heighten revenue uncertainty while requiring increased information integration, and as evidence already suggests a growing trend of organizations shifting towards more integrated strategies, hospitals may be more limited in their strategy selection choices going forward.
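The abstract names binary logistic regression with financial ratios as predictors of integration-strategy choice. A minimal sketch of that setup, using fabricated data and simplified variable names rather than the study's dataset or full covariate set, might look like this:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 300

# Fabricated hospital-level data: two financial ratios as predictors
# (stand-ins for the study's efficiency and financial-health measures).
df = pd.DataFrame({
    "operating_margin": rng.normal(0.03, 0.05, n),
    "current_ratio":    rng.normal(2.0, 0.6, n),
})
# Fabricated outcome: 1 = hospital uses the Single-Vendor (SV) strategy.
logit_true = -1.0 + 8.0 * df["operating_margin"] + 0.3 * df["current_ratio"]
df["sv_strategy"] = rng.binomial(1, 1 / (1 + np.exp(-logit_true)))

X = sm.add_constant(df[["operating_margin", "current_ratio"]])
model = sm.Logit(df["sv_strategy"], X).fit(disp=False)
print(model.summary2().tables[1])  # coefficients, standard errors, p-values
print(np.exp(model.params))        # odds ratios per unit change in each ratio
```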
Abstract:
Article published in the journal Int Fam Plan Perspect. 2003 Sep;29(3):112-20.
Abstract:
We introduce two probabilistic, data-driven models that predict a ship's speed and the situations where a ship is likely to get stuck in ice based on the joint effect of ice features such as the thickness and concentration of level ice, ice ridges, rafted ice, and ice compression. To develop the models, two datasets were utilized. First, data from the Automatic Identification System about the performance of a selected ship were used. Second, a numerical ice model, HELMI, developed at the Finnish Meteorological Institute, provided information about the ice field. The relations between the ice conditions and ship movements were established using Bayesian learning algorithms. The case study presented in this paper considers a single and unassisted trip of an ice-strengthened bulk carrier between two Finnish ports in the presence of challenging ice conditions, which varied in time and space. The obtained results show good predictive power of the models: on average, 80% accuracy for predicting the ship's speed within specified bins, and above 90% for predicting cases where a ship may get stuck in ice. We expect this new approach to facilitate safe and effective route selection in ice-covered waters, where ship performance is reflected in the objective function.
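The abstract does not reproduce the learned Bayesian models themselves. As a minimal, hedged illustration of the discretized-prediction idea (estimating the distribution of a speed bin conditional on ice conditions from past observations), consider the following sketch with made-up data:

```python
from collections import Counter, defaultdict

# Made-up past observations: (level-ice thickness bin, ridged-ice bin) -> observed speed bin.
observations = [
    (("thin",  "none"),  "fast"), (("thin",  "none"),  "fast"),
    (("thin",  "light"), "fast"), (("thin",  "light"), "medium"),
    (("thick", "light"), "medium"), (("thick", "light"), "slow"),
    (("thick", "heavy"), "slow"), (("thick", "heavy"), "stuck"),
    (("thick", "heavy"), "stuck"),
]

# Empirical conditional distribution P(speed bin | ice condition).
counts = defaultdict(Counter)
for condition, speed_bin in observations:
    counts[condition][speed_bin] += 1

def predict(condition):
    """Return P(speed bin | condition) and the most probable bin."""
    c = counts[condition]
    total = sum(c.values())
    dist = {bin_: n / total for bin_, n in c.items()}
    return dist, max(dist, key=dist.get)

dist, best = predict(("thick", "heavy"))
print(dist)   # e.g. {'slow': 0.33, 'stuck': 0.67}
print(best)   # 'stuck'
```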
Abstract:
Fluid flow through the axial hydrothermal system at fast spreading ridges is investigated using the Sr-isotopic composition of upper crustal samples recovered from a tectonic window at Pito Deep (NE Easter microplate). Samples from the sheeted dike complex collected away from macroscopic evidence of channelized fluid flow, such as faults and centimeter-scale hydrothermal veins, show a range of 87Sr/86Sr from 0.7025 to 0.7030, averaging 0.70276, relative to a protolith with 87Sr/86Sr of ~0.7024. There is no systematic variation in 87Sr/86Sr with depth in the sheeted dike complex. Comparison of these new data with the only two other localities for which similar data sets exist (ODP Hole 504B and the Hess Deep tectonic window) reveals that the extent of Sr-isotope exchange is similar in all of these locations. Models that assume that fluid-rock reaction occurs during one-dimensional (recharge) flow lead to significant decreases in the predicted extent of isotopic modification of the rock with depth in the crust. These model results show systematic misfits when compared with the data that can only be avoided if the fluid flow is assumed to be focused in isolated channels with very slow fluid-rock exchange. In this scenario the fluid at the base of the crust is little modified in 87Sr/86Sr from seawater and thus unlike vent fluids. Additionally, this model predicts that some rocks should show no change from the fresh-rock 87Sr/86Sr, but this is not observed. Alternatively, models in which fluid-rock reaction occurs during upflow (discharge) as well as downflow, or in which fluids are recirculated within the hydrothermal system, can reproduce the observed lack of variation in 87Sr/86Sr with depth in the crust. Minimum time-integrated fluid fluxes, calculated from mass balance, are between 1.5 × 10^6 and 2.6 × 10^6 kg/m^2 for all areas studied to date. However, new evidence from both the rocks and a compilation of vent fluid compositions demonstrates that some Sr is leached from the crust. Because this leaching lowers the fluid 87Sr/86Sr without changing the rock 87Sr/86Sr, these mass balance models must underestimate the time-integrated fluid flux. Additionally, these values do not account for fluid flow that is channelized within the crust.
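The abstract reports mass-balance-derived minimum time-integrated fluid fluxes but not the formula used. One common closed-system formulation, assumed here rather than taken from the paper, balances the Sr-isotope shift of the dike column against Sr delivered by seawater-derived fluid that exits equilibrated with the altered rock. All inputs below are illustrative round numbers in the range of the values quoted above.

```python
# Hedged sketch: closed-system Sr mass balance for a time-integrated fluid flux.
# Assumes the fluid enters with a seawater 87Sr/86Sr ratio and leaves equilibrated
# with the altered rock; this is one common formulation, not necessarily the exact
# scheme used in the study. All input values are illustrative.

rho_rock = 2900.0    # kg/m^3, sheeted-dike density (illustrative)
thickness = 1200.0   # m of sheeted dikes through which the fluid reacts (illustrative)
sr_rock = 110e-6     # kg Sr per kg rock (~110 ppm, illustrative)
sr_fluid = 8e-6      # kg Sr per kg fluid (~seawater, illustrative)

r_rock_initial = 0.7024   # fresh-rock protolith (from the abstract)
r_rock_final   = 0.70276  # average altered dike value (from the abstract)
r_seawater     = 0.7091   # modern seawater (illustrative)

# Sr-isotope shift gained by the rock column must be supplied by the fluid.
flux = (rho_rock * thickness * sr_rock * (r_rock_final - r_rock_initial)
        / (sr_fluid * (r_seawater - r_rock_final)))

print(f"Time-integrated fluid flux ~ {flux:.2e} kg/m^2")  # order of 10^6 kg/m^2
```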
Abstract:
Studies on the impact of historical, current and future global change require very high-resolution climate data (≤1 km) as a basis for modelled responses, meaning that data from digital climate models generally require substantial rescaling. Another shortcoming of available datasets on past climate is that the effects of sea level rise and fall are not considered. Without such information, studies of glacial refugia or of early Holocene plant and animal migration are incomplete if not impossible. Sea level at the last glacial maximum (LGM) was approximately 125 m lower, creating substantial additional terrestrial area for which no current baseline data exist. Here, we introduce the development of a novel, gridded climate dataset for the LGM that is both very high resolution (1 km) and extends to the LGM sea and land mask. We developed two methods to extend current terrestrial precipitation and temperature data to areas between the current and LGM coastlines. The absolute interpolation error is less than 1 °C for 98.9% and less than 0.5 °C for 87.8% of all pixels within the first two 1-arc-degree distance zones. We use the change factor method with these newly assembled baseline data to downscale five global circulation models of LGM climate to a resolution of 1 km for Europe. As additional variables we calculate 19 'bioclimatic' variables, which are often used in climate change impact studies on biological diversity. The new LGM climate maps are well suited for analysing refugia and migration during Holocene warming following the LGM.
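The change-factor (delta) downscaling step described here reduces to a few lines: compute the coarse GCM anomaly (LGM minus present-day control), interpolate it to the high-resolution grid, and add it to the high-resolution present-day baseline. The grids below are tiny placeholders standing in for real data.

```python
import numpy as np
from scipy.ndimage import zoom

# Tiny placeholder grids standing in for real data.
# High-resolution present-day baseline temperature (e.g., ~1 km grid), in deg C.
baseline_highres = np.array([[10.0, 9.5, 9.0, 8.5],
                             [11.0, 10.5, 10.0, 9.5],
                             [12.0, 11.5, 11.0, 10.5],
                             [13.0, 12.5, 12.0, 11.5]])

# Coarse GCM fields (e.g., ~100 km grid) for the LGM and the present-day control run.
gcm_lgm     = np.array([[-2.0, -3.0],
                        [-1.0, -2.0]])
gcm_control = np.array([[ 8.0,  7.0],
                        [ 9.0,  8.0]])

# Change factor (anomaly), interpolated from the coarse GCM grid to the fine grid.
anomaly_coarse = gcm_lgm - gcm_control                # LGM minus present, per GCM cell
anomaly_fine = zoom(anomaly_coarse, zoom=2, order=1)  # resample to the 4x4 fine grid

# Downscaled LGM temperature = high-res baseline + interpolated coarse anomaly.
lgm_highres = baseline_highres + anomaly_fine
print(lgm_highres)
```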