993 results for Religions (Proposed, universal, etc.)


Relevance: 20.00%

Abstract:

Despite universal access entitlements to the public healthcare system in Ireland, over half the population is covered by voluntary private health insurance. The market operates on the basis of community rating, open enrolment and lifetime cover. A set of minimum benefits also exists, and two risk equalisation schemes have been put in place but neither was implemented. These schemes have proved highly controversial. To date, the debate has primarily consisted of qualitative arguments. This study adds a quantitative element by analysing a number of pertinent issues. A model of a community rated insurance market is developed, which shows that community rating can only be maintained in a competitive market if all insurers in the market have the same risk profile as the market overall. This has relevance to the Irish market in the aftermath of a Supreme Court decision to set aside risk equalisation. Two reasons why insurers’ risk profiles might differ are adverse selection and risk selection. Evidence is found of the existence of both forms of selection in the Irish market. A move from single rate community rating to lifetime community rating in Australia had significant consequences for take-up rates and the age profile of the insured population. A similar move has been proposed in Ireland. It is found that, although this might improve the stability of community rating in the short term, it would not negate the need for risk equalisation. If community rating were to collapse then risk rating might result. A comparison of the Irish, Australian and UK health insurance markets suggests that community rating encourages higher take-up among older consumers than risk rating. Analysis of Irish hospital discharge figures suggests that this yields significant savings for the Irish public healthcare system. This thesis has implications for government policy towards private health insurance in Ireland.
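
Illustrative note: the claim above, that community rating survives in a competitive market only if every insurer carries the market-average risk profile, can be seen in a toy calculation. The sketch below is not the thesis's model; the age bands, expected claim costs and membership mixes are invented purely for illustration.

    # Toy illustration (not the thesis's model): why community rating needs
    # every insurer to hold roughly the market-average risk profile.
    # Age bands, expected annual claim costs and membership mixes are invented.

    expected_claims = {"18-39": 400.0, "40-64": 900.0, "65+": 2500.0}

    market_mix = {"18-39": 0.50, "40-64": 0.35, "65+": 0.15}   # whole market
    young_book = {"18-39": 0.70, "40-64": 0.25, "65+": 0.05}   # risk-selected insurer
    older_book = {"18-39": 0.30, "40-64": 0.45, "65+": 0.25}   # legacy insurer

    def avg_cost(mix):
        """Expected claim cost per member for a given age mix."""
        return sum(share * expected_claims[band] for band, share in mix.items())

    # Under community rating every insurer charges the same premium, which
    # competition pushes towards the market-average cost per member.
    community_premium = avg_cost(market_mix)

    for name, mix in [("young-profile insurer", young_book),
                      ("older-profile insurer", older_book)]:
        margin = community_premium - avg_cost(mix)
        print(f"{name}: cost {avg_cost(mix):7.2f}, margin {margin:+7.2f} per member")

    # The insurer with the older book loses money at the community premium and
    # must raise prices or exit -- unless a risk equalisation transfer moves the
    # surplus earned on the younger book across to it.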

Relevance: 20.00%

Abstract:

This thesis concentrates on the historical aspects of the elitist field sports of deer stalking and game shooting, as practiced by four Irish landed ascendancy families in the south west of Ireland. Four great estates were selected for study. Two of these were, by Irish standards, very large: the Kenmare estate of over 136,000 acres in the ownership of the Roman Catholic Earls of Kenmare, and the Herbert estate of over 44,000 acres in the ownership of the Protestant Herbert family. The other two were, in relative terms, small: the Grehan estate of c.7,500 acres in the ownership of the Roman Catholic Grehan family, and the Godfrey estate of c.5,000 acres, in the ownership of the Protestant Barons Godfrey. This mixture of contrasting estate sizes, owners' religions and rank (nobleman, minor aristocrat and untitled gentry) should, it is argued, reveal a diversity in the field sports and lifestyles of their owners, and go some way towards assessing the contributions, good or bad, they have bequeathed to modern Ireland. Equally, it should help in assessing what importance, if any, was attached to hunting. In this context, hunting is here used in its broadest meaning, and includes deer stalking and game shooting, as well as hunting with dogs and hounds on foot and horseback. Where a specific type of hunting is involved, it is so described; for example, fox hunting, stag hunting, hare hunting. Similarly, the term game is sometimes used in sporting literature to encompass all species of quarry killed, and can include deer, ground game (hares and rabbits), waterfowl, and various species of game birds. Where it refers to specific species, these are so described; for example grouse, pheasants, woodcock, wild duck, etc. Since two of these estates - the Kenmare and Herbert - each created a deer forest, unique in mid-19th century Ireland, they form the core study estates; the two smaller estates serve as comparative studies. Moreover, as these two larger estates held the only remnant population of native Irish red deer, the survival of that herd itself forms a concomitant core area of analysis. The numerical descriptions applied to these animals in popular literature are critically reassessed against primary-source historical evidence, as are the so-called deer forest 'clearances'. The core period, 1840 to 1970, spanning 130 years from the creation of the deer forests to the fundamental change in policy and administration introduced by the state, is selected for study. Comparison is made with similar estates elsewhere, in Britain and especially in Scotland. Their influence on the Irish methods and style of hunting is historically examined.

Relevance: 20.00%

Abstract:

Semiconductor nanowires, particularly group 14 semiconductor nanowires, have been the subject of intensive research in the recent past. They have been demonstrated to provide an effective, versatile route towards the continued miniaturisation and improvement of microelectronics. This thesis aims to highlight some novel ways of fabricating and controlling various aspects of the growth of Si and Ge nanowires. Chapter 1 highlights the primary technique used for the growth of nanowires in this study, namely supercritical fluid (SCF) growth reactions. The advantages (and disadvantages) of this technique for the growth of Si and Ge nanowires are highlighted, citing numerous examples from the past ten years. The many variables involved in this technique are discussed, along with the resultant characteristics of the nanowires produced (diameter, doping, orientation, etc.). Chapter 2 outlines the experimental methodologies used in this thesis. The analytical techniques used for the structural characterisation of the nanowires produced are described, as well as the techniques used for the chemical analysis of various surface terminations. Chapter 3 describes the controlled self-seeded growth of highly crystalline Ge nanowires, in the absence of conventional metal seed catalysts, using a variety of oligosilylgermane precursors and mixtures of germane and silane compounds. A model is presented which describes the main stages of self-seeded Ge nanowire growth (nucleation, coalescence and Ostwald ripening) from the oligosilylgermane precursors, and, in conjunction with TEM analysis, a mechanism of growth is proposed. Chapter 4 introduces the metal-assisted etching (MAE) of Si substrates to produce Si nanowires. A single-step MAE process, utilising metal ion-containing HF solutions in the absence of an external oxidant, was developed to generate heterostructured Si nanowires with controllable porous (isotropically etched) and non-porous (anisotropically etched) segments. In Chapter 5 the bottom-up growth of Ge nanowires, similar to that described in Chapter 3, and the top-down etching of Si, described in Chapter 4, are combined. The introduction of an MAE processing step to “sink” the Ag seeds into the growth substrate, prior to nanowire growth, is shown to dramatically decrease the mean nanowire diameters and to narrow the diameter distributions. Finally, in Chapter 6, the biotin–streptavidin interaction was explored for the purpose of developing a novel Si junctionless nanowire transistor (JNT) sensor.

Relevance: 20.00%

Abstract:

Error correcting codes are combinatorial objects designed to enable reliable transmission of digital data over noisy channels. They are used ubiquitously in communication, data storage, etc. Error correction allows reconstruction of the original data from the received word. Classical decoding algorithms are constrained to output just one codeword. However, in the late 1950s researchers proposed a relaxed error correction model for potentially large error rates, known as list decoding. The research presented in this thesis focuses on reducing the computational effort and enhancing the efficiency of decoding algorithms for several codes, from both algorithmic and architectural standpoints. The codes under consideration are linear block codes closely related to Reed-Solomon (RS) codes. A high-speed, low-complexity algorithm and architecture are presented for encoding and decoding RS codes based on evaluation. The implementation results show that the hardware resources and the total execution time are significantly reduced compared to the classical decoder. The evaluation-based encoding and decoding schemes are modified and extended for shortened RS codes, and a software implementation shows a substantial reduction in memory footprint at the expense of latency. Hermitian codes can be seen as concatenated RS codes and are much longer than RS codes over the same alphabet. A fast, novel and efficient VLSI architecture for Hermitian codes is proposed based on interpolation decoding. The proposed architecture is shown to perform better than Kötter’s decoder for high-rate codes. The thesis also explores a method of constructing optimal codes by computing the subfield subcodes of Generalized Toric (GT) codes, a natural extension of RS codes to several dimensions. The polynomial generators, or evaluation polynomials, for subfield subcodes of GT codes are identified, from which the dimension and a bound on the minimum distance are computed. The algebraic structure of the polynomials that evaluate to the subfield is used to simplify the list decoding algorithm for BCH codes. Finally, an efficient and novel approach is proposed for exploiting powerful codes that have complex decoding but a simple encoding scheme (comparable to that of RS codes) for multihop wireless sensor network (WSN) applications.
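
Illustrative note: the "encoding based on evaluation" mentioned above treats a Reed-Solomon codeword as the evaluations of the message polynomial at n distinct field elements. The minimal sketch below works over the prime field GF(929) purely for convenience; practical RS codecs normally work over GF(2^m), and the decoding side is omitted entirely.

    # Minimal sketch of evaluation-based Reed-Solomon encoding over a prime
    # field GF(p). The parameters (p = 929, n = 8, k = 4) are illustrative;
    # hardware RS codecs normally use GF(2^m) and of course include a decoder.

    p = 929          # field size (prime, so arithmetic is simply mod p)
    n, k = 8, 4      # codeword length and message length, with k <= n <= p

    def rs_encode(message, n=n, p=p):
        """Encode k message symbols as the evaluations of the degree < k
        message polynomial at the points 1, 2, ..., n."""
        def poly_eval(coeffs, x):
            acc = 0
            for c in reversed(coeffs):          # Horner's rule, mod p
                acc = (acc * x + c) % p
            return acc
        return [poly_eval(message, x) for x in range(1, n + 1)]

    msg = [3, 14, 15, 9]                        # k symbols in GF(929)
    print(rs_encode(msg))

    # Any k of the n evaluations determine the degree < k polynomial uniquely
    # (Lagrange interpolation), which is why a suitable decoder can correct up
    # to n - k erasures or (n - k) // 2 symbol errors.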

Relevance: 20.00%

Abstract:

A wireless sensor network can become partitioned due to node failure, requiring the deployment of additional relay nodes in order to restore network connectivity. This introduces an optimisation problem involving a tradeoff between the number of additional nodes that are required and the costs of moving through the sensor field for the purpose of node placement. This tradeoff is application-dependent, influenced for example by the relative urgency of network restoration. In addition, minimising the number of relay nodes might lead to long routing paths to the sink, which may cause problems of data latency. Data latency is extremely important in wireless sensor network applications such as battlefield surveillance, intrusion detection, disaster rescue and highway traffic coordination, where real-time constraints must not be violated. Therefore, we also consider the problem of deploying multiple sinks in order to improve network performance. Previous research has addressed only parts of this problem, in isolation, and has not properly considered the problems of moving through a constrained environment, of discovering changes to that environment during the repair, or of network quality after the restoration. In this thesis, we first consider a base problem in which we assume the exploration tasks have already been completed, so that our aim is to optimise the use of resources in the static, fully observed problem. In the real world, we would not know the radio and physical environments after damage, and this creates a dynamic problem in which damage must be discovered. We therefore extend to the dynamic problem, in which the network repair problem considers both exploration and restoration. We then add a hop-count constraint for network quality, requiring that the desired locations can reach a sink within a hop-count limit after the network is restored. For each variant of the network repair problem, we propose different solutions (heuristics and/or complete algorithms) which prioritise different objectives. We evaluate our solutions by simulation, assessing the quality of solutions (node cost, movement cost, computation time, and total restoration time) while varying the problem types and the capability of the agent that makes the repair. We show that the relative importance of the objectives influences the choice of algorithm, and that different speeds of movement for the repairing agent have a significant impact on performance and must be taken into account when selecting the algorithm. In particular, the node-based approaches are best for node cost, and the path-based approaches are best for movement cost. For total restoration time, the node-based approaches are best with a fast-moving agent, while the path-based approaches are best with a slow-moving agent; for an agent of intermediate speed, the two families of approaches are almost evenly matched.
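
Illustrative note: the trade-off between relay-node count and movement cost is easiest to see on a toy instance. The sketch below is not one of the thesis's algorithms; it simply drops equally spaced relays along the straight line from an isolated sensor to the sink, with coordinates, radio range and costs invented for illustration.

    # Toy illustration (not an algorithm from the thesis): restoring
    # connectivity by dropping relays along the straight line from an isolated
    # sensor to the sink. Positions, radio range and costs are invented.
    import math

    RADIO_RANGE = 10.0                 # maximum link length in metres
    sink = (0.0, 0.0)
    isolated_sensor = (0.0, 47.0)      # cut off from the sink by node failures

    def relays_on_line(a, b, radio_range=RADIO_RANGE):
        """Minimum number of equally spaced relays so every hop from a to b
        is at most radio_range long."""
        hops = math.ceil(math.dist(a, b) / radio_range)
        return [(a[0] + (b[0] - a[0]) * i / hops,
                 a[1] + (b[1] - a[1]) * i / hops)
                for i in range(1, hops)]

    relays = relays_on_line(isolated_sensor, sink)
    node_cost = len(relays)                           # relays consumed
    movement_cost = math.dist(isolated_sensor, sink)  # agent walks the line

    print(f"relays used: {node_cost}, agent travel: {movement_cost:.1f} m")
    print(f"hop count sensor -> sink: {node_cost + 1}")

    # A node-minimising strategy may send the agent through terrain that is
    # slow to traverse, while a path-minimising strategy may spend extra relays
    # to keep the agent's tour short -- the trade-off studied in the thesis.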

Relevance: 20.00%

Abstract:

The wonder of the last century has been the rapid development of technology. One of the sectors it has touched most profoundly is the electronics industry, where development has been exponential and scientists continue to push new horizons; individuals from all strata of society have become increasingly dependent on technology. Atomic Layer Deposition (ALD) is a unique technique for growing thin films and is widely used in the semiconductor industry. Films as thin as a few nanometers can be deposited using this technique. Although the process has been explored for a variety of oxides, sulphides and nitrides, a proper method for the deposition of many metals is missing. Metals are often used in the semiconductor industry and hence are of significant importance. A deficiency in understanding the basic chemistry of the possible reactions at the nanoscale has delayed progress in metal ALD. In this thesis, we study the intrinsic chemistry involved in Cu ALD. The work reports a computational study using Density Functional Theory as implemented in the TURBOMOLE program. In most cases, both gas-phase and surface reactions are studied. The merits and demerits of a promising transmetallation reaction are evaluated at the beginning of the study, and further improvements to the structure of the precursors and co-reagent are proposed. This has led to the proposal of metallocenes as co-reagents and Cu(I) carbene compounds as a new set of precursors. A three-step process for Cu ALD that generates a ligand-free Cu layer after every ALD pulse has also been studied. Although the chemistry has been studied under the umbrella of Cu ALD, the basic principles hold true for the ALD of other metals (e.g. Co, Ni, Fe) and also for other areas, such as thin-film deposition techniques other than ALD, electrochemical reactions, etc.

Relevance: 20.00%

Abstract:

This paper describes a methodology for detecting anomalies from sequentially observed and potentially noisy data. The proposed approach consists of two main elements: 1) filtering, or assigning a belief or likelihood to each successive measurement based upon our ability to predict it from previous noisy observations, and 2) hedging, or flagging potential anomalies by comparing the current belief against a time-varying and data-adaptive threshold. The threshold is adjusted based on the available feedback from an end user. Our algorithms, which combine universal prediction with recent work on online convex programming, do not require computing posterior distributions given all current observations and involve simple primal-dual parameter updates. At the heart of the proposed approach lie exponential-family models which can be used in a wide variety of contexts and applications, and which yield methods that achieve sublinear per-round regret against both static and slowly varying product distributions with marginals drawn from the same exponential family. Moreover, the regret against static distributions coincides with the minimax value of the corresponding online strongly convex game. We also prove bounds on the number of mistakes made during the hedging step relative to the best offline choice of the threshold with access to all estimated beliefs and feedback signals. We validate the theory on synthetic data drawn from a time-varying distribution over binary vectors of high dimensionality, as well as on the Enron email dataset.
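
Illustrative note: the filter-then-hedge structure described above can be caricatured in a few lines. The sketch below is not the paper's algorithm (which relies on exponential-family universal prediction and online convex programming); it uses a running Gaussian-style belief and a threshold nudged by stand-in user feedback, with all constants invented.

    # Toy caricature of the filter-then-hedge structure (not the paper's
    # algorithm): a running mean/variance gives each observation a belief,
    # and the flagging threshold is nudged whenever feedback arrives.
    import math
    import random

    mu, var = 0.0, 1.0          # running predictive model of the stream
    threshold = -4.0            # log-likelihood threshold for flagging
    eta_model, eta_thresh = 0.05, 0.1   # invented learning rates

    def log_likelihood(x, mu, var):
        return -0.5 * math.log(2 * math.pi * var) - (x - mu) ** 2 / (2 * var)

    random.seed(0)
    for t in range(200):
        x = random.gauss(0, 1) if t != 120 else 8.0   # one planted anomaly

        belief = log_likelihood(x, mu, var)   # 1) filtering step
        if belief < threshold:                # 2) hedging step
            print(f"t={t}: flagged x={x:.2f} (belief {belief:.2f})")
            feedback = 1 if abs(x) > 4 else -1        # stand-in for user feedback
            # a confirmed anomaly raises the threshold (more sensitive);
            # a false alarm lowers it (fewer future flags)
            threshold += eta_thresh * feedback

        mu += eta_model * (x - mu)                    # update the model from
        var += eta_model * ((x - mu) ** 2 - var)      # the noisy observation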

Relevance: 20.00%

Abstract:

BACKGROUND: Scale-invariant neuronal avalanches have been observed in cell cultures and slices as well as anesthetized and awake brains, suggesting that the brain operates near criticality, i.e. within a narrow margin between avalanche propagation and extinction. In theory, criticality provides many desirable features for the behaving brain, optimizing computational capabilities, information transmission, sensitivity to sensory stimuli and size of memory repertoires. However, a thorough characterization of neuronal avalanches in freely-behaving (FB) animals is still missing, thus raising doubts about their relevance for brain function. METHODOLOGY/PRINCIPAL FINDINGS: To address this issue, we employed chronically implanted multielectrode arrays (MEA) to record avalanches of action potentials (spikes) from the cerebral cortex and hippocampus of 14 rats, as they spontaneously traversed the wake-sleep cycle, explored novel objects or were subjected to anesthesia (AN). We then modeled spike avalanches to evaluate the impact of sparse MEA sampling on their statistics. We found that the size distributions of spike avalanches are well fitted by lognormal distributions in FB animals, and by truncated power laws in the AN group. FB data surrogation markedly decreases the tail of the distribution, i.e. spike shuffling destroys the largest avalanches. The FB data are also characterized by multiple key features compatible with criticality in the temporal domain, such as 1/f spectra and long-term correlations as measured by detrended fluctuation analysis. These signatures are very stable across waking, slow-wave sleep and rapid-eye-movement sleep, but collapse during anesthesia. Likewise, waiting time distributions obey a single scaling function during all natural behavioral states, but not during anesthesia. Results are equivalent for neuronal ensembles recorded from visual and tactile areas of the cerebral cortex, as well as the hippocampus. CONCLUSIONS/SIGNIFICANCE: Altogether, the data provide a comprehensive link between behavior and brain criticality, revealing a unique scale-invariant regime of spike avalanches across all major behaviors.
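
Illustrative note: the avalanche-size and surrogate analyses summarised above can be mimicked on synthetic data. The sketch below is not the study's pipeline; the bursty spike-count model, bin size and parameters are invented, and it only demonstrates that shuffling spike times destroys the largest avalanches.

    # Toy sketch of the avalanche-size analysis (not the study's pipeline):
    # avalanches are runs of consecutive non-empty time bins, and shuffling
    # the spike counts destroys the largest avalanches. All parameters invented.
    import random

    random.seed(1)
    n_bins, rate, counts = 20000, 0.2, []
    for _ in range(n_bins):
        if random.random() < 0.01:                   # slowly switching state
            rate = random.choice([0.05, 0.3, 3.0])
        counts.append(sum(random.random() < rate / 5 for _ in range(5)))

    def avalanche_sizes(counts):
        """Total spikes in each run of consecutive non-empty bins."""
        sizes, current = [], 0
        for c in counts:
            if c > 0:
                current += c
            elif current:
                sizes.append(current)
                current = 0
        if current:
            sizes.append(current)
        return sizes

    original = avalanche_sizes(counts)
    surrogate_counts = counts[:]                     # same counts, random order
    random.shuffle(surrogate_counts)
    surrogate = avalanche_sizes(surrogate_counts)

    print("largest avalanche, original :", max(original))
    print("largest avalanche, surrogate:", max(surrogate))
    # Shuffling removes the temporal clustering, so the heavy tail of large
    # avalanches collapses, as reported above for the freely-behaving data.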

Relevance: 20.00%

Abstract:

A Fermi gas of atoms with resonant interactions is predicted to obey universal hydrodynamics, in which the shear viscosity and other transport coefficients are universal functions of the density and temperature. At low temperatures, the viscosity has a universal quantum scale ħn, where n is the density and ħ is Planck's constant h divided by 2π, whereas at high temperatures the natural scale is p_T^3/ħ^2, where p_T is the thermal momentum. We used breathing mode damping to measure the shear viscosity at low temperature. At high temperature T, we used anisotropic expansion of the cloud to find the viscosity, which exhibits precise T^(3/2) scaling. In both experiments, universal hydrodynamic equations including friction and heating were used to extract the viscosity. We estimate the ratio of the shear viscosity to the entropy density and compare it with that of a perfect fluid.
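
Illustrative note: the two viscosity scales quoted above, ħn at low temperature and p_T^3/ħ^2 at high temperature, can be evaluated numerically. The density, temperature and the precise definition of the thermal momentum used below are illustrative assumptions for a trapped 6Li gas, not the paper's measured values.

    # Back-of-envelope evaluation of the two viscosity scales quoted above,
    # using illustrative numbers for a trapped 6Li gas (not the paper's data).
    import math

    hbar = 1.054571817e-34          # J s
    kB = 1.380649e-23               # J / K
    m = 6.015 * 1.660539e-27        # kg, mass of a 6Li atom

    n = 1.0e18                      # atoms per m^3, a typical trapped density
    T = 1.0e-6                      # K, a microkelvin-scale temperature

    low_T_scale = hbar * n                        # "quantum" viscosity scale
    p_T = math.sqrt(2 * math.pi * m * kB * T)     # thermal momentum (one common
                                                  # convention; prefactors vary)
    high_T_scale = p_T ** 3 / hbar ** 2

    print(f"hbar * n       = {low_T_scale:.3e} Pa s")
    print(f"p_T^3 / hbar^2 = {high_T_scale:.3e} Pa s")

    # Both scales land many orders of magnitude below the ~1e-3 Pa s of water,
    # which is why collective-mode damping and cloud expansion, rather than any
    # conventional viscometry, are used to extract the shear viscosity.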

Relevance: 20.00%

Abstract:

Population introduction is an important tool for ecosystem restoration. However, before introductions should be conducted, it is important to evaluate the genetic, phenotypic and ecological suitability of possible replacement populations. Careful genetic analysis is particularly important if it is suspected that the extirpated population was unique or genetically divergent. On the island of Martha's Vineyard, Massachusetts, the introduction of greater prairie chickens (Tympanuchus cupido pinnatus) to replace the extinct heath hen (T. cupido cupido) is being considered as part of an ecosystem restoration project. Martha's Vineyard was home to the last remaining heath hen population until its extinction in 1932. We conducted this study to aid in determining the suitability of greater prairie chickens as a possible replacement for the heath hen. We examined mitochondrial control region sequences from extant populations of all prairie grouse species (Tympanuchus) and from museum skin heath hen specimens. Our data suggest that the Martha's Vineyard heath hen population represents a divergent mitochondrial lineage. This result is attributable either to a long period of geographical isolation from other prairie grouse populations or to a population bottleneck resulting from human disturbance. The mtDNA diagnosability of the heath hen contrasts with the network of mtDNA haplotypes of other prairie grouse (T. cupido attwateri, T. pallidicinctus and T. phasianellus), which do not form distinguishable mtDNA groupings. Our findings suggest that the Martha's Vineyard heath hen was more genetically isolated than are current populations of prairie grouse and place the emphasis for future research on examining prairie grouse adaptations to different habitat types to assess ecological exchangeability between heath hens and greater prairie chickens.

Relevance: 20.00%

Abstract:

With an ever increasing number of people taking numerous medications, the need to safely administer drugs and limit unintended side effects has never been greater. Antidote control remains the most direct means to counteract acute side effects of drugs, but, unfortunately, it has been challenging and cost prohibitive to generate antidotes for most therapeutic agents. Here we describe the development of a set of antidote molecules that are capable of counteracting the effects of an entire class of therapeutic agents based upon aptamers. These universal antidotes exploit the fact that, when systemically administered, aptamers are the only free extracellular oligonucleotides found in circulation. We show that protein- and polymer-based molecules that capture oligonucleotides can reverse the activity of several aptamers in vitro and counteract aptamer activity in vivo. The availability of universal antidotes to control the activity of any aptamer suggests that aptamers may be a particularly safe class of therapeutics.

Relevance: 20.00%

Abstract:

India has a compelling need and keen aspirations for indigenous clinical research. Notwithstanding this need and previously reported growth, the expected expansion of Indian clinical research has not materialized. We reviewed the scientific literature, lay press reports, and ClinicalTrials.gov data for information and commentary on projections, progress, and impediments associated with clinical trials in India. We also propose targeted solutions to identified challenges. The Indian clinical trial sector grew at a compound annual growth rate (CAGR) of +20.3% between 2005 and 2010 and contracted at a CAGR of -14.6% between 2010 and 2013. Phase-1 trials grew at +43.5% CAGR from 2005 to 2013; phase-2 trials grew at +19.8% CAGR from 2005 to 2009 and contracted at -12.6% CAGR from 2009 to 2013; and phase-3 trials grew at +13.0% CAGR from 2005 to 2010 and contracted at -28.8% CAGR from 2010 to 2013. This was associated with a slowing of the regulatory approval process, increased media coverage and activist engagement, and accelerated development of regulatory guidelines and recuperative initiatives. We propose the following as potential targets for restorative interventions: regulatory overhaul (leadership and enforcement of regulations, resolution of ambiguity in regulations, staffing, training, guidelines, and ethical principles [e.g., compensation]); education and training of research professionals, clinicians, and regulators; and public awareness and empowerment. After a peak in 2009-2010, the clinical research sector in India appears to be experiencing a contraction. There are indications of challenges in regulatory enforcement of guidelines; training of clinical research professionals; and awareness, participation, partnership, and the general image amongst the non-professional media and public. Preventative and corrective principles and interventions are outlined with the goal of realizing the clinical research potential in India.
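
Illustrative note: the growth and contraction figures quoted above are compound annual growth rates. The short sketch below recomputes a CAGR from hypothetical start and end trial counts; the counts themselves are invented, since the abstract reports only the resulting percentages.

    # CAGR (compound annual growth rate) as used in the figures above:
    #   CAGR = (end_value / start_value) ** (1 / years) - 1
    # The trial counts below are invented; the abstract quotes only the rates.

    def cagr(start_value, end_value, years):
        return (end_value / start_value) ** (1.0 / years) - 1.0

    # e.g. 100 registered trials in 2005 rising to 251 in 2010:
    print(f"{cagr(100, 251, 2010 - 2005):+.1%}")   # roughly +20% per year

    # a contraction works the same way, e.g. 251 trials in 2010 down to 156 in 2013:
    print(f"{cagr(251, 156, 2013 - 2010):+.1%}")   # roughly -15% per year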

Relevance: 20.00%

Abstract:

To maintain a strict balance between demand and supply in the US power systems, the Independent System Operators (ISOs) schedule power plants and determine electricity prices using a market clearing model. For each time period and power plant, this model determines the startup and shutdown times, the amount of power produced, and the provision of spinning and non-spinning generation reserves. Such a deterministic optimization model takes as input the characteristics of all generating units, such as installed generation capacity, ramp rates, minimum up- and down-time requirements, and marginal production costs, as well as the forecast of intermittent energy such as wind and solar, along with the minimum reserve requirement of the whole system. This reserve requirement is determined based on the likelihood of outages on the supply side and on the level of forecast errors in demand and intermittent generation. With increased installed capacity of intermittent renewable energy, determining the appropriate level of reserve requirements has become harder. Stochastic market clearing models have been proposed as an alternative to deterministic market clearing models. Rather than using a fixed reserve target as an input, stochastic market clearing models take different wind power scenarios into consideration and determine the reserve schedule as an output. Using a scaled version of the power generation system of PJM, a regional transmission organization (RTO) that coordinates the movement of wholesale electricity in all or parts of 13 states and the District of Columbia, together with wind scenarios generated from BPA (Bonneville Power Administration) data, this paper compares the performance of stochastic and deterministic models in market clearing. The two models are compared in their ability to contribute to the affordability, reliability and sustainability of the electricity system, measured in terms of total operational costs, load shedding and air emissions. The process of building and testing the models indicates that a fair comparison is difficult to obtain, owing to the multi-dimensional performance metrics considered here and the difficulty of setting the models' parameters in a way that does not advantage or disadvantage either modeling framework. Along these lines, this study explores the effect that model assumptions such as reserve requirements, value of lost load (VOLL) and wind spillage costs have on the comparison of the performance of stochastic versus deterministic market clearing models.
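
Illustrative note: the difference between a deterministic clearing rule built around a fixed reserve target and a stochastic rule that weighs explicit wind scenarios can be shown on a toy two-generator example. The sketch below is not the paper's PJM/BPA model; demand, costs, scenarios and the value of lost load are all invented.

    # Toy two-stage example (not the paper's PJM/BPA model) contrasting a
    # deterministic clearing rule with a fixed reserve target against a
    # stochastic rule that weighs explicit wind scenarios. Numbers invented.

    DEMAND = 100.0                 # MW
    VOLL = 3000.0                  # $/MWh, value of lost load
    WIND_SCENARIOS = [(0.0, 0.3), (30.0, 0.4), (60.0, 0.3)]   # (MW, probability)

    BASELOAD = (70.0, 20.0, 0.0)   # (capacity MW, marginal cost $/MWh, commitment cost $)
    PEAKER = (50.0, 80.0, 500.0)   # must be committed before the wind is known

    def dispatch_cost(commit_peaker, wind):
        """Cheapest dispatch for one wind outcome, given the commitment."""
        residual, cost = max(DEMAND - wind, 0.0), 0.0
        for cap, mc, _ in ([BASELOAD, PEAKER] if commit_peaker else [BASELOAD]):
            used = min(cap, residual)
            cost += used * mc
            residual -= used
        return cost + residual * VOLL          # unserved energy priced at VOLL

    def expected_cost(commit_peaker):
        fixed = PEAKER[2] if commit_peaker else 0.0
        return fixed + sum(p * dispatch_cost(commit_peaker, w)
                           for w, p in WIND_SCENARIOS)

    # Deterministic clearing: expected wind plus a fixed reserve requirement.
    expected_wind = sum(w * p for w, p in WIND_SCENARIOS)
    for reserve in (0.0, 20.0):
        commit = DEMAND - expected_wind + reserve > BASELOAD[0]
        print(f"deterministic, reserve {reserve:4.0f} MW -> commit peaker: {commit}, "
              f"expected realised cost ${expected_cost(commit):,.0f}")

    # Stochastic clearing: pick the commitment with the lowest expected cost.
    best = min([False, True], key=expected_cost)
    print(f"stochastic                  -> commit peaker: {best}, "
          f"expected realised cost ${expected_cost(best):,.0f}")

    # With reserve = 0 the deterministic rule leaves the peaker off and pays
    # heavily for load shedding in the low-wind scenario; the stochastic rule
    # (and the deterministic rule with a 20 MW reserve) commits it. The outcome
    # hinges on the reserve parameter, which is what makes a fair comparison
    # between the two frameworks delicate.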

Relevance: 20.00%

Abstract:

Slowly compressed single crystals, bulk metallic glasses (BMGs), rocks, granular materials, and the earth all deform via intermittent slips or "quakes". We find that although these systems span 12 decades in length scale, they all show the same scaling behavior for their slip size distributions and other statistical properties. Remarkably, the size distributions follow the same power law multiplied by the same exponential cutoff. The cutoff grows with applied force for materials spanning length scales from nanometers to kilometers. The tuneability of the cutoff with stress reflects "tuned critical" behavior, rather than self-organized criticality (SOC), which would imply stress independence. A simple mean-field model for avalanches of slipping weak spots explains the agreement across scales. It predicts the observed slip-size distributions and the observed stress-dependent cutoff function. The results enable extrapolations from one scale to another, and from one force to another, across different materials and structures, from nanocrystals to earthquakes.
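
Illustrative note: the stated form of the slip-size distribution, a power law multiplied by an exponential cutoff that grows with applied stress, is easy to explore numerically. The exponent and the cutoff-versus-stress scaling in the sketch below are generic mean-field-style placeholders, not values fitted in the study.

    # Sketch of the stated slip-size distribution, P(s) ~ s**(-tau) * exp(-s/s_c),
    # with a cutoff s_c that grows as the applied stress approaches failure.
    # tau = 1.5 and the cutoff scaling are generic placeholders, not fitted values.
    import math

    TAU = 1.5

    def slip_size_weight(s, s_cutoff, tau=TAU):
        """Unnormalised power law with exponential cutoff."""
        return s ** (-tau) * math.exp(-s / s_cutoff)

    def mean_size(s_cutoff, s_max=10_000):
        """Crude numerical mean slip size under the normalised distribution."""
        weights = [slip_size_weight(s, s_cutoff) for s in range(1, s_max)]
        total = sum(weights)
        return sum(s * w for s, w in zip(range(1, s_max), weights)) / total

    # A mean-field-style cutoff that grows as stress f approaches the failure
    # stress f_c: s_c ~ 1 / (1 - f/f_c)**2  (placeholder form).
    f_c = 1.0
    for f in (0.5, 0.8, 0.95, 0.99):
        s_c = 1.0 / (1.0 - f / f_c) ** 2
        print(f"stress f = {f:4.2f}: cutoff s_c ~ {s_c:8.1f}, "
              f"mean slip size ~ {mean_size(s_c):8.2f}")

    # The largest slips (and the mean slip size) grow as the stress is raised,
    # while the power-law exponent stays fixed -- "tuned critical" behaviour,
    # as opposed to a stress-independent SOC cutoff.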