997 results for Single units


Relevance: 30.00%

Abstract:

Single gold particles may serve as room-temperature single-electron memory units because of their size-dependent electronic level spacing. Here, we present a proof-of-concept study using electrochemically controlled scanning probe experiments performed on tailor-made Au particles of narrow dispersity. In particular, we report the charge-transport characteristics of chemically synthesized hexane-1-thiol and 4-pyridylbenzene-1-thiol mixed-monolayer-protected Au144 clusters (MPCs), studied by differential pulse voltammetry (DPV) and electrochemical scanning tunneling spectroscopy (EC-STS). The pyridyl groups exposed by the Au-MPCs enable their immobilization on Pt(111) substrates. By varying the humidity during deposition, samples coated with stacks of compact Au-MPC monolayers or decorated with individual, laterally separated Au-MPCs are obtained. DPV experiments with stacked monolayers of Au144-MPCs and EC-STS experiments with laterally separated individual Au144-MPCs were performed in both aqueous and ionic liquid electrolytes. Lower capacitance values were observed for individual clusters than for cluster ensembles. This trend holds irrespective of the composition of the electrolyte surrounding the Au144-MPCs. However, the resolution of the energy level spacing of single clusters is strongly affected by the proximity of neighboring particles.
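
As a back-of-the-envelope illustration (not taken from the study) of why a lower cluster capacitance favors room-temperature single-electron behavior, the Coulomb charging contribution to the addition energy scales inversely with capacitance; assuming, for example, a capacitance of about 1 aF:

E_{\text{add}} \approx \frac{e^{2}}{C} = \frac{(1.602\times10^{-19}\,\text{C})^{2}}{1\times10^{-18}\,\text{F}} \approx 2.6\times10^{-20}\,\text{J} \approx 160\,\text{meV} \gg k_{\mathrm{B}}T \approx 26\,\text{meV at } 300\,\text{K},

so charging effects remain resolvable against thermal broadening; the actual addition energy of an Au144 MPC also includes the discrete level spacing mentioned in the abstract.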

Relevance: 30.00%

Abstract:

The antimicrobial activity of taurolidine was compared with that of minocycline against microbial species associated with periodontitis (four single strains and a 12-species mixture). Minimal inhibitory concentrations (MICs), minimal bactericidal concentrations (MBCs), and killing kinetics were determined, as well as activities on established and forming single-species biofilms and a 12-species biofilm. The MICs of taurolidine against single species were always 0.31 mg/ml, and the MBCs were 0.64 mg/ml. The mixed microbiota used was less sensitive to taurolidine; both the MIC and the MBC were 2.5 mg/ml. The strains and the mixture were completely killed by 2.5 mg/ml taurolidine, whereas 256 μg/ml minocycline reduced the bacterial counts of the mixture by 5 log10 colony-forming units (cfu). Coating the surface with 10 mg/ml taurolidine or 256 μg/ml minocycline completely prevented biofilm formation by Porphyromonas gingivalis ATCC 33277 but not by Aggregatibacter actinomycetemcomitans Y4 or the mixture. On 4.5-d-old biofilms, taurolidine acted in a concentration-dependent manner, with reductions of 5 log10 cfu (P. gingivalis ATCC 33277) and 7 log10 cfu (A. actinomycetemcomitans Y4) when 10 mg/ml was applied. Minocycline decreased the cfu counts by 1-2 log10 cfu independent of the concentration used. The reduction of the cfu counts in the 4.5-d-old multi-species biofilms was about 3 log10 cfu after application of any minocycline concentration and after using 10 mg/ml taurolidine. Taurolidine is active against species associated with periodontitis, even within biofilms. Nevertheless, complete elimination of complex biofilms by taurolidine appears impossible, underlining the importance of mechanical removal of biofilms prior to the application of taurolidine.
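
As a quick aside on the log10 reduction figures quoted above: a reduction of N log10 cfu corresponds to a 10^N-fold drop in viable counts. A minimal sketch with hypothetical counts (not data from this study):

import math

def log10_reduction(cfu_before: float, cfu_after: float) -> float:
    """Reduction in viable counts expressed in log10 units."""
    return math.log10(cfu_before) - math.log10(cfu_after)

# Hypothetical example: 1e9 cfu/ml before treatment, 1e4 cfu/ml after
# gives a 5.0 log10 reduction, i.e. a 100,000-fold decrease.
print(log10_reduction(1e9, 1e4))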

Relevance: 30.00%

Abstract:

BACKGROUND Air enema under fluoroscopy is a well-accepted procedure for the treatment of childhood intussusception. However, the radiation doses reported for pneumatic reduction with conventional fluoroscopy units in past decades have been high. OBJECTIVE To compare current radiation doses at our institution with past doses reported by others for fluoroscopy-guided pneumatic reduction of ileo-colic intussusception in children. MATERIALS AND METHODS Since 2007, radiologists and residents in our department who perform reduction of intussusceptions have received radiation-risk training. We retrospectively analyzed the data of 45 children (5 months-8 years) who underwent a total of 48 pneumatic reductions of ileo-colic intussusception between 2008 and 2012. We analyzed screening time and dose area product (DAP) and compared these data to those reported up to and including the year 2000. RESULTS Our mean screening time measured by the DAP meter was 53.8 s (range 1-320 s, median 33.0 s). The mean DAP was 11.4 cGy·cm² (range 1-145 cGy·cm², median 5.45 cGy·cm²). There was one bowel perforation, in a 1-year-old boy, requiring surgical revision. Only three studies in the literature presented radiation exposure results for children who received pneumatic or hydrostatic reduction of intussusception under fluoroscopy. Screening times and dose area products in those studies, which were published in the 1990s and in the year 2000, were substantially higher than those in our sample. CONCLUSION Low-frequency pulsed fluoroscopy and other dose-saving measures, as well as the radiation-risk training, might have helped to improve the quality of the procedure in terms of radiation exposure.

Relevance: 30.00%

Abstract:

A number of medical and social developments have had an impact on neonatal mortality over the past ten to 15 years in the United States. The purpose of this study was to examine one of these developments, Newborn Intensive Care Units (NICUs), and evaluate their impact on neonatal mortality in Houston, Texas. This study was unique in that it used as its database matched birth and infant death records from two periods: 1958-1960 (before NICUs) and 1974-1976 (after NICUs). The neonatal mortality of single, live infants born to Houston resident mothers was compared for two groups: infants born in hospitals that developed NICUs and infants born in all other Houston hospitals. Neonatal mortality comparisons were made using the following birth-characteristic variables: birthweight, gestation, race, sex, maternal age, legitimacy, birth order and prenatal care. The results of the study showed that hospitals that developed NICUs had a higher percentage of their population with high-risk characteristics. In spite of this, they had lower neonatal mortality rates in two categories: (1) white 3.5-5.5 pound birthweight infants, and (2) low birthweight infants whose mothers received no prenatal care. Black 3.5-5.5 pound birthweight infants did equally well in either hospital group. While the differences between the two hospital groups for these categories were not statistically significant at the p < 0.05 level, data from the 1958-1960 period substantiate that a marked change occurred in the 3.5-5.5 pound birthweight category for infants born in hospitals that developed NICUs. Early data were not available for prenatal care. These findings support the conclusion that, in Houston, NICUs had some impact on neonatal mortality among moderately underweight infants.

Relevance: 30.00%

Abstract:

IPOD Leg 49 recovered basalts from 9 holes at 7 sites along 3 transects across the Mid-Atlantic Ridge: 63°N (Reykjanes), 45°N and 36°N (FAMOUS area). This has provided further information on the nature of mantle heterogeneity in the North Atlantic by enabling studies to be made of the variation of basalt composition with depth and with time near critical areas (Iceland and the Azores) where deep mantle plumes are thought to exist. Over 150 samples have been analysed for up to 40 major and trace elements and the results used to place constraints on the petrogenesis of the erupted basalts and hence on the geochemical nature of their source regions. It is apparent that few of the recovered basalts have the geochemical characteristics of typical "depleted" mid-ocean ridge basalts (MORB). An unusually wide range of basalt compositions may be erupted at a single site: the range of rare earth patterns within the short section cored at Site 413, for instance, encompasses the total variation of REE patterns previously reported from the FAMOUS area. Nevertheless, it is possible to account for most of the compositional variation at a single site by partial melting processes (including dynamic melting) and fractional crystallization. Partial melting mechanisms seem to be the dominant processes relating basalt compositions, particularly at 36°N and 45°N, suggesting that long-lived sub-axial magma chambers may not be a consistent feature of the slow-spreading Mid-Atlantic Ridge. Comparisons of basalts erupted at the same ridge segment for periods of the order of 35 m.y. (now lying along the same mantle flow line) do show some significant inter-site differences in Rb/Sr, Ce/Yb, 87Sr/86Sr, etc., which cannot be accounted for by fractionation mechanisms and which must reflect heterogeneities in the mantle source. However, when hygromagmatophile (HYG) trace element levels and ratios are considered, it is the constancy or consistency of these HYG ratios which is the more remarkable, implying that the mantle source feeding a particular ridge segment was uniform with respect to these elements for periods of the order of 35 m.y. and probably since the opening of the Atlantic. Yet these HYG element ratios at 63°N are very different from those at 45°N and 36°N and significantly different from the values at 22°N and in "MORB". The observed variations are difficult to reconcile with current concepts of mantle plumes and binary mixing models. The mantle is certainly heterogeneous, but there is not simply an "enriched" and a "depleted" source, but rather a range of sources heterogeneous on different scales for different elements - to an extent and volume depending on previous depletion/enrichment events. HYG element ratios offer the best method of defining compositionally different mantle segments since they are little modified by the fractionation processes associated with basalt generation.

Relevance: 30.00%

Abstract:

Metasediments in the three early Palaeozoic Ross orogenic terranes in northern Victoria Land and Oates Land (Antarctica) are geochemically classified as immature litharenites to wackes and moderately mature shales. Highly mature lithotypes with Chemical Index of Weathering values of >=95 are typically absent. Geochemical and Rb-Sr and Sm-Nd isotope results indicate that the turbiditic metasediments of the Cambro-Ordovician Robertson Bay Group in the eastern Robertson Bay Terrane represent a very homogeneous series lacking significant compositional variations. Major variations are only found in chemical parameters which reflect differences in the degree of chemical weathering of their protoliths and in mechanical sorting of the detritus. Geochemical data, 87Sr/86Sr (t = 490 Ma) ratios of 0.7120-0.7174, εNd (t = 490 Ma) values of -7.6 to -10.3 and single-stage Nd model ages of 1.7-1.9 Ga are indicative of an origin from a chemically evolved crustal source of, on average, late Palaeoproterozoic formation age. There is no evidence for significant sedimentary infill from primitive "ophiolitic" sources. Metasediments of the Middle Cambrian Molar Formation (Bowers Terrane) are compositionally strongly heterogeneous. Their major and trace element data and Sm-Nd isotope data (εNd (t = 500 Ma) values of -14.3 to -1.2 and single-stage Nd model ages of 1.7-2.1 Ga) can be explained by mixing of sedimentary input from an evolved crustal source of at least early Palaeoproterozoic formation age and from a primitive basaltic source. The chemical heterogeneity of metasediments from the Wilson Terrane is largely inherited from compositional variations of their precursor rocks, as indicated by the Ni vs TiO2 diagram. Single-stage Nd model ages of 1.6-2.2 Ga for samples from more western, inboard areas of the Wilson Terrane (εNd (t = 510 Ma) values of -7.0 to -14.3) indicate a relatively high proportion of material derived from a crustal source with, on average, early Palaeoproterozoic formation age. Metasedimentary series in an eastern, more outboard position (εNd (t = 510 Ma) values of -5.4 to -10.0; single-stage Nd model ages of 1.4-1.9 Ga), on the contrary, document a stronger influence of a more primitive source with younger formation ages. The chemical and isotopic characteristics of metasediments from the Bowers and Wilson terranes can be explained by variable contributions from two contrasting sources: a cratonic continental crust similar to the Antarctic Shield exposed in George V Land and Terre Adélie some hundred kilometers west of the study area, and a primitive basaltic source probably represented by the Cambrian island arc of the Bowers Terrane. While the data for metasediments of the Robertson Bay Terrane are also compatible with an origin from an Antarctic-Shield-type source, there is no direct evidence from their geochemistry or isotope geochemistry for an island-arc component in these series.
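
For readers unfamiliar with the notation, εNd(t) and the single-stage Nd model age are conventionally defined as follows (standard formulation, not quoted from this paper; CHUR = chondritic uniform reservoir, λ147 is the 147Sm decay constant):

\varepsilon_{\mathrm{Nd}}(t) = \left[\frac{\left(^{143}\mathrm{Nd}/^{144}\mathrm{Nd}\right)_{\mathrm{sample}}(t)}{\left(^{143}\mathrm{Nd}/^{144}\mathrm{Nd}\right)_{\mathrm{CHUR}}(t)} - 1\right]\times 10^{4},
\qquad
T_{\mathrm{model}} = \frac{1}{\lambda_{147}}\,\ln\!\left[1 + \frac{\left(^{143}\mathrm{Nd}/^{144}\mathrm{Nd}\right)_{\mathrm{sample}} - \left(^{143}\mathrm{Nd}/^{144}\mathrm{Nd}\right)_{\mathrm{reservoir}}}{\left(^{147}\mathrm{Sm}/^{144}\mathrm{Nd}\right)_{\mathrm{sample}} - \left(^{147}\mathrm{Sm}/^{144}\mathrm{Nd}\right)_{\mathrm{reservoir}}}\right],

where present-day ratios are used in the model-age expression and the reference reservoir is CHUR or depleted mantle, depending on the convention adopted.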

Relevance: 30.00%

Abstract:

The high integration density of current nanometer technologies allows the implementation of complex floating-point applications in a single FPGA. In this work, the intrinsic complexity of floating-point operators is addressed for configurable devices, making design decisions that provide the most suitable trade-offs between performance and standard compliance. A set of floating-point libraries composed of adder/subtracter, multiplier, divider, square root, exponential, logarithm and power function is presented. Each library has been designed taking into account the special characteristics of current FPGAs, and for this purpose we have adapted the IEEE floating-point standard (software-oriented) to a custom FPGA-oriented format. Extended experimental results validate the design decisions made and prove the usefulness of reducing the format complexity.
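
The abstract does not specify the custom FPGA-oriented format, so the following is only an illustrative sketch of the general idea of a reduced-complexity floating-point encoding with configurable field widths; the widths, and the omission of subnormals, NaN/infinity handling and exponent overflow checks, are assumptions, not the paper's format:

import math
from dataclasses import dataclass

@dataclass
class CustomFloatFormat:
    # Field widths are assumptions for illustration, not the paper's format.
    exp_bits: int = 6
    man_bits: int = 17

    @property
    def bias(self) -> int:
        return (1 << (self.exp_bits - 1)) - 1

    def encode(self, value: float) -> int:
        """Pack a Python float into a (sign | exponent | mantissa) bit layout."""
        sign = 1 if value < 0 else 0
        value = abs(value)
        if value == 0.0:
            return sign << (self.exp_bits + self.man_bits)
        mantissa, exponent = math.frexp(value)         # value = mantissa * 2**exponent, 0.5 <= mantissa < 1
        exp_field = exponent - 1 + self.bias           # re-bias for a 1.xxx normalized mantissa
        frac = int((mantissa * 2 - 1) * (1 << self.man_bits))   # drop the implicit leading 1
        return (sign << (self.exp_bits + self.man_bits)) | (exp_field << self.man_bits) | frac

    def decode(self, bits: int) -> float:
        frac = bits & ((1 << self.man_bits) - 1)
        exp_field = (bits >> self.man_bits) & ((1 << self.exp_bits) - 1)
        sign = bits >> (self.exp_bits + self.man_bits)
        if exp_field == 0 and frac == 0:
            return -0.0 if sign else 0.0
        value = (1 + frac / (1 << self.man_bits)) * 2.0 ** (exp_field - self.bias)
        return -value if sign else value

fmt = CustomFloatFormat()
print(fmt.decode(fmt.encode(3.14159)))   # ~3.14159, within the reduced mantissa precision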

Relevance: 30.00%

Abstract:

We studied the coastal zone of the Tavoliere di Puglia plain (Puglia region, southern Italy) with the aim of recognizing the main unconformities and, therefore, the unconformity-bounded stratigraphic units (UBSUs; Salvador 1987, 1994) forming its Quaternary sedimentary fill. Recognizing the unconformities that bound the UBSUs is particularly problematic in an alluvial plain. So far, the recognition of UBSUs in buried successions has been made mostly by using seismic profiles. In our case, their unavailability prompted us to address the problem by developing a methodological protocol consisting of the following steps: I) geological survey in the field; II) drafting of a preliminary geological setting based on the field-survey results; III) dating of 102 samples coming from a large number of boreholes and some outcropping sections by means of the amino acid racemization (AAR) method applied to ostracod shells and by 14C dating, followed by filtering of the ages and selection of the valid ones; IV) correction of the preliminary geological setting in the light of the numerical ages: definition of the final geological setting with UBSUs and identification of a "hypothetical" or "attributed time range" (HTR or ATR) for each UBSU, the former very wide and subject to subsequent modification, the latter definitive; V) cross-checking between the numerical ages and/or other characteristics of the sedimentary bodies and/or the sea-level curves (with their effects on the sedimentary processes) in order to narrow the hypothetical time ranges into attributed time ranges. The successful application of AAR geochronology to ostracod shells relies on the fact that the ability of ostracods to colonize almost all environments constitutes a tool for correlation and also allows the inclusion in the same unit of coeval sediments that differ lithologically and paleoenvironmentally. The treatment of the numerical ages obtained using the AAR method required special attention. The first filtering step was made by the laboratory (rejection criteria a and b). The second filtering step was made by testing the remaining ages in the field. Among these, we never compared an age with a single preceding and/or following age; instead, we identified homogeneous groups of numerical ages consistent with their reciprocal stratigraphic position. This operation led to the rejection of further numerical ages that deviated erratically from a larger, homogeneous age population that fits well with its stratigraphic position (rejection criterion c). After all of the filtering steps, the remaining valid ages were used, together with lithological and paleoenvironmental criteria, for the subdivision of the sedimentary sequences into UBSUs. The numerical ages allowed us, in the first instance, to recognize all of the age gaps between two consecutive samples. Next, we identified the level, within the sedimentary thickness between these two samples, that may represent the most suitable UBSU boundary based on its lithology and/or paleoenvironment.
The recognized units are: I) Coppa Nevigata sands (NEA), HTR: MIS 20–14, ATR: MIS 17–16; II) Argille subappennine (ASP), HTR: MIS 15–11, ATR: MIS 15–13; III) Coppa Nevigata synthem (NVI), HTR: MIS 13–8, ATR: MIS 12–11; IV) Sabbie di Torre Quarto (STQ), HTR: MIS 13–9.1, ATR: MIS 11; V) Amendola subsynthem (MLM1), HTR: MIS 12–10, ATR: MIS 11; VI) Undifferentiated continental unit (UCI), HTR: MIS 11–6.2, ATR: MIS 9.3–7.1; VII) Foggia synthem (TGF), ATR: MIS 6; VIII) Masseria Finamondo synthem (TPF), ATR: Upper Pleistocene; IX) Carapelle and Cervaro streams synthem (RPL), subdivided into: IXa) Incoronata subsynthem (RPL1), HTR: MIS 6–3; ATR: MIS 5–3; IXb) Marane La Pidocchiosa–Castello subsynthem (RPL3), ATR: Holocene; X) Masseria Inacquata synthem (NAQ), ATR: Holocene. The possibility of recognizing and dating Quaternary units in an alluvial plain to the scale of a marine isotope stage constitutes a clear step forward compared with similar studies of other alluvial-plain areas, where Quaternary units were dated almost exclusively using their stratigraphic position. As a result, they were generically associated with a geological sub-epoch. Our method, instead, allowed greater detail in the timing of the sedimentary processes: for example, MIS 11 and MIS 5.5 deposits have been recognized and characterized for the first time in the study area, highlighting their importance as phases of sedimentation.
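
As a purely illustrative sketch of rejection criterion c (not the authors' exact procedure), one generic way to flag numerical ages that deviate erratically from a larger, homogeneous age population is a median/MAD outlier test; the ages and the cut-off below are invented:

from statistics import median

def filter_ages(ages_ka, k=3.0):
    """Split a list of ages (ka) into (kept, rejected) with a median/MAD rule."""
    med = median(ages_ka)
    mad = median(abs(a - med) for a in ages_ka)
    kept = [a for a in ages_ka if abs(a - med) <= k * mad]
    rejected = [a for a in ages_ka if abs(a - med) > k * mad]
    return kept, rejected

# Hypothetical ages from one unit of a borehole: the 610 ka value is rejected.
print(filter_ages([125.0, 180.0, 230.0, 290.0, 610.0]))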

Relevance: 30.00%

Abstract:

The evolution of smartphones, all equipped with digital cameras, is driving a growing demand for ever more complex applications that need to rely on real-time computer vision algorithms. However, video signals are only increasing in size, whereas the performance of single-core processors has somewhat stagnated in recent years. Consequently, new computer vision algorithms will need to be parallel to run on multiple processors and be computationally scalable. One of the most promising classes of processors nowadays can be found in graphics processing units (GPUs). These are devices offering a high degree of parallelism, excellent numerical performance and increasing versatility, which makes them attractive for scientific computing. In this thesis, we explore two computer vision applications whose high computational complexity precludes them from running in real time on traditional uniprocessors. However, we show that by parallelizing subtasks and implementing them on a GPU, both applications attain their goals of running at interactive frame rates. In addition, we propose a technique for the fast evaluation of arbitrarily complex functions, specially designed for GPU implementation. First, we explore the application of depth-image-based rendering techniques to the unusual configuration of two convergent, wide-baseline cameras, in contrast to the narrow-baseline, parallel cameras usually used in 3D TV. Using a backward mapping approach with a depth-inpainting scheme based on median filters, we show that these techniques are adequate for free-viewpoint video applications. In addition, we show that referring depth information to a global reference system is ill-advised and should be avoided. Then, we propose a background subtraction system based on kernel density estimation techniques. These techniques are well suited to modelling complex scenes featuring multimodal backgrounds, but have not been widely used due to their large computational and memory complexity. The proposed system, implemented in real time on a GPU, features novel proposals for dynamic kernel bandwidth estimation for the background model, selective update of the background model, update of the position of reference samples of the foreground model using a multi-region particle filter, and automatic selection of regions of interest to reduce computational cost. The results, evaluated on several databases and compared to other state-of-the-art algorithms, demonstrate the high quality and versatility of our proposal. Finally, we propose a general method for the approximation of arbitrarily complex functions using continuous piecewise linear functions, specially formulated for GPU implementation by leveraging the texture filtering units, which are normally unused for numerical computation. Our proposal includes a rigorous mathematical analysis of the approximation error as a function of the number of samples, as well as a method to obtain a quasi-optimal partition of the domain of the function that minimizes the approximation error.
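
The texture-filtering approach mentioned at the end of the abstract amounts to storing uniform samples of the target function in a texture and letting the hardware's linear interpolation evaluate a continuous piecewise linear approximant. A CPU-side sketch of the same idea (the function, interval and sample count are arbitrary choices, not the thesis' benchmarks):

import math

def build_lut(f, a, b, n):
    """Sample f at n uniformly spaced points on [a, b] (the 'texture')."""
    xs = [a + (b - a) * i / (n - 1) for i in range(n)]
    return xs, [f(x) for x in xs]

def interp(xs, ys, x):
    """Continuous piecewise linear evaluation, mimicking a 1-D linear texture fetch."""
    if x <= xs[0]:
        return ys[0]
    if x >= xs[-1]:
        return ys[-1]
    step = xs[1] - xs[0]
    i = int((x - xs[0]) / step)
    t = (x - xs[i]) / step
    return (1 - t) * ys[i] + t * ys[i + 1]

xs, ys = build_lut(math.exp, 0.0, 4.0, 64)          # 64-sample uniform LUT
err = max(abs(interp(xs, ys, 0.001 * k) - math.exp(0.001 * k)) for k in range(4000))
print(f"max abs error with 64 uniform samples: {err:.4e}")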

Relevance: 30.00%

Abstract:

Agrobacterium tumefaciens transfers transferred DNA (T-DNA), a single-stranded segment of its tumor-inducing (Ti) plasmid, to the plant cell nucleus. The Ti-plasmid-encoded virulence E2 (VirE2) protein expressed in the bacterium has single-stranded DNA (ssDNA)-binding properties and has been reported to act in the plant cell. This protein is thought to exert its influence on transfer efficiency by coating and accompanying the single-stranded T-DNA (ss-T-DNA) to the plant cell genome. Here, we analyze different putative roles of the VirE2 protein in the plant cell. In the absence of VirE2 protein, mainly truncated versions of the T-DNA are integrated. We infer that VirE2 protects the ss-T-DNA against nucleolytic attack during the transfer process and that it interacts with the ss-T-DNA on its way to the plant cell nucleus. Furthermore, the VirE2 protein was found not to be involved in directing the ss-T-DNA to the plant cell nucleus in a manner dependent on a nuclear localization signal (NLS), a function that is carried by the NLS of VirD2. In addition, the efficiency of T-DNA integration into the plant genome was found to be VirE2 independent. We conclude that the VirE2 protein of A. tumefaciens is required to preserve the integrity of the T-DNA but does not contribute to the efficiency of the integration step per se.

Relevance: 30.00%

Abstract:

In this work, we present a systematic method for the optimal development of bioprocesses that relies on the combined use of simulation packages and optimization tools. One of the main advantages of our method is that it allows for the simultaneous optimization of all the individual components of a bioprocess, including the main upstream and downstream units. The design task is mathematically formulated as a mixed-integer dynamic optimization (MIDO) problem, which is solved by a decomposition method that iterates between primal and master sub-problems. The primal dynamic optimization problem optimizes the operating conditions, bioreactor kinetics and equipment sizes, whereas the master level entails the solution of a tailored mixed-integer linear programming (MILP) model that decides on the values of the integer variables (i.e., the number of units in parallel and the topological decisions). The dynamic optimization primal sub-problems are solved via a sequential approach that integrates the process simulator SuperPro Designer® with an external NLP solver implemented in Matlab®. The capabilities of the proposed methodology are illustrated through its application to a typical fermentation process and to the production of the amino acid L-lysine.
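
The decomposition described above can be pictured as an outer loop that alternates a primal dynamic optimization (continuous decisions for fixed integers) with a master MILP that proposes new integer decisions and a lower bound. The sketch below is schematic only: solve_primal and solve_master_milp are placeholders standing in for the SuperPro Designer/Matlab NLP stage and the tailored MILP, respectively.

# Schematic sketch of the MIDO decomposition loop described in the abstract.
def solve_mido(initial_integers, solve_primal, solve_master_milp,
               max_iter=20, tol=1e-3):
    y = initial_integers                 # e.g. number of parallel units, topology choices
    upper_bound = float("inf")           # best feasible (primal) objective found so far
    lower_bound = -float("inf")          # bound provided by the master MILP
    best = None
    cuts = []                            # information passed from primal to master
    for _ in range(max_iter):
        # Primal: optimize operating conditions, kinetics and sizes for fixed y.
        objective, continuous_solution, cut = solve_primal(y)
        if objective < upper_bound:
            upper_bound, best = objective, (y, continuous_solution)
        cuts.append(cut)
        # Master: choose new integer decisions and update the lower bound.
        y, lower_bound = solve_master_milp(cuts)
        if upper_bound - lower_bound <= tol * abs(upper_bound):
            break
    return best, (lower_bound, upper_bound)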

Relevance: 30.00%

Abstract:

BACKGROUND With increasing demand for umbilical cord blood units (CBUs) with total nucleated cell (TNC) counts of more than 150 × 10^7, preshipping assessment is mandatory. Umbilical cord blood processing requires aseptic techniques and laboratories with specific air quality and cleanliness. Our aim was to establish a fast and efficient method for determining TNC counts at the obstetric ward without exposing the CBU to the environment. STUDY DESIGN AND METHODS Data from a total of 151 cord blood donations at a single procurement site were included in this prospective study. We measured TNC counts in cord blood aliquots taken from the umbilical cord (TNC_Cord), from the placenta (TNC_Plac), and from a tubing segment of the sterile collection system (TNC_TS). TNC counts were compared to reference TNC counts in the CBU, which were ascertained at the cord blood bank (TNC_CBU). RESULTS TNC_TS counts (173 ± 33 × 10^7 cells; calculated for 1 unit) correlated fully with the TNC_CBU reference counts (166 ± 33 × 10^7 cells, Pearson's r = 0.97, p < 0.0001). In contrast, TNC_Cord and TNC_Plac counts were more disparate from the reference (r = 0.92 and r = 0.87, respectively). CONCLUSIONS A novel method of measuring TNC counts in tubing segments from the sterile cord blood collection system allows rapid and correct identification of CBUs with high cell numbers at the obstetric ward without exposing cells to the environment. This approach may contribute to cost efficiency, as only CBUs with satisfactory TNC counts need to be shipped to the cord blood bank.
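
A minimal sketch of the agreement analysis reported above (tubing-segment counts versus the cord blood bank reference) using Pearson's correlation; the TNC values below are invented, not the study data:

from scipy.stats import pearsonr

tnc_tubing_segment = [150e7, 175e7, 190e7, 160e7, 210e7]   # hypothetical TNC_TS counts
tnc_reference      = [145e7, 170e7, 182e7, 158e7, 200e7]   # hypothetical TNC_CBU counts

r, p_value = pearsonr(tnc_tubing_segment, tnc_reference)
print(f"Pearson r = {r:.3f}, p = {p_value:.4f}")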

Relevance: 30.00%

Abstract:

The aim of this paper is to identify benchmark cost-efficient General Practitioner (GP) units at delivering health care in the Geriatric and General Medicine (GMG) specialty and to estimate potential cost savings. The use of a single medical specialty makes it possible to reflect more accurately the medical condition of the List population of the Practice, so as to contextualize its expenditure on patient care. We use Data Envelopment Analysis (DEA) to estimate the potential for cost savings at GP units and to decompose these savings into those attributable to reducing resource use, those attributable to altering the mix of resources used, and those attributable to securing better resource 'prices'. The results reveal a considerable potential for savings of varying composition across GP units.
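
Data Envelopment Analysis scores each unit with a small linear program. Below is a minimal input-oriented, constant-returns-to-scale sketch using scipy's linprog; it is not the exact model of the paper (which additionally decomposes the savings), and the GP-unit input/output data are invented:

import numpy as np
from scipy.optimize import linprog

def dea_efficiency(X, Y, o):
    """Efficiency score of unit o. X: (m inputs x n units), Y: (s outputs x n units)."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]                       # minimize theta
    A_in  = np.hstack([-X[:, [o]], X])                # sum_j lambda_j * x_ij <= theta * x_io
    A_out = np.hstack([np.zeros((s, 1)), -Y])         # sum_j lambda_j * y_rj >= y_ro
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(m), -Y[:, o]]
    bounds = [(None, None)] + [(0, None)] * n         # theta free, lambdas >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[0]

# Hypothetical data: 2 inputs (cost, staff) and 1 output (weighted patient episodes)
X = np.array([[100.0, 120.0, 90.0, 150.0],
              [ 10.0,  14.0,  8.0,  16.0]])
Y = np.array([[500.0, 520.0, 480.0, 600.0]])
for unit in range(X.shape[1]):
    print(f"GP unit {unit}: efficiency = {dea_efficiency(X, Y, unit):.3f}")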

Relevance: 30.00%

Abstract:

Due to the growing concerns associated with fossil fuels, emphasis has been placed on clean and sustainable energy generation. This has resulted in an increase in photovoltaic (PV) units being integrated into the utility system. The integration of PV units has raised some concerns for utility power systems, including the consequences of failing to detect islanding. Numerous methods for islanding detection have been introduced in the literature. They can be categorized into local methods and remote methods. The local methods are further divided into passive and active methods. Active methods generally have a smaller Non-Detection Zone (NDZ), but the injected disturbances slightly degrade the power quality and reliability of the power system. The Slip Mode Frequency Shift Islanding Detection Method (SMS IDM) is an active method that uses positive feedback for islanding detection. In this method, the phase angle of the converter is controlled as a sinusoidal function of the deviation of the Point of Common Coupling (PCC) voltage frequency from the nominal grid frequency. This method has a non-detection zone, which means it fails to detect islanding under specific local load conditions. If the SMS IDM employs a function other than the sinusoidal one for drifting the phase angle of the inverter, its non-detection zone can be smaller. In addition, the Advanced Slip Mode Frequency Shift Islanding Detection Method (Advanced SMS IDM), introduced in this thesis, eliminates the non-detection zone of the SMS IDM. In this method, the parameters of the SMS IDM change based on the local load impedance value. Moreover, the stability of the system is investigated by developing the dynamical equations of the system for two operation modes: grid-connected and islanded. It is mathematically proven that for some loading conditions the nominal frequency is an unstable point and the operating frequency slides to another stable point, while for other loading conditions the nominal frequency is the only stable point of the system when islanding occurs. Simulation and experimental results show the accuracy of the proposed methods in detecting islanding and verify the validity of the mathematical analysis.
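
For context, the conventional SMS drift law that the thesis modifies makes the inverter-current phase angle a sinusoidal function of the measured frequency deviation at the PCC. The sketch below uses that standard form with illustrative parameter values (theta_m, f_m and the 60 Hz nominal frequency are assumptions, not values from the thesis):

import math

F_GRID = 60.0                        # nominal grid frequency (Hz), assumed
F_MAX = 63.0                         # frequency of maximum phase shift (Hz), assumed
THETA_MAX = math.radians(10.0)       # maximum phase shift, assumed

def sms_phase_angle(f_pcc: float) -> float:
    """Phase angle (rad) applied to the inverter current for a measured PCC frequency."""
    return THETA_MAX * math.sin(math.pi / 2 * (f_pcc - F_GRID) / (F_MAX - F_GRID))

# While grid-connected, f_pcc is held at F_GRID and the phase shift stays at zero;
# after islanding, any frequency drift is reinforced by the positive feedback until
# an over/under-frequency relay threshold is reached.
for f in (59.5, 60.0, 60.5, 61.5, 63.0):
    print(f"{f:.1f} Hz -> {math.degrees(sms_phase_angle(f)):.2f} deg")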

Relevance: 30.00%

Abstract:

Organismal development, homeostasis, and pathology are rooted in inherently probabilistic events. From gene expression to cellular differentiation, rates and likelihoods shape the form and function of biology. Processes ranging from growth to cancer homeostasis to reprogramming of stem cells all require transitions between distinct phenotypic states, and these occur at defined rates. Therefore, measuring the fidelity and dynamics with which such transitions occur is central to understanding natural biological phenomena and is critical for therapeutic interventions.

While these processes may produce robust population-level behaviors, decisions are made by individual cells. In certain circumstances, these minuscule computing units effectively roll dice to determine their fate. And while the 'omics' era has provided vast amounts of data on what these populations are doing en masse, the behaviors of the underlying units of these processes get washed out in averages.

Therefore, in order to understand the behavior of a sample of cells, it is critical to reveal how its underlying components, or mixture of cells in distinct states, each contribute to the overall phenotype. As such, we must first define what states exist in the population, determine what controls the stability of these states, and measure in high dimensionality the dynamics with which these cells transition between states.

To address a specific example of this general problem, we investigate the heterogeneity and dynamics of mouse embryonic stem cells (mESCs). While a number of reports have identified particular genes in ES cells that switch between 'high' and 'low' metastable expression states in culture, it remains unclear how the levels of many of these regulators combine to form states in transcriptional space. Using a method called single-molecule mRNA fluorescence in situ hybridization (smFISH), we quantitatively measure and fit distributions of core pluripotency regulators in single cells, identifying a wide range of variability between genes, each nonetheless explained by a simple model of bursty transcription. From these data, we also observed that strongly bimodal genes appear to be co-expressed, effectively limiting the occupancy of transcriptional space to two primary states across the genes studied here. However, these states also appear punctuated by the conditional expression of the most highly variable genes, potentially defining smaller substates of pluripotency.
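
As a hedged illustration of what fitting a simple model of bursty transcription can look like in practice (not the authors' code): in the simplest two-state bursty model, steady-state mRNA counts follow a negative binomial distribution whose shape parameter maps to the burst frequency (bursts per mRNA lifetime) and whose success probability maps to the mean burst size. The counts below are simulated, purely for illustration:

import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(0)
counts = rng.negative_binomial(n=5, p=0.2, size=500)   # stand-in for smFISH counts of one gene

def neg_log_likelihood(params):
    log_n, logit_p = params
    n, p = np.exp(log_n), 1.0 / (1.0 + np.exp(-logit_p))
    return -np.sum(stats.nbinom.logpmf(counts, n, p))

res = optimize.minimize(neg_log_likelihood, x0=[0.0, 0.0], method="Nelder-Mead")
n_hat, p_hat = np.exp(res.x[0]), 1.0 / (1.0 + np.exp(-res.x[1]))
burst_frequency = n_hat                        # bursts per mRNA lifetime
mean_burst_size = (1.0 - p_hat) / p_hat
print(f"burst frequency ~ {burst_frequency:.2f}, mean burst size ~ {mean_burst_size:.2f}")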

Having defined the transcriptional states, we next asked what might control their stability or persistence. Surprisingly, we found that DNA methylation, a mark normally associated with irreversible developmental progression, was itself differentially regulated between these two primary states. Furthermore, both acute and chronic inhibition of DNA methyltransferase activity led to reduced heterogeneity among the population, suggesting that metastability can be modulated by this strong epigenetic mark.

Finally, because understanding the dynamics of state transitions is fundamental to a variety of biological problems, we sought to develop a high-throughput method for the identification of cellular trajectories without the need for cell-line engineering. We achieved this by combining cell-lineage information gathered from time-lapse microscopy with endpoint smFISH for measurements of final expression states. Applying a simple mathematical framework to these lineage-tree associated expression states enables the inference of dynamic transitions. We apply our novel approach in order to infer temporal sequences of events, quantitative switching rates, and network topology among a set of ESC states.

Taken together, we identify distinct expression states in ES cells, gain fundamental insight into how a strong epigenetic modifier enforces the stability of these states, and develop and apply a new method for the identification of cellular trajectories using scalable in situ readouts of cellular state.