933 results for THIRD GENERATION SYSTEMS
Abstract:
There have been numerous attempts to reveal the neurobiological basis of schizophrenia spectrum disorders. Results, however, remain as heterogeneous as the schizophrenia spectrum disorders themselves. One aim of this thesis was therefore to divide patients affected by these disorders into subgroups in order to homogenize the results of future studies. The first study suggests that psychopathological rating scales should focus on symptom clusters that may share a common neurophysiological background. The Bern Psychopathology Scale (BPS) presented here proposes that alterations in three well-known brain systems (motor, language, and affective) largely account for the communication failures observable at the behavioral level and also, as repeatedly hypothesized, for dysconnectivity within and between brain systems in schizophrenia spectrum disorders. In a second study, the external validity of the motor domain of the BPS was tested against the objective measure of 24-hour wrist actigraphy. The subjective, quantitative, and global ratings of the degree of motor disorder in this patient group correlated significantly with the recorded motor activity. This result is a first confirmation of the practicability of the motor domain of the BPS, although further validation against pathological brain alterations is needed. Finally, a third study (independent of the other two) investigated, for the first time with simultaneous EEG/fMRI, two cerebral Resting State Networks frequently altered in schizophrenia: the well-known default mode network and the left working memory network. Besides the changes in these fMRI-based networks, it is well documented that patients exhibit alterations in EEG spectra compared to healthy controls. However, only the multimodal approach made it possible to discover that patients with schizophrenia spectrum disorders have a slower driving frequency of the Resting State Networks than matched healthy controls. Such a dysfunctional coupling between neuronal frequency and functional brain organization could explain, in a uni- or multifactorial way (dysfunctional cross-frequency coupling, maturational effects, vigilance fluctuations, task-related suppression), how the typical psychotic symptoms might arise. In conclusion, the major contributions of this thesis are, on the one hand, the development of a psychopathology rating scale based on the assumption of dysfunctional brain networks and, on the other, new evidence from the simultaneous EEG/fMRI study for a dysfunctional triggering frequency of Resting State Networks in patients affected by a schizophrenia spectrum disorder.
Abstract:
Transapical transcatheter aortic valve implantation (TA-TAVI) is the recognized first-choice surgical TAVI access. Expansion of this well-established treatment modality, with subsequent broader patient inclusion, has accelerated the development of second-generation TA-TAVI devices. The Swiss ACURATE TA valve (Symetis) allows excellent anatomical positioning, resulting in a very low incidence of paravalvular leaks. The self-expanding stent features an hourglass shape that wedges into the native aortic valve annulus. A specially designed delivery system facilitates controlled release aided by tactile operator feedback. The ACURATE TA valve, made of three native porcine non-coronary leaflets, received CE approval in September 2011. Since then, it has become the third most frequently implanted TAVI device, with over 1200 implants in Europe and South America. Results from the Symetis ACURATE TA™ Valve Implantation ('SAVI') Registry showed a procedural success rate of 98.0% and a survival rate of 93.2% at 30 days. This presentation provides technical considerations and detailed procedural aspects of device implantation.
Abstract:
Acid rock drainage (ARD) is a problem of international relevance with substantial environmental and economic implications. Reactive transport modeling has proven to be a powerful tool for the process-based assessment of metal release and attenuation at ARD sites. Although a variety of models have been used to investigate ARD, a systematic model intercomparison has not been conducted to date. This contribution presents such a model intercomparison involving three synthetic benchmark problems designed to evaluate model results for the most relevant processes at ARD sites. The first benchmark (ARD-B1) focuses on the oxidation of sulfide minerals in an unsaturated tailings impoundment affected by the ingress of atmospheric oxygen. ARD-B2 extends the first problem to include pH buffering by primary mineral dissolution and secondary mineral precipitation. The third problem (ARD-B3) additionally considers the kinetic and pH-dependent dissolution of silicate minerals under low-pH conditions. The set of benchmarks was solved by four reactive transport codes, namely CrunchFlow, Flotran, HP1, and MIN3P. The comparison of results focused on spatial profiles of dissolved concentrations, pH and pE, pore-gas composition, and mineral assemblages. In addition, transient profiles for selected elements and cumulative mass loadings were considered in the intercomparison. Despite substantial differences in model formulations, very good agreement was obtained between the various codes. Residual deviations between the results are analyzed and discussed in terms of their implications for capturing system evolution and for long-term mass-loading predictions.
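For context, the acid-generating reaction underlying ARD-B1 is the oxidation of pyrite by atmospheric oxygen; the overall stoichiometry below is a standard textbook relation included only as orientation, not as part of the benchmark specification:

\[ \mathrm{FeS_2} + \tfrac{7}{2}\,\mathrm{O_2} + \mathrm{H_2O} \;\longrightarrow\; \mathrm{Fe^{2+}} + 2\,\mathrm{SO_4^{2-}} + 2\,\mathrm{H^+} \]

Each mole of oxidized pyrite thus releases two moles of acidity, which is what the pH-buffering reactions of ARD-B2 and the silicate dissolution of ARD-B3 subsequently act upon.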
Abstract:
The enzymatic co-polymerization of modified nucleoside triphosphates (dN*TPs and N*TPs) is a versatile method for expanding and exploring chemical space in SELEX and related combinatorial methods of in vitro selection. This strategy can be exploited to generate aptamers with improved or hitherto unknown properties. In this review, we discuss the nature of the functionalities appended to nucleoside triphosphates and their impact on selection experiments. The properties of the resulting modified aptamers are described, particularly those applied in the fields of biomolecular diagnostics, therapeutics, and the expansion of genetic systems (XNAs).
Abstract:
OBJECTIVES The photoinitiator diphenyl-(2,4,6-trimethylbenzoyl)phosphine oxide (TPO) is more reactive than a camphorquinone/amine (CQ) system, and TPO-based adhesives achieve a higher degree of conversion (DC) with fewer leached monomers. The hypothesis tested here is that a TPO-based adhesive is less toxic than a CQ-based adhesive. METHODS A CQ-based adhesive (SBU-CQ) (Scotchbond Universal, 3M ESPE) and its experimental counterpart with TPO (SBU-TPO) were tested for cytotoxicity in human pulp-derived cells (tHPC). Oxidative stress was analyzed via the generation of reactive oxygen species (ROS) and the expression of antioxidant enzymes. A dentin barrier test (DBT) was used to evaluate cell viability under simulated clinical conditions. RESULTS Unpolymerized SBU-TPO was significantly more toxic than SBU-CQ after a 24-h exposure, and TPO alone (EC50 = 0.06 mM) was more cytotoxic than CQ (EC50 = 0.88 mM), EDMAB (EC50 = 0.68 mM), or CQ/EDMAB (EC50 = 0.50 mM). Cultures preincubated with BSO (L-buthionine sulfoximine), an inhibitor of glutathione synthesis, indicated a minor role of glutathione in the cytotoxic responses toward the adhesives. Although no generation of ROS was detected, the differential expression of enzymatic antioxidants revealed that cells exposed to unpolymerized SBU-TPO or SBU-CQ are subject to oxidative stress. Polymerized SBU-TPO was more cytotoxic than SBU-CQ only under specific experimental conditions, and no cytotoxicity was detected in a DBT with a 200 μm dentin barrier. SIGNIFICANCE Not only do DC and monomer release determine the biocompatibility of adhesives; the cytotoxicity of the (photo)initiator should also be taken into account. The addition of TPO rendered a universal adhesive more toxic than its CQ-based counterpart; however, this effect could be annulled by a thin dentin barrier.
Abstract:
Technological advances in hardware, software, and IP networks such as the Internet or peer-to-peer file-sharing systems are threatening the music business. The result has been an increasing number of illegal copies available both online and offline. With the emergence of digital rights management systems (DRMS), the music industry seems to have found the appropriate tool to simultaneously fight piracy and monetize its assets. Although these systems are very powerful and include multiple technologies to prevent piracy, it is as yet unknown to what extent such systems are currently being used by content providers. We provide empirical analyses, results, and conclusions related to digital rights management systems and the protection of digital content in the music industry. The analysis shows that most content providers protect their digital content through a variety of technologies such as passwords or encryption. However, each protection technology has its own specific goal, and not all prevent piracy. The majority of the respondents are satisfied with their current protection but want to reinforce it in the future, owing to fear of increasing piracy. Surprisingly, although encryption is seen as the core DRM technology, only a few companies currently use it. Finally, half of the respondents do not believe in the success of DRMS and their ability to reduce piracy.
Abstract:
This is the third paper in a four-part series considering the fundamental question, "What does the word 'height' really mean?" The first paper reviewed reference ellipsoids and mean sea level datums. The second paper reviewed the physics of heights, culminating in a simple development of the geoid, and explained why mean sea level stations are not all at the same orthometric height. This third paper develops the principal notions of height, namely measured, differentially deduced changes in elevation, orthometric heights, Helmert orthometric heights, normal orthometric heights, dynamic heights, and geopotential numbers. We conclude with a more in-depth discussion of current thoughts regarding the geoid.
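As a brief orientation to the quantities this series works with, the standard textbook relationships (given here as general context, not as this paper's own derivation) are:

\[ C_P = W_0 - W_P = \int_0^{H} g\,dH, \qquad H_{\mathrm{orth}} = \frac{C_P}{\bar g}, \qquad H_{\mathrm{dyn}} = \frac{C_P}{\gamma_{45^\circ}} \]

where \(C_P\) is the geopotential number of point \(P\), \(W_0\) and \(W_P\) are the gravity potentials of the geoid and of the equipotential surface through \(P\), \(\bar g\) is the mean gravity along the plumb line between the geoid and \(P\), and \(\gamma_{45^\circ}\) is normal gravity at 45° latitude. Helmert's orthometric height approximates \(\bar g\) from the surface gravity value via the Poincaré–Prey reduction, \(\bar g \approx g + 0.0424\,H\) (with \(g\) in gal and \(H\) in km).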
Abstract:
The three articles that comprise this dissertation describe how small area estimation and geographic information systems (GIS) technologies can be integrated to provide useful information about the number of uninsured and where they are located. Comprehensive data about the numbers and characteristics of the uninsured are typically only available from surveys. Utilization and administrative data are poor proxies from which to develop this information. Those who cannot access services are unlikely to be fully captured, either by health care provider utilization data or by state and local administrative data. In the absence of direct measures, a well-developed estimation of the local uninsured count or rate can prove valuable when assessing the unmet health service needs of this population. However, the fact that these are "estimates" increases the chances that results will be rejected or, at best, treated with suspicion. The visual impact and spatial analysis capabilities afforded by GIS technology can strengthen the likelihood of acceptance of area estimates by those most likely to benefit from the information, including health planners and policy makers. The first article describes how uninsured estimates are currently being performed in the Houston metropolitan region. It details the synthetic model used to calculate numbers and percentages of uninsured, and how the resulting estimates are integrated into a GIS. The second article compares the estimation method of the first article with one currently used by the Texas State Data Center to estimate numbers of uninsured for all Texas counties. Estimates are developed for census tracts in Harris County, using both models with the same data sets, and the results are statistically compared. The third article describes a new, revised synthetic method that is being tested to provide uninsured estimates at sub-county levels for eight counties in the Houston metropolitan area. It is designed to replicate the same categorical results provided by a current U.S. Census Bureau estimation method. The estimates calculated by this revised model are compared to the most recent U.S. Census Bureau estimates, using the same areas and population categories.
Abstract:
To reach the goals established by the Institute of Medicine (IOM) and the Centers for Disease Control's (CDC) STOP TB USA, measures must be taken to curtail a future peak in tuberculosis (TB) incidence and to speed the currently stagnant rate of TB elimination. Both efforts will require, at minimum, consideration and understanding of the third dimension of TB transmission: the location-based spread of an airborne pathogen among persons known and unknown to each other. This consideration will require an elucidation of the areas within the U.S. that have endemic TB. The Houston Tuberculosis Initiative (HTI) was a population-based active surveillance of confirmed Houston/Harris County TB cases from 1995–2004. Strengths of this dataset include the molecular characterization of laboratory-confirmed cases, the collection of geographic locations (including home addresses) frequented by cases, and an HTI time period that parallels the decline in TB incidence in the United States (U.S.). The HTI dataset was used in this secondary data analysis to implement a GIS analysis of TB cases, the locations frequented by cases, and their association with risk factors for TB transmission. This study reports, for the first time, the incidence of TB among the homeless in Houston, Texas. The homeless are an at-risk population for TB disease, yet they are also a population whose TB incidence has been unknown and unreported because they are not enumerated. The first section of this dissertation identifies local areas in Houston with endemic TB disease. Many Houston TB cases who reported living in these endemic areas also share the TB risk factor of current or recent homelessness. Merging the 2004–2005 Houston enumeration of the homeless with historical HTI surveillance data of TB cases in Houston enabled this first-time report of TB risk among the homeless in Houston. The homeless were more likely to be US-born, to belong to a genotypic cluster, and to belong to a cluster of larger size. The calculated average incidence among homeless persons was 411/100,000, compared to 9.5/100,000 among the housed. These alarming rates are driven not by co-infection but by social determinants. Unsheltered persons were hospitalized for more days and required more follow-up time from staff than those who reported a steady housing situation. The homeless are a specific example of the increased targeting of prevention dollars that could occur if TB rates were reported for specific areas with known health disparities rather than as a generalized rate normalized over a diverse population. It has been estimated that 27% of Houstonians use public transportation. The city layout allows bus routes to run like veins connecting even the most diverse populations within the metropolitan area. In a secondary data analysis, frequent bus use (defined as riding a route weekly) among TB cases was assessed for its relationship with known TB risk factors. The spatial distribution of genotypic clusters associated with bus use was assessed, along with the reported routes and epidemiologic links among cases belonging to the identified clusters. TB cases who reported frequent bus use were more likely to have demographic and social risk factors associated with poverty, immune suppression, and health disparities.
An equal proportion of bus riders and non-bus riders were cultured for Mycobacterium tuberculosis, yet 75% of bus riders were genotypically clustered, indicating recent transmission, compared with 56% of non-bus riders (OR = 2.4, 95% CI 2.0–2.8, p < 0.001). Bus riders had a mean cluster size of 50.14 vs. 28.9 (p < 0.001). Second-order spatial analysis of clustered fingerprint 2 (n = 122), a Beijing family cluster, revealed geographic clustering among cases based on their reported bus use. Univariate and multivariate analyses of routes reported by cases belonging to these clusters found that 10 of the 14 clusters were associated with bus use. Individual Metro routes, including one route serving the local hospitals, were found to be risk factors for belonging to a cluster shown to be endemic in Houston. The routes themselves geographically connect the census tracts previously identified as having endemic TB. 78% (15/23) of the Houston Metro routes investigated had one or more print groups reporting frequent use in every HTI study year. We present data on three specific but clonally related print groups and show that bus use is clustered in time by route and is the only known link between cases in one of the three prints: print 22. (Abstract shortened by UMI.)
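The reported odds ratio follows directly from the two clustering proportions:

\[ \mathrm{OR} = \frac{0.75/0.25}{0.56/0.44} \approx \frac{3.00}{1.27} \approx 2.4 \]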
Abstract:
Recent developments in federal policy have prompted the creation of state evaluation frameworks for principals and teachers that hold educators accountable for effective practices and student outcomes. These changes have created a demand for formative evaluation instruments that reflect current accountability pressures and can be used by schools to focus school improvement and leadership development efforts. The Comprehensive Assessment of Leadership for Learning (CALL) is a next-generation, 360-degree online assessment and feedback system that reflects best practices in feedback design. Unique characteristics of CALL include its focus on leadership distributed throughout the school rather than carried out by an individual leader; on the assessment of leadership tasks rather than perceptions of leadership practice; on the larger, more complex systems of middle and high schools; and on the transparency of assessment design. This paper describes research contributing to the design and validation of the CALL survey instrument.
Abstract:
Feline immunodeficiency virus (FIV)-based gene transfer systems are being seriously considered for human gene therapy as an alternative to vectors based on primate lentiviruses, a genetically complex group of retroviruses capable of infecting non-dividing cells. The greater phylogenetic distance between the feline and primate lentiviruses is thought to reduce the chances of generating recombinant viruses. However, the safety of FIV-based vector systems has not been tested experimentally. Since primate lentiviruses such as human and simian immunodeficiency viruses (HIV/SIV) can cross-package each other's genomes, we tested this trait with respect to FIV. Unexpectedly, the feline and primate lentiviruses were reciprocally able to cross-package and propagate each other's RNA genomes. This was largely due to the recognition of viral packaging signals by the heterologous proteins. However, a simple retrovirus such as Mason-Pfizer monkey virus (MPMV) was unable to package FIV RNA. Interestingly, FIV could package MPMV RNA, but could not propagate it through further steps of replication. These findings suggest that upon co-infection of the same host, cross-packaging may allow distinct retroviruses to generate chimeric variants with unknown pathogenic potential. To understand the packaging determinants of FIV, we conducted a detailed mutational analysis of the region thought to contain the FIV packaging signal. We show that the first 90–120 nt of the 5′ untranslated region (UTR) and the first 90 nt of gag were simultaneously required for efficient FIV RNA packaging. These results suggest that the primary FIV packaging signal is multipartite and discontinuous, composed of two core elements separated by 150 nt of the 5′ UTR. The above studies are being used towards the development of safer FIV-based self-inactivating (SIN) vectors. These vectors are designed to eliminate the ability of FIV transfer vector RNAs to be mobilized by primate lentiviral proteins that may be present in the target cells. A preliminary test of the first generation of these vectors revealed that they are incapable of being propagated by feline proteins. The inability of FIV transfer vectors to express packageable vector RNA after integration should greatly increase the safety of FIV vectors for human gene therapy.
Abstract:
Nowadays, computing platforms consist of a very large number of components that must be supplied with different voltage levels and power requirements. Even a very small platform, such as a handheld computer, may contain more than twenty different loads and voltage regulators. The power delivery designers of these systems are required to provide, in a very short time, the right power architecture: one that optimizes performance and meets electrical specifications as well as cost and size targets. The appropriate selection of the architecture and converters directly defines the performance of a given solution. Therefore, the designer needs to be able to evaluate a significant number of options in order to know with good certainty whether the selected solutions meet the size, energy-efficiency, and cost targets. The difficulty of selecting the right solution arises from the wide range of power conversion products provided by different manufacturers. These products range from discrete components (to build converters) to complete power conversion modules employing different manufacturing technologies. Consequently, in most cases it is not possible to analyze all the alternatives (combinations of power architectures and converters) that can be built, and the designer has to select a limited number of converters in order to simplify the analysis. To overcome these difficulties, this thesis proposes a new design methodology for power supply systems. The methodology integrates evolutionary computation techniques so that a large number of possibilities can be analyzed. This exhaustive analysis helps the designer quickly define a set of feasible solutions and select the best performance trade-off for each application. The proposed approach consists of two key steps: the automatic generation of architectures and the optimized selection of components (an illustrative sketch of the selection step is given below). This thesis details the implementation of both steps. The usefulness of the methodology is corroborated by contrasting the results using real problems and by experiments designed to test the limits of the algorithms.
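As a rough illustration of what an evolutionary component-selection step can look like, the sketch below uses a simple genetic algorithm to pick one converter per load from a small catalogue so as to minimize a weighted cost-plus-losses objective. Everything in it (the Converter record, the CATALOGUE and LOADS data, the weighting, and the fitness penalty) is an illustrative assumption, not the selection procedure actually developed in the thesis.

import random
from dataclasses import dataclass

# Hypothetical catalogue entry; not the thesis's actual data model.
@dataclass
class Converter:
    name: str
    cost: float        # unit cost
    efficiency: float  # 0..1 at nominal load
    max_power: float   # W

# Illustrative catalogue and per-load power demand (W); one converter is chosen per load.
CATALOGUE = [
    Converter("buck_A", 1.2, 0.90, 10.0),
    Converter("buck_B", 2.5, 0.94, 25.0),
    Converter("ldo_C", 0.4, 0.75, 3.0),
    Converter("module_D", 6.0, 0.96, 60.0),
]
LOADS = [2.0, 8.0, 20.0]

def fitness(chromosome):
    """Weighted cost + conversion losses; heavy penalty if a converter is undersized."""
    cost = losses = penalty = 0.0
    for load_w, idx in zip(LOADS, chromosome):
        conv = CATALOGUE[idx]
        cost += conv.cost
        losses += load_w * (1.0 / conv.efficiency - 1.0)
        if conv.max_power < load_w:
            penalty += 100.0
    return cost + 0.5 * losses + penalty  # lower is better

def evolve(pop_size=40, generations=60, mutation_rate=0.1):
    # Each individual is a list of catalogue indices, one per load.
    pop = [[random.randrange(len(CATALOGUE)) for _ in LOADS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        survivors = pop[: pop_size // 2]               # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, len(LOADS))      # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < mutation_rate:        # point mutation
                child[random.randrange(len(LOADS))] = random.randrange(len(CATALOGUE))
            children.append(child)
        pop = survivors + children
    best = min(pop, key=fitness)
    return best, fitness(best)

if __name__ == "__main__":
    best, score = evolve()
    print([CATALOGUE[i].name for i in best], round(score, 2))

A real tool would of course draw the catalogue from manufacturer data, model efficiency as a function of load, and co-optimize the architectures produced by the generation step.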
Abstract:
This article analyses the long-term performance of collective off-grid photovoltaic (PV) systems in rural areas. The use of collective PV systems for the electrification of small and medium-size villages in developing countries has increased in recent years. They are basically set up as stand-alone installations (PV-diesel hybrid or pure PV) with no connection to other electrical grids. Their isolated conditions and usual installation sites (far from commercial/industrial centers) require autonomous and reliable technology. Different but related factors affect their performance and energy supply; some are strictly technical, while others depend on external issues such as the solar energy resource and the users' energy and power consumption. The work presented is based on the field operation of twelve collective PV installations supplying electricity to off-grid villages located in the province of Jujuy, Argentina. Five of them have PV generators as their only power source, while the other seven include the support of diesel generator sets. Load demand evolution, energy productivity, and fuel consumption are analyzed. In addition, energy generation strategies (PV/diesel) are discussed.
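The energy-productivity figures typically reported for such installations are the final yield, the reference yield, and the performance ratio; the standard definitions (per IEC 61724) are given below as background and may differ in detail from the indicators used in the article:

\[ Y_f = \frac{E_{AC}}{P_{STC}}, \qquad Y_r = \frac{H_{POA}}{G_{STC}}, \qquad PR = \frac{Y_f}{Y_r} \]

where \(E_{AC}\) is the AC energy delivered over the evaluation period, \(P_{STC}\) the rated array power at standard test conditions, \(H_{POA}\) the in-plane irradiation over the same period, and \(G_{STC} = 1\,\mathrm{kW/m^2}\).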
Abstract:
Multicarrier transmission such as OFDM (orthogonal frequency division multiplexing) is an established technique for radio transmission systems and can be considered a promising approach for next-generation wireless systems. However, in order to meet the demand for ever-higher data rates, particularly in wireless technologies, systems with multiple transmit and receive antennas, also called MIMO (multiple-input multiple-output) systems, have become indispensable for future generations of wireless systems. Owing to the strongly increasing demand for high-data-rate transmission, frequency non-selective MIMO links have reached a state of maturity, and frequency-selective MIMO links are now the focus of interest. In this field, the combination of MIMO transmission and OFDM can be considered an essential part of fulfilling the requirements of future generations of wireless systems. While single-user scenarios have reached a state of maturity, multi-user scenarios require substantial further research. In contrast to ZF (zero-forcing) multiuser transmission techniques, the individual users' channel characteristics are taken into consideration in this contribution. The joint optimization of the number of activated MIMO layers and the number of transmitted bits per subcarrier shows that not all user-specific MIMO layers per subcarrier necessarily have to be activated in order to minimize the overall BER under the constraint of a given fixed data throughput (an illustrative sketch of such a layer/bit allocation search is given below).
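As a rough, generic illustration of the kind of layer/bit allocation the abstract refers to, the sketch below exhaustively searches QAM bit loadings over the spatial layers of a single subcarrier and keeps the allocation with the lowest approximate BER for a fixed total number of bits. The layer SNR values, the allowed constellation sizes, and the Gaussian-approximation BER formula are generic textbook assumptions rather than the system model of this contribution.

import math
from itertools import product

def qam_ber(bits_per_symbol, snr_linear):
    """Approximate BER of square M-QAM over AWGN (standard Gray-mapping approximation)."""
    if bits_per_symbol == 0:
        return 0.0
    m = 2 ** bits_per_symbol
    # Q(x) expressed through the complementary error function
    q = 0.5 * math.erfc(math.sqrt(3.0 * snr_linear / (m - 1)) / math.sqrt(2.0))
    return (4.0 / bits_per_symbol) * (1.0 - 1.0 / math.sqrt(m)) * q

def best_allocation(layer_snrs, total_bits, allowed_bits=(0, 2, 4, 6)):
    """Exhaustively search per-layer bit loadings for a fixed throughput.

    Returns the allocation minimizing the bit-weighted average BER;
    a layer loaded with 0 bits is simply not activated.
    """
    best, best_ber = None, float("inf")
    for alloc in product(allowed_bits, repeat=len(layer_snrs)):
        if sum(alloc) != total_bits:
            continue
        errors = sum(b * qam_ber(b, snr) for b, snr in zip(alloc, layer_snrs))
        ber = errors / total_bits
        if ber < best_ber:
            best, best_ber = alloc, ber
    return best, best_ber

if __name__ == "__main__":
    # Hypothetical per-layer post-detection SNRs (linear scale) on one subcarrier.
    snrs = [200.0, 50.0, 8.0, 1.5]
    alloc, ber = best_allocation(snrs, total_bits=8)
    print("bits per layer:", alloc, "approx BER: %.2e" % ber)

With the example SNRs above, the best allocation leaves the weakest layers unloaded, mirroring the observation that activating all layers is not necessarily optimal for a fixed data throughput.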