942 results for State Extension Problem
Abstract:
Increasing attention has been given to the problem of medical errors over the past decade, including a strong interest in reducing the occurrence of healthcare-associated infections (HAIs). Acting concurrently with federal initiatives, the majority of U.S. states have statutorily required reporting and public disclosure of HAI data. Although these state statutory enactments and other state initiatives reflect strong concern about HAIs, vast differences in each state's HAI reporting and public disclosure requirements create a varied and unequal response to what has become a national problem. The purpose of this research was to explore the variations in state HAI legal requirements and other state mandates. State actions, including statutory enactments, regulations, and other initiatives related to state reporting and public disclosure mechanisms, were compared, discussed, and analyzed to illustrate the impact of this lack of uniformity as a public health concern. The HAI statutes, administrative requirements, and other mandates of each state and two U.S. territories were reviewed to answer the following seven research questions: How far has the state progressed in its HAI initiative? If the state has an HAI reporting requirement, is it mandatory or voluntary? What healthcare entities are subject to the reporting requirements? What data collection system is utilized? What measures are required to be reported? What is the public disclosure mechanism? How is the underlying reported information protected from public disclosure or other legal release? Secondary publicly available data, including state statutes, administrative rules, and other initiatives, were utilized to examine the current HAI-related legislative and administrative activity of the study subjects.
The information was reviewed and analyzed to determine variations in HAI reporting and public disclosure laws, with particular attention to the seven key research questions. The research revealed that considerable progress has been achieved in state HAI initiatives since 2004. Despite this progress, however, a comparative review of the state laws and HAI programs found considerable variation in the type of reporting requirements, the healthcare facilities subject to the reporting laws, the data collection systems utilized, the reportable measures, the public disclosure requirements, and the confidentiality and privilege provisions. These wide variations in state statutes, administrative rules, and other agency directives create a fragmented and inconsistent approach to addressing the nationwide occurrence of HAIs in the U.S. healthcare system.
Abstract:
The present study analyzed some of the effects of imposing a cost-sharing requirement on users of a state's health service program. The study population consisted of people who were in diagnosed medical need and included, but was not limited to, people in financial need. The purpose of the study was to determine whether the cost-sharing requirement had any detrimental effects on the service population. Changes in the characteristics of service consumers and in utilization patterns were analyzed using time-series techniques and pre-post policy comparisons. The study hypotheses stated that the distribution of services provided, diagnoses serviced, and consumer income levels would change following the cost-sharing policy. Analysis of the data revealed that neither the characteristics of service users (income, race, sex, etc.) nor the services provided by the program changed significantly following the policy. The results were explainable in part by the fact that all of the program participants were in diagnosed medical need; therefore, their use of "discretionary" or "less necessary" services was limited. The study's findings supported the work of Joseph Newhouse, Charles Phelps, and others who have contended that necessary service use would not be detrimentally affected by reasonable cost-sharing provisions. These contentions raise the prospect of incorporating cost-sharing into programs such as Medicaid, which, at this writing, do not demand any consumer payment for services. The study concluded with a discussion of the cost-containment problem in health services. The efficacy of cost-sharing was considered relative to other financing and reimbursement strategies such as HMOs, self-funding, and reimbursement for less costly services and places of service.
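A pre-post comparison of the kind described can be sketched in a few lines; the monthly utilization counts and the use of Welch's t statistic below are illustrative assumptions, not the study's actual data or exact method:

```python
import statistics

# Hypothetical monthly service-utilization counts (invented for
# illustration, not the study's data).
pre  = [102, 98, 105, 110, 99, 103, 107, 101]   # before cost-sharing
post = [100, 97, 104, 108, 102, 99, 106, 103]   # after cost-sharing

def welch_t(a, b):
    """Welch's two-sample t statistic for a simple pre-post comparison."""
    ma, mb = statistics.mean(a), statistics.mean(b)
    va, vb = statistics.variance(a), statistics.variance(b)  # sample variances
    return (ma - mb) / ((va / len(a) + vb / len(b)) ** 0.5)

t = welch_t(pre, post)
print(round(t, 3))
```

With these invented numbers the statistic is small, which is the pattern consistent with the study's finding of no significant post-policy change in utilization.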
Commercial Sexual Exploitation and Missing Children in the Coastal Region of Sao Paulo State, Brazil
Abstract:
The commercial sexual exploitation of children (CSEC) has emerged as one of the world's most heinous crimes. The problem affects millions of children worldwide, and no country or community is fully immune from its effects. This paper reports first-generation research on the relationship between CSEC and the phenomenon of missing children living in and around the coastal regions of the state of Sao Paulo, Brazil, the country's richest state. Data are reported from interviews and case records of 64 children and adolescents who were receiving care through a major youth-serving non-governmental organization (NGO) located in the coastal city of Sao Vicente. Data about missing children and adolescents were also collected from police reports (858 in total). In Brazil, prostitution is not itself a crime; the exploitation of prostitution, however, is. The police therefore have no information about children or adolescents in this situation, only about the clients and exploiters. Thus, this investigation sought to accomplish two objectives: 1) to establish the relationship between missing and sexually exploited children; and 2) to sensitize police and child-serving authorities in both the governmental and nongovernmental sectors to the nature, extent, and seriousness of the many unrecognized cases of CSEC and missing children that come to their attention. The results indicated that the missing-children police reports are significantly underestimated: they do not represent the number of children who run away and/or are involved in commercial sexual exploitation.
Abstract:
It is well known that an identification problem exists in the analysis of age-period-cohort data because of the relationship among the three factors (date of birth + age at death = date of death). There are numerous suggestions about how to analyze such data, but no single solution has been satisfactory. The purpose of this study is to provide another analytic method by extending Cox's life-table regression model with time-dependent covariates. The new approach has the following features: (1) It is based on the conditional maximum likelihood procedure using a proportional hazard function described by Cox (1972), treating the age factor as the underlying hazard to estimate the parameters for the cohort and period factors. (2) The model is flexible, so that both the cohort and period factors can be treated as dummy or continuous variables, and parameter estimates can be obtained for numerous combinations of variables as in a regression analysis. (3) The model is applicable even when the time periods are unequally spaced. Two specific models are considered to illustrate the new approach and are applied to U.S. prostate cancer data. We find that there are significant differences between all cohorts and that there is a significant period effect for both whites and nonwhites. The underlying hazard increases exponentially with age, indicating that older people have a much higher risk than younger people. A log transformation of relative risk shows that prostate cancer risk declined in recent cohorts under both models. However, prostate cancer risk declined 5 cohorts (25 years) earlier for whites than for nonwhites under the period factor model (0 0 0 1 1 1 1). These latter results are similar to the previous study by Holford (1983). The new approach offers a general method to analyze age-period-cohort data without imposing any arbitrary constraint in the model.
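The extended hazard model described above can be written compactly; in a standard formulation (the symbols here are illustrative, not necessarily the dissertation's own notation), the age-specific hazard with cohort and time-dependent period covariates is:

```latex
\lambda(t \mid z) \;=\; \lambda_0(t)\,
\exp\bigl(\beta_c\, z_c + \beta_p\, z_p(t)\bigr)
```

where $t$ indexes age, $\lambda_0(t)$ is the underlying age hazard, $z_c$ is the fixed birth-cohort covariate, and $z_p(t)$ is the period covariate, which changes with time and is therefore time-dependent. The identification problem is sidestepped because age enters nonparametrically through $\lambda_0$ rather than as a third regression term.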
Abstract:
The Two State model describes how drugs activate receptors by inducing or supporting a conformational change in the receptor from "off" to "on". The beta 2 adrenergic receptor system is the model system that was used to formalize the concept of two states, and the mechanism of hormone agonist stimulation of this receptor is similar to ligand activation of other seven-transmembrane receptors. Hormone binding to beta 2 adrenergic receptors stimulates the intracellular production of cyclic adenosine monophosphate (cAMP), which is mediated through the stimulatory guanyl nucleotide binding protein (Gs) interacting with the membrane-bound enzyme adenylyl cyclase (AC). The effects of cAMP include protein phosphorylation, metabolic regulation, and transcriptional regulation. The beta 2 adrenergic receptor system is the best known of its family of G protein coupled receptors. Ligands at this receptor have been scrutinized extensively, both in search of more effective therapeutic agents and for insight into the biochemical mechanism of receptor activation. Hormone binding to the receptor is thought to induce a conformational change that increases the receptor's affinity for inactive Gs, catalyzes the release of GDP, and leads to the subsequent binding of GTP and activation of Gs. However, some beta 2 ligands are more efficient at this transformation than others, and the underlying mechanism for this drug specificity is not fully understood. The central problem in pharmacology is the characterization of drugs by their effect on physiological systems; consequently, the search for a rational scale of drug effectiveness has occupied many investigators and continues to the present as models are proposed, tested, and modified.
The major results of this thesis show that for many beta 2 adrenergic ligands the Two State model is quite adequate to explain their activity, but dobutamine ((±)-3,4-dihydroxy-N-[3-(4-hydroxyphenyl)-1-methylpropyl]-β-phenethylamine) fails to conform to the model's predictions. Dobutamine is a weak partial agonist, yet it forms a large amount of high-affinity complexes, and these complexes form much more readily at low concentrations than at higher ones. Finally, dobutamine causes the beta 2 adrenergic receptor to form high-affinity complexes at a much faster rate than can be accounted for by its low efficiency in activating AC. Because the Two State model fails to predict the activity of dobutamine in three different ways, it has been disproven in its strictest form.
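In its standard equilibrium form (a textbook formulation, not a result specific to this thesis), the Two State model predicts the fraction of active receptor as a function of agonist concentration $[A]$:

```latex
f_{\text{active}} \;=\;
\frac{L\left(1 + \alpha [A]/K_A\right)}
     {1 + L + \bigl([A]/K_A\bigr)\left(1 + \alpha L\right)}
```

where $L = [R^{*}]/[R]$ is the equilibrium constant between the inactive and active states, $K_A$ is the dissociation constant of $A$ for the inactive state, and $\alpha > 1$ is the factor by which $A$ prefers $R^{*}$. In this scheme a ligand's efficacy is fixed by $\alpha$ alone, which is why a single parameter set cannot reproduce dobutamine's combination of weak activation and abundant, rapidly formed high-affinity complexes.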
Abstract:
In this paper we present research carried out between 2010 and 2012 under a research scholarship awarded by the State University of La Plata. It addresses the problem of the transition from university to professional work, and forms part of a body of studies on the importance of social representations as factors that affect the performance of specific activities. In this case, the aim is to identify the relations between graduates' representations of the professional role of the psychologist and their job placement and performance. The theoretical framework draws on Social Psychology and vocational guidance theories. Methodologically, this is an exploratory and descriptive study based on a multiple 'triangulation' design, which allows different strategies, theoretical perspectives, and sources to be combined in the same investigation; qualitative techniques were nevertheless prioritized in analyzing the data. Finally, some considerations are offered about the social representations concerning professional performance, mainly in the clinical field and in education, and about the problems both raise relative to other fields.
Abstract:
Thecosome pteropods (shelled pelagic molluscs) can play an important role in the food web of various ecosystems and a key role in the cycling of carbon and carbonate. Because they build an aragonitic shell, they could be very sensitive to ocean acidification driven by the increase in anthropogenic CO2 emissions. The impact of changes in carbonate chemistry was investigated in Limacina helicina, a key species of Arctic ecosystems. Pteropods were kept in culture under controlled pH conditions corresponding to pCO2 levels of 350 and 760 µatm. Calcification, estimated using a fluorochrome and the radioisotope 45Ca, exhibited a 28% decrease at the pH value expected for 2100 compared with the present value. This result supports concern for the future of pteropods in a high-CO2 world, as well as for the species that depend on them as a food resource. A decline of their populations would likely cause dramatic changes to the structure, function, and services of polar ecosystems.
Abstract:
A clash between the police and journalists covering a Falun Gong gathering in Surabaya in 2011 revealed a significant change in the triangular relationship between Indonesia, China, and the ethnic Chinese in Indonesia. During the Suharto period, the ethnic Chinese in Indonesia and China as a foreign state were both problems for the Indonesian government. After the political reforms in Indonesia, together with the rise of China in the 2000s, in some situations it is the Indonesian government, together with the Chinese government, that is the problem for some ethnic Chinese in Indonesia. Ethnic Chinese were once seen as being close to China, and their loyalty to the nation was doubted; now it is the Indonesian government that is viewed as being too close to China, and thus as harming national integrity and suspected of being unnationalistic.
Abstract:
This paper develops a quantitative measure of allocation efficiency, an extension of the dynamic Olley-Pakes productivity decomposition proposed by Melitz and Polanec (2015). The extended measure simultaneously captures the degree of misallocation within a group and between groups, while also capturing the contribution of entering and exiting firms to aggregate productivity growth. The measure is used to empirically assess the degree of misallocation in China using manufacturing firm-level data from 2004 to 2007. Misallocation among industrial sectors is found to have increased over time, and allocation efficiency within an industry worsened in industries that use more capital and whose firms have relatively higher state-owned market shares. Allocation efficiency among the three ownership sectors (state-owned, domestic private, and foreign) tends to improve in industries where market share moves from the less productive state-owned sector to the more productive private sector.
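The static Olley-Pakes building block underlying the dynamic decomposition can be sketched in a few lines (the shares and productivities below are invented for illustration): aggregate share-weighted productivity splits into an unweighted mean plus a covariance term, and that covariance term is the allocation-efficiency measure.

```python
def op_decomposition(shares, prods):
    """Static Olley-Pakes decomposition: aggregate (share-weighted)
    productivity = unweighted mean + a covariance ('allocation') term."""
    n = len(prods)
    mean_p = sum(prods) / n
    mean_s = 1.0 / n  # shares sum to one, so the mean share is 1/n
    agg = sum(s * p for s, p in zip(shares, prods))
    cov = sum((s - mean_s) * (p - mean_p) for s, p in zip(shares, prods))
    return mean_p, cov, agg  # identity: agg == mean_p + cov

# Toy data: more productive firms hold larger shares, so cov > 0.
shares = [0.5, 0.3, 0.2]
prods  = [1.2, 1.0, 0.8]
mean_p, cov, agg = op_decomposition(shares, prods)
print(round(agg, 3), round(mean_p + cov, 3))
```

A positive covariance term means market shares are concentrated in the more productive firms; the dynamic Melitz-Polanec version adds survivor, entrant, and exiter components on top of this identity.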
Abstract:
Within the framework of the Collaborative Project for a European Sodium Fast Reactor, the reactor physics group at UPM is working on the extension of its in-house multi-scale advanced deterministic code COBAYA3 to Sodium Fast Reactors (SFR). COBAYA3 is a 3D multigroup neutron kinetics diffusion code that can be used either as a pin-by-pin code or as a stand-alone nodal code by using the analytic nodal diffusion solver ANDES. It is coupled with thermal-hydraulics codes such as COBRA-TF and FLICA, allowing transient analysis of LWRs at both fine-mesh and coarse-mesh scales. In order to also enable 3D pin-by-pin and nodal coupled NK-TH simulations of SFRs, different developments are in progress. This paper presents the first steps towards the application of COBAYA3 to this type of reactor. The ANDES solver, already extended to triangular-Z geometry, has been applied to fast reactor steady-state calculations. The required cross section libraries were generated with the ERANOS code for several configurations. The limitations encountered in the application of the Analytic Coarse Mesh Finite Difference (ACMFD) method implemented inside ANDES to fast reactors are presented, and the sensitivity of the method when using a high number of energy groups is studied. ANDES performance is assessed by comparison with the results provided by ERANOS, using a mini-core model in 33 energy groups. Furthermore, a benchmark from the NEA for a small 3D FBR in hexagonal-Z geometry and 4 energy groups is also employed to verify the behavior of the code with few energy groups.
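The steady-state balance solved by a multigroup diffusion code of the kind described is, in its standard textbook form (shown here for orientation, not as COBAYA3's exact discretization):

```latex
-\nabla \cdot D_g \nabla \phi_g + \Sigma_{r,g}\,\phi_g
\;=\; \frac{\chi_g}{k_{\text{eff}}} \sum_{g'} \nu\Sigma_{f,g'}\,\phi_{g'}
\;+\; \sum_{g' \neq g} \Sigma_{s,\,g' \to g}\,\phi_{g'}
```

one equation per energy group $g$, with diffusion coefficient $D_g$, removal cross section $\Sigma_{r,g}$, fission spectrum $\chi_g$, and scattering transfers $\Sigma_{s,g' \to g}$. The number of groups $G$ (33 in the mini-core comparison, 4 in the NEA benchmark) sets the size of the coupled system, which is why the sensitivity of the ACMFD method to a high number of groups matters for fast-reactor spectra.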
Abstract:
OntoTag - A Linguistic and Ontological Annotation Model Suitable for the Semantic Web
1. INTRODUCTION. LINGUISTIC TOOLS AND ANNOTATIONS: THEIR LIGHTS AND SHADOWS
Computational Linguistics is already a consolidated research area. It builds upon the results of two other major areas, namely Linguistics and Computer Science and Engineering, and it aims at developing computational models of human language (or natural language, as it is termed in this area). Possibly, its most well-known applications are the different tools developed so far for processing human language, such as machine translation systems and speech recognizers or dictation programs.
These tools for processing human language are commonly referred to as linguistic tools. Apart from the examples mentioned above, there are also other types of linguistic tools that perhaps are not so well-known, but on which most of the other applications of Computational Linguistics are built. These other types of linguistic tools comprise POS taggers, natural language parsers and semantic taggers, amongst others. All of them can be termed linguistic annotation tools.
Linguistic annotation tools are important assets. In fact, POS and semantic taggers (and, to a lesser extent, also natural language parsers) have become critical resources for the computer applications that process natural language. Hence, any computer application that has to analyse a text automatically and ‘intelligently’ will include at least a module for POS tagging. The more an application needs to ‘understand’ the meaning of the text it processes, the more linguistic tools and/or modules it will incorporate and integrate.
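As a minimal illustration of the kind of POS-tagging module described above, a toy unigram tagger can be written in a few lines; the lexicon and tag names here are invented for the example, and real taggers rely on trained statistical or neural models:

```python
# Toy unigram POS tagger: each token gets its single most likely tag
# from a tiny hand-made lexicon (illustrative only).
LEXICON = {"the": "DET", "cat": "NOUN", "dog": "NOUN",
           "sat": "VERB", "on": "ADP", "mat": "NOUN"}

def pos_tag(tokens, default="NOUN"):
    """Assign each token a tag; unknown words fall back to a default
    tag -- a crude heuristic that is one source of the error rates
    the text mentions."""
    return [(t, LEXICON.get(t.lower(), default)) for t in tokens]

print(pos_tag("The cat sat on the mat".split()))
```

Even this sketch shows the two limitations discussed below: it annotates at one linguistic level only, and its tag inventory is an ad hoc schema that another tool would not necessarily share.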
However, linguistic annotation tools have still some limitations, which can be summarised as follows:
1. Normally, they perform annotations only at a certain linguistic level (that is, Morphology, Syntax, Semantics, etc.).
2. They usually introduce a certain rate of errors and ambiguities when tagging. This error rate ranges from 10 percent up to 50 percent of the units annotated for unrestricted, general texts.
3. Their annotations are most frequently formulated in terms of an annotation schema designed and implemented ad hoc.
A priori, it seems that the interoperation and the integration of several linguistic tools into an appropriate software architecture could most likely solve the limitations stated in (1). Besides, integrating several linguistic annotation tools and making them interoperate could also minimise the limitation stated in (2). Nevertheless, in the latter case, all these tools should produce annotations for a common level, which would have to be combined in order to correct their corresponding errors and inaccuracies. Yet, the limitation stated in (3) prevents both types of integration and interoperation from being easily achieved.
In addition, most high-level annotation tools rely on other, lower-level annotation tools and their outputs to generate their own annotations. For example, sense-tagging tools (operating at the semantic level) often use POS taggers (operating at a lower level, i.e., the morphosyntactic) to identify the grammatical category of the word or lexical unit they are annotating. Accordingly, if a faulty or inaccurate low-level annotation tool is to be used by another, higher-level one in its process, the errors and inaccuracies of the former should be minimised in advance. Otherwise, these errors and inaccuracies would be transferred to (and even magnified in) the annotations of the high-level annotation tool.
Therefore, it would be quite useful to find a way to
(i) correct or, at least, reduce the errors and the inaccuracies of lower-level linguistic tools;
(ii) unify the annotation schemas of different linguistic annotation tools or, more generally speaking, make these tools (as well as their annotations) interoperate.
Clearly, solving (i) and (ii) should ease the automatic annotation of web pages by means of linguistic tools, and their transformation into Semantic Web pages (Berners-Lee, Hendler and Lassila, 2001). Yet, as stated above, (ii) is a type of interoperability problem. There again, ontologies (Gruber, 1993; Borst, 1997) have been successfully applied thus far to solve several interoperability problems. Hence, ontologies should also help solve the aforementioned problems and limitations of linguistic annotation tools.
Thus, to summarise, the main aim of the present work was to somehow combine these separate approaches, mechanisms and tools for annotation from Linguistics and Ontological Engineering (and the Semantic Web) into a sort of hybrid (linguistic and ontological) annotation model, suitable for both areas. This hybrid (semantic) annotation model should (a) benefit from the advances, models, techniques, mechanisms and tools of these two areas; (b) minimise (and even solve, when possible) some of the problems found in each of them; and (c) be suitable for the Semantic Web. The concrete goals that helped attain this aim are presented in the following section.
2. GOALS OF THE PRESENT WORK
As mentioned above, the main goal of this work was to specify a hybrid (that is, linguistically-motivated and ontology-based) model of annotation suitable for the Semantic Web (i.e. it had to produce a semantic annotation of web page contents). This entailed that the tags included in the annotations of the model had to (1) represent linguistic concepts (or linguistic categories, as they are termed in ISO/DCR (2008)), in order for this model to be linguistically-motivated; (2) be ontological terms (i.e., use an ontological vocabulary), in order for the model to be ontology-based; and (3) be structured (linked) as a collection of ontology-based
Abstract:
Switch-mode power supplies (SMPS) are widely used in a great variety of applications. The most challenging task for SMPS designers is to achieve high-efficiency operation and high power density simultaneously. The size and weight of a power converter are dominated by the passive components, since these elements are normally larger and heavier than the other elements in the circuit. For a given output power, the amount of energy stored in the converter that must be delivered to the load in each switching cycle is inversely proportional to the converter's switching frequency. Therefore, increasing the switching frequency is considered a means of achieving more compact solutions with higher power density levels. The importance of research in the high switching frequency range lies in all the benefits that can be achieved: besides the reduction in the size of the passive components, increasing the switching frequency can significantly improve the dynamic performance of power converters. Small energy storage and a short switching period lead to a faster transient response of the converter in the presence of input voltage or load variations. The most important limitations on increasing the switching frequency are related to higher losses in conventional magnetic cores, as well as winding losses due to skin and proximity effects. A further potential problem is the increased effect of the parasitic elements of the magnetic components (leakage inductance and capacitance between the windings), which cause additional losses due to unwanted currents.
Another limiting factor is the increase in switching losses and the greater influence of parasitic elements (printed circuit traces, interconnections, and packaging) on circuit behavior. Resonant topologies can address these problems by using soft-switching techniques to reduce switching losses while incorporating the parasitics into the circuit elements. However, the performance gains are significantly reduced by circulating currents when the converter operates away from its nominal operating conditions: as the input voltage or the load changes, the circulating currents increase compared with those at nominal conditions. Many of the potential benefits of operating resonant converters at higher frequency can be obtained if they are employed in applications with favorable input voltage conditions, such as those found in distributed power architectures. Load regulation, and particularly input voltage regulation, reduce both the power density and the efficiency of a converter. Owing to the relatively constant bus voltage found in distributed power architectures, resonant converters are well suited for use as bus converters (solid-state dc/dc transformers). Commercial two-port dc/dc transformer products with very high power density and efficiency are already available on the market; they are based on a series resonant converter operating exactly at the resonant frequency, in the megahertz range. However, future improvements in the efficiency of power architectures are expected to come from the use of two or more low-voltage distribution buses instead of a single one.
With this in mind, the main objective of this thesis is to apply the concept of the series resonant converter operating at its optimum point to a new bidirectional multiple-port dc/dc transformer that addresses the future needs of power architectures. The new bidirectional multiple-port dc/dc transformer is based on the series resonant converter topology and reduces the number of magnetic components to just one. Soft switching of the switches makes operation at high switching frequencies possible, allowing high power densities to be reached. Potential problems with parasitic inductances are eliminated, since they are absorbed into the circuit elements. The converter features very good inherent load and cross regulation owing to its small intrinsic output impedances. The multiple-port dc/dc transformer operates at a fixed switching frequency and without input voltage regulation. This thesis provides a thorough theoretical analysis of the operation and design of the topology and of the transformer, modeling them in detail so that the design can be optimized. The experimental results obtained correspond very closely to those provided by the models. The effects of the parasitic elements are critical, affecting different aspects of the converter: output voltage regulation, conduction losses, cross regulation, and so on. Design criteria are also derived for selecting the values of the resonant capacitors to achieve different design goals, such as minimum conduction losses, elimination of cross regulation, or zero-current turn-off switching at full load of all the secondary-side bridges.
Zero-voltage turn-on switching of all the switches is achieved by adjusting the air gap to obtain a finite magnetizing inductance in the transformer. A change in the gate signals is also proposed so that zero-current turn-off operation of all the secondary-side bridges becomes independent of load variation and of the tolerances of the resonant capacitors. The feasibility of the proposed topology is verified through extensive simulation and experimental work. The optimization of the high-frequency transformer design is also addressed, since it is the bulkiest component in the converter. The impact of the dead-time duration and the air-gap size on converter efficiency is analyzed in a design example of a three-port dc/dc transformer rated at hundreds of watts. The final part of this research considers the implementation and performance analysis of a four-port dc/dc transformer for a very low voltage application of tens of watts, without isolation requirements.
Abstract: Recently, switch mode power supplies (SMPS) have been used in a great variety of applications. The most challenging issue for designers of SMPS is to achieve simultaneously high efficiency operation at high power density. The size and weight of a power converter is dominated by the passive components since these elements are normally larger and heavier than other elements in the circuit. If the output power is constant, the stored amount of energy in the converter which is to be delivered to the load in each switching cycle is inversely proportional to the converter's switching frequency. Therefore, increasing the switching frequency is considered a means to achieve more compact solutions at higher power density levels.
The importance of investigating the high switching frequency range comes from all the benefits that can be achieved. Besides the reduction in the size of passive components, increasing the switching frequency can significantly improve the dynamic performance of power converters. Small energy storage and a short switching period lead to a faster transient response of the converter against input voltage and load variations. The most important limitations on pushing up the switching frequency are related to increased conventional magnetic core loss as well as winding loss due to the skin and proximity effects. A further potential problem is increased magnetic parasitics – leakage inductance and capacitance between the windings – that cause additional loss due to unwanted currents. Higher switching loss and the increased influence of printed circuit boards, interconnections, and packaging on circuit behavior are other limiting factors. Resonant power conversion can address these problems by using soft-switching techniques to reduce switching loss and by incorporating the parasitics into the circuit elements. However, the performance gains are significantly reduced by circulating currents when the converter operates outside its nominal operating conditions. As the input voltage or the load changes, the circulating currents become higher compared to those at nominal operating conditions. Many of the potential gains from operating resonant converters at higher switching frequencies can be obtained if they are employed in applications with favorable input voltage conditions, such as those found in distributed power architectures. Load regulation, and particularly input voltage regulation, reduces a converter's power density and efficiency. Due to the relatively constant bus voltage in distributed power architectures, resonant converters are well suited to bus voltage conversion (dc/dc or solid-state transformation).
Unregulated two-port dc/dc transformer products achieving very high power density and efficiency figures, based on the series resonant converter operating just at the resonant frequency and running in the megahertz range, are already available on the market. However, further efficiency improvements of power architectures are expected to come from using two or more separate low-voltage distribution buses instead of a single one. The principal objective of this dissertation is to implement the concept of the series resonant converter operating at its optimum point in a novel bidirectional multiple-port dc/dc transformer that addresses the future needs of power architectures. The new multiple-port dc/dc transformer is based on a series resonant converter topology and reduces the number of magnetic components to only one. Soft-switching commutations make it possible to adopt high switching frequencies and to achieve high power densities. Possible problems regarding stray inductances are eliminated, since they are absorbed into the circuit elements. The converter features very good inherent load and cross regulation due to its small output impedances. The proposed multiple-port dc/dc transformer operates at a fixed switching frequency and without line regulation. An extensive theoretical analysis of the topology and detailed modeling are provided and compared with the experimental results. Relationships showing how the output voltage regulation and conduction losses are affected by the circuit parasitics are derived. Methods to select the resonant capacitor values to achieve different design goals, such as minimum conduction losses, elimination of cross regulation, or ZCS operation at full load of all the secondary-side bridges, are discussed. ZVS turn-on of all the switches is achieved by relying on the finite magnetizing inductance of the transformer.
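The "optimum point" mentioned above is the resonant frequency of the series tank, at which the tank's net reactance vanishes. A minimal sketch of that design point, with hypothetical component values (not taken from the dissertation):

```python
import math

# Illustrative series resonant tank (L_r and C_r are assumed values):
# driven exactly at its resonant frequency, the series L-C tank presents
# zero net reactance, which is the operating point exploited by an
# unregulated dc/dc transformer.

L_r = 2e-6        # resonant inductance, henries (assumed)
C_r = 12.7e-9     # resonant capacitance, farads (assumed)

# Resonant frequency and characteristic impedance of the tank
f_r = 1.0 / (2.0 * math.pi * math.sqrt(L_r * C_r))   # Hz, here ~1 MHz
Z_0 = math.sqrt(L_r / C_r)                           # ohms

# At f_r the inductive and capacitive reactances cancel
w = 2.0 * math.pi * f_r
X_net = w * L_r - 1.0 / (w * C_r)    # ~ 0 up to floating-point error
```

The characteristic impedance Z_0 sets the scale of the circulating tank current for a given port voltage, which is why capacitor selection trades off conduction loss against the other design goals listed above.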
A change of the driving pattern is proposed to achieve ZCS operation of all the secondary-side bridges independent of load variations or resonant capacitor tolerances. The feasibility of the proposed topology is verified through extensive simulation and experimental work. The optimization of the high-frequency transformer design is also addressed in this work, since it is the bulkiest component in the converter. The impact of the dead-time interval and the gap size on the overall converter efficiency is analyzed on the design example of a three-port dc/dc transformer of several hundred watts of output power for high-voltage applications. The final part of this research considers the implementation and performance analysis of a four-port dc/dc transformer in a low-voltage application of tens of watts of output power and without isolation requirements.
Resumo:
The modal analysis of a structural system consists of computing its vibration modes. The experimental way to estimate these modes requires exciting the system with a measured or known input and then measuring the system output at different points using sensors. Finally, the system inputs and outputs are used to compute the modes of vibration. When the system is a large structure such as a building or a bridge, the tests have to be performed in situ, so it is not possible to measure system inputs such as wind or traffic. Even if a known input is applied, the procedure is usually difficult and expensive, and there are still uncontrolled disturbances acting at the time of the test. These facts led to the idea of computing the modes of vibration using only the measured vibrations, regardless of the inputs that originated them, whether ambient vibrations (wind, earthquakes, ...) or operational loads (traffic, human loading, ...). This procedure is usually called Operational Modal Analysis (OMA) and, in general, consists of fitting a mathematical model to the measured data under the assumption that the unobserved excitations are realizations of a stationary stochastic process (usually white noise). The modes of vibration are then computed from the estimated model. The first issue investigated in this thesis is the performance of the Expectation-Maximization (EM) algorithm for the maximum likelihood estimation of the state space model in the field of OMA. The algorithm is described in detail, and it is analysed how to apply it to vibration data. It is then compared to another well-known method, the Stochastic Subspace Identification algorithm. The maximum likelihood estimate enjoys some optimal properties from a statistical point of view, which makes it very attractive in practice, but the most remarkable property of the EM algorithm is that it can be used to address a wide range of situations in OMA.
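The step from an estimated model to modal parameters can be sketched as follows. Once a discrete-time state space model has been fitted, each complex eigenvalue of its state matrix maps to a continuous-time pole, from which a mode's natural frequency and damping ratio follow. The numbers below are synthetic, not thesis data:

```python
import cmath
import math

# Hedged sketch: map a discrete-time eigenvalue lam_d (sampling period dt)
# to modal parameters via the continuous-time pole lam_c = ln(lam_d) / dt.

def modal_parameters(lam_d, dt):
    lam_c = cmath.log(lam_d) / dt          # continuous-time pole
    w_n = abs(lam_c)                       # natural angular frequency, rad/s
    f_n = w_n / (2.0 * math.pi)            # natural frequency, Hz
    zeta = -lam_c.real / w_n               # damping ratio
    return f_n, zeta

# Build a discrete eigenvalue from a known synthetic mode
# (f = 2 Hz, 1 % damping) and check that the mapping recovers it.
dt = 0.01
w_n = 2.0 * math.pi * 2.0
lam_c = complex(-0.01 * w_n, w_n * math.sqrt(1.0 - 0.01 ** 2))
lam_d = cmath.exp(lam_c * dt)
f_n, zeta = modal_parameters(lam_d, dt)
```

The same mapping applies regardless of which estimator (EM or Stochastic Subspace Identification) produced the state matrix.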
In this work, three additional state space models are proposed and estimated using the EM algorithm:
• The first model is proposed to estimate the modes of vibration when several tests are performed on the same structural system. Instead of analysing record by record and then computing averages, the EM algorithm is extended for the joint estimation of the proposed state space model using all the available data.
• The second state space model is used to estimate the modes of vibration when the number of available sensors is lower than the number of points to be tested. In these cases it is usual to perform several tests, changing the position of the sensors from one test to the next (multiple setups of sensors). Here, the proposed state space model and the EM algorithm are used to estimate the modal parameters taking into account the data of all setups.
• Last, a state space model is proposed to estimate the modes of vibration in the presence of unmeasured inputs that cannot be modelled as white noise processes. In these cases, the frequency components of the inputs cannot be separated from the eigenfrequencies of the system, and spurious modes are obtained in the identification process. The idea is to measure the response of the structure under different inputs; it is then assumed that the parameters common to all the data correspond to the structure (modes of vibration), while the parameters found in a specific test correspond to the input in that test. The problem is solved using the proposed state space model and the EM algorithm.
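The EM principle underlying all of these models can be illustrated on a deliberately tiny example. The sketch below is a toy scalar linear-Gaussian state space model, not any of the thesis's multi-record or multi-setup models: the E-step runs a Kalman filter and RTS smoother, and the M-step updates the transition coefficient in closed form from the smoothed sufficient statistics.

```python
import math
import random

# Toy model (illustrative only):  x[t+1] = a*x[t] + w,  y[t] = x[t] + v,
# with w ~ N(0, q) and v ~ N(0, r).  EM estimates 'a'; q and r are known.

def simulate(a, q, r, T, seed=0):
    rng = random.Random(seed)
    x, ys = 0.0, []
    for _ in range(T):
        x = a * x + rng.gauss(0.0, math.sqrt(q))
        ys.append(x + rng.gauss(0.0, math.sqrt(r)))
    return ys

def em_step(a, ys, q, r):
    T = len(ys)
    # E-step, forward pass: Kalman filter
    xf, Pf, xp, Pp = [], [], [], []
    x, P = 0.0, 1.0                      # vague prior on the initial state
    for y in ys:
        x_pred, P_pred = a * x, a * a * P + q
        K = P_pred / (P_pred + r)        # Kalman gain
        x, P = x_pred + K * (y - x_pred), (1.0 - K) * P_pred
        xp.append(x_pred); Pp.append(P_pred); xf.append(x); Pf.append(P)
    # E-step, backward pass: RTS smoother
    xs, Ps, J = [0.0] * T, [0.0] * T, [0.0] * T
    xs[-1], Ps[-1] = xf[-1], Pf[-1]
    for t in range(T - 2, -1, -1):
        J[t] = Pf[t] * a / Pp[t + 1]
        xs[t] = xf[t] + J[t] * (xs[t + 1] - xp[t + 1])
        Ps[t] = Pf[t] + J[t] * J[t] * (Ps[t + 1] - Pp[t + 1])
    # Sufficient statistics; Cov(x[t], x[t-1] | data) = J[t-1] * Ps[t]
    num = sum(xs[t] * xs[t - 1] + J[t - 1] * Ps[t] for t in range(1, T))
    den = sum(xs[t - 1] ** 2 + Ps[t - 1] for t in range(1, T))
    # M-step: closed-form update of the transition coefficient
    return num / den

ys = simulate(a=0.8, q=0.5, r=0.2, T=2000)
a_hat = 0.0
for _ in range(30):
    a_hat = em_step(a_hat, ys, q=0.5, r=0.2)   # converges toward a ~ 0.8
```

The thesis's models replace the scalar state with a vector whose eigenstructure encodes the modes of vibration, and extend the E- and M-steps to handle several records, several sensor setups, or coloured inputs jointly.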