887 results for very strict Hurwitz
Abstract:
OBJECTIVE In patients with a long life expectancy and high-risk (HR) prostate cancer (PCa), the chance of dying from PCa is not negligible and may change significantly with the time elapsed since surgery. The aim of this study was to evaluate long-term survival patterns in young patients treated with radical prostatectomy (RP) for HRPCa. MATERIALS AND METHODS Within a multi-institutional cohort, 600 young patients (≤59 years) treated with RP between 1987 and 2012 for HRPCa (defined as at least one of the following adverse characteristics: prostate-specific antigen >20, cT3 or higher, biopsy Gleason sum 8-10) were identified. Smoothed cumulative incidence plots were used to assess cancer-specific mortality (CSM) and other-cause mortality (OCM) rates at 10, 15, and 20 years after RP. The same analyses were performed to assess the 5-year probability of CSM and OCM in patients who survived 5, 10, and 15 years after RP. A multivariable competing-risks regression model was fitted to identify predictors of CSM and OCM. RESULTS The 10-, 15-, and 20-year CSM and OCM rates were 11.6% and 5.5% vs. 15.5% and 13.5% vs. 18.4% and 19.3%, respectively. The 5-year probabilities of CSM and OCM among patients who survived 5, 10, and 15 years after RP were 6.4% and 2.7% vs. 4.6% and 9.6% vs. 4.2% and 8.2%, respectively. Year of surgery, pathological stage and Gleason score, surgical margin status, and lymph node invasion were the major determinants of CSM (all P≤0.03). Conversely, none of the covariates was significantly associated with OCM (all P≥0.09). CONCLUSIONS Very long-term cancer control in young high-risk patients after RP is highly satisfactory. PCa is the leading cause of death during the first 10 years of survivorship after RP; thereafter, mortality unrelated to PCa becomes the main cause of death.
Consequently, surgery should be considered among young patients with high-risk disease, and strict PCa follow-up should be enforced during the first 10 years of survivorship after RP.
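The rates above come from a competing-risks framework in which CSM and OCM are competing events. As an illustration only (the study itself used smoothed estimates and multivariable competing-risks regression, not this code), a minimal nonparametric Aalen-Johansen-style estimator of cause-specific cumulative incidence can be sketched as follows; the function name and data layout are assumptions:

```python
def cumulative_incidence(times, events, cause, horizon):
    """Nonparametric cumulative incidence of one cause in the presence of
    competing risks (Aalen-Johansen-style estimate).
    times: follow-up times; events: 0 = censored, 1/2/... = cause of failure."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    surv = 1.0   # overall event-free survival just before the current time
    cif = 0.0
    i = 0
    while i < len(data) and data[i][0] <= horizon:
        t = data[i][0]
        # gather all subjects with this event time
        d_cause = d_any = censored = 0
        while i < len(data) and data[i][0] == t:
            if data[i][1] == cause:
                d_cause += 1
            if data[i][1] != 0:
                d_any += 1
            else:
                censored += 1
            i += 1
        cif += surv * d_cause / n_at_risk   # increment for this cause
        surv *= 1 - d_any / n_at_risk       # update overall survival
        n_at_risk -= d_any + censored
    return cif
```

Calling this once with the CSM event code and once with the OCM code, at horizons of 10, 15, and 20 years, yields the kind of paired rates reported in the abstract.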
A methodology to analyze, design and implement very fast and robust controls of Buck-type converters
Abstract:
Modern digital electronics presents a challenge to designers of power systems. The increasingly high performance of microprocessors, FPGAs (Field Programmable Gate Arrays) and ASICs (Application-Specific Integrated Circuits) requires power supplies that comply with very demanding static and dynamic requirements. Specifically, these power supplies are low-voltage/high-current DC-DC converters that need to be designed to exhibit low voltage ripple and low output-voltage deviation under high slew-rate load transients. Additionally, depending on the application, other requirements need to be met, such as providing the load with "Dynamic Voltage Scaling" (DVS), where the converter needs to change its output voltage as fast as possible without overshoot, or "Adaptive Voltage Positioning" (AVP), where the output voltage is slightly reduced the greater the output power.
Of course, from the point of view of industry, the figures of merit of these converters are cost, efficiency and size/weight. Ideally, industry needs a converter that is cheaper, more efficient, smaller and that can still meet the dynamic requirements of the application. In this context, several approaches to improve the figures of merit of these power supplies are followed in industry and academia, such as improving the topology of the converter, improving the semiconductor technology and improving the control. Indeed, the control is a fundamental part of these applications, as a very fast control makes it easier for the topology to comply with the strict dynamic requirements and, consequently, gives the designer a larger margin of freedom to improve the cost, efficiency and/or size of the power supply. In this thesis, how to design and implement very fast controls for the Buck converter is investigated. This thesis proves that sensing the output voltage is all that is needed to achieve an almost time-optimal response, and a unified design guideline for controls that only sense the output voltage is proposed. Then, in order to ensure robustness in very fast controls, a very accurate modeling and stability analysis of DC-DC converters is proposed that takes into account sensing networks and critical parasitic elements. Also, using this modeling approach, an optimization algorithm that takes into account component tolerances and distorted measurements is proposed. With the use of this algorithm, state-of-the-art very fast analog controls are compared and their capability to achieve a fast dynamic response is ranked depending on the output capacitor used. Additionally, a technique to improve the dynamic response of controllers is also proposed. All the proposals are corroborated by extensive simulations and experimental prototypes.
Overall, this thesis serves as a methodology for engineers to design and implement fast and robust controls for Buck-type converters.
Abstract:
Executive function (EF) emerges in infancy and continues to develop throughout childhood. Executive dysfunction is believed to contribute to learning and attention problems in children at school age. Children born very preterm are more prone to these problems than their full-term peers.
Abstract:
In this paper we describe the development of a three-dimensional (3D) imaging system for a 3500 tonne mining machine (dragline). Draglines are large walking cranes used for removing the dirt that covers a coal seam. Our group has been developing a dragline swing automation system since 1994. The system so far has been 'blind' to its external environment. The work presented in this paper attempts to give the dragline an ability to sense its surroundings. A 3D digital terrain map (DTM) is created from data obtained from a two-dimensional laser scanner while the dragline swings. Experimental data from an operational dragline are presented.
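A sketch of how such a DTM can be assembled is given below. This is not the paper's implementation: the scanner geometry, mounting height and frame conventions are assumed for illustration. Each 2D scan (range and in-plane angle) is rotated by the current swing angle into a dragline-centred 3D frame, and the points are then gridded into a 2.5D elevation map:

```python
import math

def scan_to_points(ranges, scan_angles, swing_angle, scanner_height):
    """Convert one 2D scan line (range, in-plane angle in radians) taken at a
    given swing angle into 3D points in a dragline-centred frame.
    The geometry here is illustrative, not the paper's actual calibration."""
    points = []
    for r, a in zip(ranges, scan_angles):
        # within the vertical scan plane: horizontal reach and height
        horiz = r * math.cos(a)
        z = scanner_height - r * math.sin(a)
        # rotate the scan plane about the vertical axis by the swing angle
        x = horiz * math.cos(swing_angle)
        y = horiz * math.sin(swing_angle)
        points.append((x, y, z))
    return points

def build_dtm(points, cell=1.0):
    """Grid 3D points into a 2.5D terrain map, keeping the highest z seen
    in each cell (a conservative choice for obstacle clearance)."""
    dtm = {}
    for x, y, z in points:
        key = (int(x // cell), int(y // cell))
        if key not in dtm or z > dtm[key]:
            dtm[key] = z
    return dtm
```

As the machine swings, successive calls to `scan_to_points` with the updated swing angle feed the same `build_dtm` grid, so the map fills in over the swept sector.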
Abstract:
This thesis presents an original approach to parametric speech coding at rates below 1 kbit/sec, primarily for speech storage applications. Essential processes considered in this research encompass efficient characterization of the evolutionary configuration of the vocal tract to follow phonemic features with high fidelity, representation of speech excitation using minimal parameters with minor degradation in the naturalness of synthesized speech, and finally, quantization of the resulting parameters at the nominated rates. For encoding speech spectral features, a new method relying on Temporal Decomposition (TD) is developed which efficiently compresses spectral information through interpolation between the most steady points over time trajectories of spectral parameters using a new basis function. The compression ratio provided by the method is independent of the updating rate of the feature vectors, and hence allows high resolution in tracking significant temporal variations of speech formants with no effect on the spectral data rate. Accordingly, regardless of the quantization technique employed, the method yields a high compression ratio without sacrificing speech intelligibility. Several new techniques for improving the performance of the interpolation of spectral parameters through phonetically-based analysis are proposed and implemented in this research, comprising event-approximated TD, near-optimal shaping of event-approximating functions, efficient speech parametrization for TD on the basis of an extensive investigation originally reported in this thesis, and a hierarchical error minimization algorithm for decomposition of feature parameters which significantly reduces the complexity of the interpolation process.
Speech excitation in this work is characterized based on a novel Multi-Band Excitation paradigm which accurately determines the harmonic structure in the LPC (linear predictive coding) residual spectra, within individual bands, using the concept of Instantaneous Frequency (IF) estimation in the frequency domain. The model yields an effective two-band approximation to excitation and computes pitch and voicing with high accuracy as well. New methods for interpolative coding of pitch and gain contours are also developed in this thesis. For pitch, relying on the correlation between phonetic evolution and pitch variations during voiced speech segments, TD is employed to interpolate the pitch contour between critical points introduced by event centroids. This compresses the pitch contour by a ratio of about 1/10 with negligible error. To approximate the gain contour, a set of uniformly-distributed Gaussian event-like functions is used which reduces the amount of gain information to about 1/6 with acceptable accuracy. The thesis also addresses a new quantization method applied to spectral features on the basis of the statistical properties and spectral sensitivity of the spectral parameters extracted from TD-based analysis. The experimental results show that good quality speech, comparable to that of conventional coders at rates over 2 kbits/sec, can be achieved at rates of 650-990 bits/sec.
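The gain-contour approximation can be illustrated with a small sketch. The fitting rule below (sampling the contour at the event centres and reconstructing with a normalized Gaussian basis) is an assumption for illustration, not the thesis's actual algorithm; the point is that only the handful of event amplitudes need be stored instead of every frame's gain, which is where a reduction on the order of 1/6 comes from:

```python
import math

def gaussian_event_approx(contour, num_events, width):
    """Approximate a gain contour with uniformly spaced Gaussian 'event'
    functions; only the num_events amplitudes are stored or transmitted.
    Illustrative scheme only; the thesis's fitting procedure may differ."""
    n = len(contour)
    # event centres uniformly distributed over the contour's time axis
    centers = [i * (n - 1) / (num_events - 1) for i in range(num_events)]
    # simple amplitude choice: the contour value at each event centre
    amps = [contour[round(c)] for c in centers]

    def reconstruct(t):
        # normalized weighted sum of the Gaussian event functions
        num = sum(a * math.exp(-((t - c) / width) ** 2)
                  for a, c in zip(amps, centers))
        den = sum(math.exp(-((t - c) / width) ** 2) for c in centers)
        return num / den

    return amps, [reconstruct(t) for t in range(n)]
```

With, say, 20 gain frames reduced to 5 amplitudes, the storage drops by a factor of 4; choosing the event spacing and width trades compression against reconstruction accuracy.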
Abstract:
Nationally, there is much legislation regulating land sale transactions, particularly in relation to seller disclosure of information. The statutes require strict compliance by a seller, failing which, in general, a buyer can terminate the contract. In a number of instances, when buyers have sought to exercise these rights, sellers have alleged that the buyers have, either expressly or by conduct, waived their rights to rely upon these statutes. This article examines the nature of these rights in this context, whether they are capable of waiver and, if so, what words or conduct might be sufficient to amount to waiver. The analysis finds that the law is in a very unsatisfactory state and that those rules that can be identified as relevant are unevenly applied, and concludes that sellers have, in the main, been unsuccessful in defeating buyers' statutory rights as a result of an alleged waiver by those buyers.
Abstract:
Foreword: In this paper I call upon a praxiological approach. Praxeology (an early alteration of praxiology) is the study of human action and conduct. The name praxeology/praxiology takes its root in praxis, Medieval Latin, from the Greek for doing, action, from prassein, to do, to practice (Merriam-Webster Dictionary). Having been involved in project management education, research and practice for the last twenty years, I have constantly tried to improve and to provide a better understanding/knowledge of the field and related practice, and as a consequence to widen and deepen the competencies of the people I was working with (and my own competencies as well!), assuming that better project management leads to a more efficient and effective use of resources, to the development of people and, in the end, to a better world. For some time I have perceived a need to clarify the foundations of the discipline of project management, or at least to elucidate what these foundations could be. An immodest task, one might say! But not a neutral one! I am constantly surprised by the way the world (i.e., organizations, universities, students and professional bodies) sees project management: as a set of methods, techniques and tools, interacting with other fields – general management, engineering, construction, information systems, etc. – bringing some effective ways of dealing with various sets of problems, from launching a new satellite to product development through to organizational change.
Abstract:
A practical method for the design of dual-band decoupling and matching networks (DMN) for two closely spaced antennas using discrete components is presented. The DMN reduces the port-to-port coupling and enhances the diversity of the antennas. By applying the DMN, the radiation efficiency can also be improved when one port is fed and the other port is match terminated. The proposed DMN works at two frequencies simultaneously without the need for any switch. As a proof of concept, a dual-band DMN for a pair of monopoles spaced 0.05λ apart is designed. The measured return loss and port isolation exceed 10 dB from 1.71 GHz to 1.76 GHz and from 2.27 GHz to 2.32 GHz.
Abstract:
Non-state insurgent actors are too weak to compel powerful adversaries to their will, so they use violence to coerce. A principal objective is to grow and sustain violent resistance to the point that it either militarily challenges the state or, more commonly, generates unacceptable political costs. To survive, insurgents must shift popular support away from the state, and to grow they must secure it. State-actor policies and actions perceived as illegitimate and oppressive by the insurgent constituency can generate these shifts. A promising insurgent strategy is to attack states in ways that lead angry publics and leaders to discount the historically established risks and take flawed but popular decisions to use repressive measures. Such decisions may be enabled by a visceral belief in the power of coercion and the selective use of examples where robust measures have indeed suppressed resistance. To avoid such counterproductive behaviours, the cases of apparent 'successful repression' must be understood. This thesis tests whether robust state action is correlated with reduced support for insurgents, analyses the causal mechanisms of such shifts, and examines whether any such reduction is due to compulsion or coercion. The approach is founded on prior research by the RAND Corporation, which analysed the 30 most recently resolved insurgencies worldwide to determine factors of counterinsurgent success. This new study first re-analyses their data at a finer resolution with new queries that investigate the relationship between repression and active insurgent support. Having determined that, in general, repression does not correlate with decreased insurgent support, this study then analyses two cases in which the data suggest repression is likely to be reducing insurgent support: the PKK in Turkey and the insurgency against the Vietnamese-sponsored regime after its ousting of the Khmer Rouge.
It applies 'structured-focused' case analysis with questions partly built from the insurgency model of Leites and Wolf, who are associated with the advocacy of US robust means in Vietnam. This is thus a test of 'most difficult' cases using a 'least likely' test model. Nevertheless, the findings refute the deterrence argument of 'iron fist' advocates. Robust approaches may physically prevent effective support of insurgents but they do not coercively deter people from being willing to actively support the insurgency.
Abstract:
The purpose of this study was to investigate the effect of very small air gaps (less than 1 mm) on the dosimetry of small photon fields used for stereotactic treatments. Measurements were performed with optically stimulated luminescent dosimeters (OSLDs) for 6 MV photons on a Varian 21iX linear accelerator with a Brainlab μMLC attachment, for square field sizes down to 6 mm × 6 mm. Monte Carlo simulations were performed using the EGSnrc C++ user code cavity. It was found that the Monte Carlo model used in this study accurately simulated the OSLD measurements on the linear accelerator. For the 6 mm field size, a 0.5 mm air gap upstream of the active area of the OSLD caused a 5.3% dose reduction relative to a Monte Carlo simulation with no air gap. A hypothetical 0.2 mm air gap caused a dose reduction of more than 2%, emphasizing that even the tiniest air gaps can cause a large reduction in measured dose. The negligible effect at an 18 mm field size illustrated that the electronic disequilibrium caused by such small air gaps only affects the dosimetry of very small fields. When performing small field dosimetry, care must be taken to avoid any air gaps, as they can often be present when inserting detectors into solid phantoms. It is therefore recommended that very small field dosimetry be performed in liquid water. When using small photon fields, sub-millimetre air gaps can also affect patient dosimetry if they cannot be spatially resolved on a CT scan. However, the effect on the patient is debatable, as the dose reduction caused by a 1 mm air gap, starting at 19% in the first 0.1 mm behind the air gap, decreases to less than 5% after just 2 mm, and electronic equilibrium is fully re-established after just 5 mm.
Abstract:
Client owners usually need an estimate or forecast of their likely building costs in advance of detailed design in order to confirm the financial feasibility of their projects. Because of their timing in the project life cycle, these early-stage forecasts are characterized by the minimal amount of information available concerning the new (target) project, to the point that often only its size and type are known. One approach is to use the mean contract sum of a sample, or base group, of previous projects of a similar type and size to the project for which the estimate is needed. Bernoulli's law of large numbers implies that this base group should be as large as possible. However, increasing the size of the base group inevitably involves including projects that are less and less similar to the target project. Deciding on the optimal number of base group projects is known as the homogeneity, or pooling, problem. A method of solving the homogeneity problem is described, involving the use of closed-form equations to compare three different sampling arrangements of previous projects for their simulated forecasting ability by a cross-validation method, in which a series of targets is extracted, with replacement, from the groups and compared with the mean value of the projects in the base groups. The procedure is then demonstrated with 450 Hong Kong projects (of different types: residential, commercial centre, car parking, social community centre, school, office, hotel, industrial, university and hospital) clustered into base groups according to their type and size.
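The homogeneity (pooling) trade-off lends itself to a short sketch. The selection rule below, which ranks previous projects by similarity in size to the target and chooses the base-group size with the lowest leave-one-out forecast error, is an illustrative simplification rather than the paper's closed-form method; the function names and data layout are hypothetical:

```python
def loo_error(costs):
    """Mean absolute leave-one-out error when each project's cost is
    forecast by the mean of the remaining projects in the base group."""
    n = len(costs)
    total = sum(costs)
    errs = [abs(c - (total - c) / (n - 1)) for c in costs]
    return sum(errs) / n

def best_group_size(projects, target_size, max_k=None):
    """projects: list of (size, cost) pairs of one project type.
    Grow the base group in order of similarity in size to the target and
    keep the group size with the lowest cross-validated forecast error.
    Illustrative selection rule, not the paper's exact procedure."""
    ranked = sorted(projects, key=lambda p: abs(p[0] - target_size))
    if max_k is None:
        max_k = len(ranked)
    best = None
    for k in range(2, max_k + 1):
        err = loo_error([cost for _, cost in ranked[:k]])
        if best is None or err < best[1]:
            best = (k, err)
    return best  # (optimal number of base-group projects, its LOO error)
```

The tension the paper describes is visible here: adding more projects stabilises the mean, but once dissimilar projects enter the ranking, the cross-validated error starts to rise, which is what caps the optimal group size.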
Abstract:
This article suggests that the issue of proportionality in anti-doping sanctions has been inconsistently dealt with by the Court of Arbitration for Sport (CAS). Given CAS’s pre-eminent role in interpreting and applying the World Anti-Doping Code under the anti-doping policies of its signatories, an inconsistent approach to the application of the proportionality principle will cause difficulties for domestic anti-doping tribunals seeking guidance as to the appropriateness of their doping sanctions.