798 results for Penalty Clause
Abstract:
Mexico and the European Union signed a new Political and Economic Association Agreement in December 1997 and ultimately a free-trade agreement in March 2000, aiming to establish a new model of relations with a more dynamic trade and investment component. This article analyzes the 1997 agreement as background to the final accord. Economic and political changes in the 1990s modified both parties' participation in the international political economy, helping to overcome some of the structural obstacles to the relationship. The policy toward Latin America adopted by the EU in 1994 was influential. The negotiation process revealed divergences over the scope of the liberalization process and the so-called democracy clause.
Abstract:
Nonlinear distortion in delay-compensated spans in the intermediate-coupling regime is studied for the first time. Coupling strengths below -30 dB/100 m allow distortion reduction using shorter compensation lengths and higher delays. For higher coupling strengths, no significant penalty results from shorter compensation lengths.
Abstract:
We experimentally demonstrate a 7-dB reduction of the nonlinearity penalty in 40-Gb/s CO-OFDM transmission over 2000 km using support-vector-machine regression-based equalization. Simulations of WDM CO-OFDM show up to 12-dB enhancement in Q-factor compared with linear equalization.
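As a rough illustration of the idea, the sketch below trains support vector regression to undo a toy nonlinear distortion on synthetic 16-QAM symbols; the distortion model, pilot-based training split, and hyperparameters are assumptions for demonstration only, not the experimental setup of the paper.

```python
# Illustrative sketch: SVR-based equalization of nonlinearly distorted symbols.
# Synthetic data and a toy self-phase-modulation-like distortion; assumptions only.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)

# Synthetic 16-QAM symbols distorted by an amplitude-dependent phase rotation plus noise.
levels = np.array([-3.0, -1.0, 1.0, 3.0]) / np.sqrt(10)
tx = rng.choice(levels, size=4000) + 1j * rng.choice(levels, size=4000)
rx = (tx * np.exp(1j * 0.4 * np.abs(tx) ** 2)
      + 0.03 * (rng.standard_normal(tx.size) + 1j * rng.standard_normal(tx.size)))

# Features: real/imag parts of the received symbol; training uses known (pilot) symbols.
X = np.column_stack([rx.real, rx.imag])
n_train = 1000

svr_i = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X[:n_train], tx.real[:n_train])
svr_q = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X[:n_train], tx.imag[:n_train])

eq = svr_i.predict(X[n_train:]) + 1j * svr_q.predict(X[n_train:])
rms_before = np.sqrt(np.mean(np.abs(rx[n_train:] - tx[n_train:]) ** 2))
rms_after = np.sqrt(np.mean(np.abs(eq - tx[n_train:]) ** 2))
print(f"RMS symbol error before: {rms_before:.3f}, after SVR equalization: {rms_after:.3f}")
```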
Abstract:
The goal of modern radiotherapy is to precisely deliver a prescribed radiation dose to delineated target volumes that contain a significant number of tumor cells while sparing the surrounding healthy tissues and organs. Precise delineation of treatment and avoidance volumes is the key to precision radiation therapy. In recent years, considerable clinical and research effort has been devoted to integrating MRI into the radiotherapy workflow, motivated by its superior soft-tissue contrast and functional imaging capability. Dynamic contrast-enhanced MRI (DCE-MRI) is a noninvasive technique that measures properties of the tissue microvasculature, and its sensitivity to radiation-induced vascular pharmacokinetic (PK) changes has been preliminarily demonstrated. Despite its great potential, two major challenges have limited DCE-MRI's clinical application in radiotherapy assessment: the technical limitations of accurate DCE-MRI implementation and the need for novel DCE-MRI data analysis methods that capture richer functional heterogeneity information.
This study aims to improve current DCE-MRI techniques and to develop new DCE-MRI analysis methods for radiotherapy assessment. The study is accordingly divided into two parts. The first part focuses on DCE-MRI temporal resolution as one of the key technical factors and proposes several improvements to it; the second part explores the potential value of image heterogeneity analysis and the combination of multiple PK models for therapeutic response assessment, and develops several novel DCE-MRI data analysis methods.
I. Improvement of DCE-MRI temporal resolution. First, the feasibility of improving DCE-MRI temporal resolution via image undersampling was studied. Specifically, a novel MR image iterative reconstruction algorithm, built on the recently developed compressed sensing (CS) theory, was studied for DCE-MRI reconstruction. By using a limited k-space acquisition with shorter imaging time, images can be reconstructed iteratively under the regularization of a newly proposed total generalized variation (TGV) penalty term. In a retrospective, IRB-approved study of brain radiosurgery patient DCE-MRI scans, the clinically obtained image data served as reference data, and accelerated k-space acquisition was simulated by undersampling the full k-space of the reference images with designed sampling grids. Two undersampling strategies were proposed: 1) a radial multi-ray grid with a tailored angular distribution to sample each slice of the full k-space; and 2) a series of Cartesian random sampling grids with spatiotemporal constraints from adjacent frames to sample the dynamic k-space series at a given slice location. Two sets of PK parameter maps were generated, one from the undersampled data and one from the fully sampled data. Multiple quantitative measurements and statistical tests were performed to evaluate the accuracy of the PK maps generated from the undersampled data against those generated from the fully sampled data. Results showed that, at a simulated acceleration factor of four, PK maps could be faithfully calculated from DCE images reconstructed from undersampled data, with no statistically significant differences in regional PK mean values between the undersampled and fully sampled data sets. DCE-MRI acceleration using the investigated image reconstruction method therefore appears feasible and promising.
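A minimal sketch of the retrospective undersampling step, under stated assumptions: a Cartesian random sampling mask at an acceleration factor of about four is applied to a synthetic reference image's k-space, and a zero-filled reconstruction stands in for the TGV-regularized iterative solver described above, which is not reproduced here.

```python
# Sketch: retrospective Cartesian random undersampling of a reference image's k-space
# at ~4x acceleration, with a zero-filled reconstruction as a placeholder for the
# TGV-regularized iterative CS solver. Synthetic data; illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(1)

def cartesian_random_mask(ny, nx, accel=4, center_fraction=0.08):
    """Keep all central phase-encode lines plus random lines to reach ~1/accel sampling."""
    mask = np.zeros((ny, nx), dtype=bool)
    n_center = max(1, int(center_fraction * ny))
    c0 = ny // 2 - n_center // 2
    mask[c0:c0 + n_center, :] = True                      # fully sampled k-space center
    n_random = max(0, ny // accel - n_center)
    remaining = np.setdiff1d(np.arange(ny), np.arange(c0, c0 + n_center))
    mask[rng.choice(remaining, size=n_random, replace=False), :] = True
    return mask

# Reference "fully sampled" image (placeholder for a clinical DCE frame).
image = rng.random((128, 128))
kspace = np.fft.fftshift(np.fft.fft2(image))

mask = cartesian_random_mask(*kspace.shape, accel=4)
kspace_us = kspace * mask

# Zero-filled reconstruction; a CS solver would instead iterate under the TGV penalty.
recon_zf = np.abs(np.fft.ifft2(np.fft.ifftshift(kspace_us)))
print(f"Sampling fraction: {mask.mean():.2f}")
```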
Second, for high-temporal-resolution DCE-MRI, a new PK model fitting method was developed to solve for PK parameters with better accuracy and efficiency. This method is based on a derivative-based reformulation of the commonly used Tofts PK model, which is conventionally written as an integral expression, and incorporates a Kolmogorov-Zurbenko (KZ) filter to suppress noise in the data so that the PK parameters can be solved as a linear problem in matrix form. In a computer simulation study, PK parameters representing typical intracranial values were selected as references to simulate DCE-MRI data at different temporal resolutions and noise levels. Results showed that at both high temporal resolutions (<1 s) and a clinically feasible temporal resolution (~5 s), the new method calculated PK parameters more accurately than current methods at clinically relevant noise levels; at high temporal resolutions, its computational efficiency exceeded that of current methods by roughly two orders of magnitude. In a retrospective study of clinical brain DCE-MRI scans, the PK maps derived from the proposed method were comparable with the results from current methods. Based on these results, it can be concluded that the new method enables accurate and efficient PK model fitting for high-temporal-resolution DCE-MRI.
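For context, a minimal sketch of solving the Tofts model as a linear least-squares problem is shown below. It uses the standard integral linearization of the model rather than the derivative-based, KZ-filtered formulation described above, and the arterial input function and parameter values are illustrative assumptions.

```python
# Sketch: linear least-squares Tofts fitting of a tissue concentration curve, using the
# standard linearization C_t(t) = Ktrans * int(Cp) - kep * int(Ct). Synthetic data only;
# not the derivative-based, KZ-filtered method described in the abstract.
import numpy as np

def cumtrapz(y, t):
    """Cumulative trapezoidal integral with the same length as y."""
    return np.concatenate(([0.0], np.cumsum(0.5 * (y[1:] + y[:-1]) * np.diff(t))))

def fit_tofts_linear(t, cp, ct):
    """Return (Ktrans [1/min], kep [1/min]) from plasma and tissue concentration curves."""
    A = np.column_stack([cumtrapz(cp, t), -cumtrapz(ct, t)])
    (ktrans, kep), *_ = np.linalg.lstsq(A, ct, rcond=None)
    return ktrans, kep

# Simulated example with assumed values: a toy biexponential arterial input function,
# reference Ktrans = 0.25 /min and kep = 1.0 /min, 1-s temporal resolution over 5 min.
t = np.arange(0, 5, 1 / 60)                        # minutes
cp = 3.0 * (np.exp(-0.5 * t) - np.exp(-8.0 * t))   # toy arterial input function
ktrans_true, kep_true = 0.25, 1.0
dt = t[1] - t[0]
conv = np.convolve(cp, np.exp(-kep_true * t))[: t.size] * dt
ct = ktrans_true * conv + 0.002 * np.random.default_rng(2).standard_normal(t.size)

print(fit_tofts_linear(t, cp, ct))                 # approximately (0.25, 1.0)
```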
II. Development of DCE-MRI analysis methods for therapeutic response assessment. This part pursues methodological developments along two approaches. The first is to develop model-free analysis methods for evaluating DCE-MRI functional heterogeneity, motivated by the rationale that radiotherapy-induced functional change can be heterogeneous across the treated area. The first effort was a translational investigation of classic fractal dimension theory for DCE-MRI therapeutic response assessment. In a small-animal anti-angiogenesis drug therapy experiment, randomly assigned treatment and control groups received multi-fraction treatments with one pre-treatment and multiple post-treatment high-spatiotemporal-resolution DCE-MRI scans. In the post-treatment scan acquired two weeks after the start of treatment, the investigated Rényi dimensions of the classic PK rate-constant map showed significant differences between the treatment and control groups; when the Rényi dimensions were used for treatment/control classification, the achieved accuracy was higher than that obtained using conventional PK parameter statistics. Following this pilot work, two novel texture analysis methods were proposed. First, a new technique called the Gray Level Local Power Matrix (GLLPM) was developed to address the lack of temporal information and the poor computational efficiency of the commonly used Gray Level Co-Occurrence Matrix (GLCOM) techniques. In the same small-animal experiment, the dynamic curves of Haralick texture features derived from the GLLPM performed better overall than the corresponding curves derived from current GLCOM techniques in treatment/control separation and classification. The second method developed is dynamic Fractal Signature Dissimilarity (FSD) analysis. Inspired by classic fractal dimension theory, this method quantitatively measures the dynamics of tumor heterogeneity on DCE images during contrast agent uptake. In the same small-animal experiment, selected parameters from dynamic FSD analysis showed significant treatment/control differences as early as after one treatment fraction, whereas metrics from conventional PK analysis showed significant differences only after three treatment fractions. Treatment/control classification after the first treatment fraction was also better with dynamic FSD parameters than with conventional PK statistics. These results suggest that this novel method is promising for capturing early therapeutic response.
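As one concrete illustration of the fractal-dimension component, the sketch below estimates Rényi (generalized) dimensions of a 2-D parameter map by box counting. It is a generic estimator applied to a synthetic map, not the study's exact heterogeneity pipeline.

```python
# Sketch: box-counting estimate of Renyi (generalized) dimensions D_q for a 2-D
# parameter map (e.g., a Ktrans map). Synthetic data; generic estimator only.
import numpy as np

def renyi_dimension(pmap, q, box_sizes=(2, 4, 8, 16, 32)):
    pmap = np.asarray(pmap, dtype=float)
    pmap = pmap / pmap.sum()                        # normalize the map to a measure
    log_eps, entropy = [], []
    for s in box_sizes:
        ny, nx = (pmap.shape[0] // s) * s, (pmap.shape[1] // s) * s
        boxes = pmap[:ny, :nx].reshape(ny // s, s, nx // s, s).sum(axis=(1, 3))
        p = boxes[boxes > 0].ravel()                # box masses
        hq = np.log(np.sum(p ** q)) / (q - 1) if q != 1 else np.sum(p * np.log(p))
        log_eps.append(np.log(s / pmap.shape[0]))
        entropy.append(hq)
    # D_q is the slope of the generalized entropy versus log(relative box size).
    return np.polyfit(log_eps, entropy, 1)[0]

rng = np.random.default_rng(3)
ktrans_map = rng.gamma(shape=2.0, scale=0.1, size=(128, 128))   # toy Ktrans map
print([round(renyi_dimension(ktrans_map, q), 2) for q in (0, 2)])
```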
The second approach to developing novel DCE-MRI methods is to combine PK information from multiple PK models. The classic Tofts model and its variants have been widely adopted for DCE-MRI analysis as a gold-standard approach to therapeutic response assessment. The shutter-speed (SS) model was previously proposed to incorporate the transcytolemmal water exchange effect into contrast agent concentration quantification, but despite its richer biological assumptions, its application to therapeutic response assessment has been limited. It is therefore appealing to combine information from the SS model and the classic Tofts model to explore potential new biological information for treatment assessment. The feasibility of this idea was investigated in the same small-animal experiment. The SS model was compared against the Tofts model for therapeutic response assessment using regional mean values of the PK parameters. Based on the modeled transcytolemmal water exchange rate, a biological subvolume was proposed and identified automatically using histogram analysis. Within the biological subvolume, the PK rate constant derived from the SS model proved superior to that from the Tofts model in treatment/control separation and classification. Furthermore, a novel biomarker was designed to integrate the PK rate constants from the two models; when evaluated in the biological subvolume, this biomarker reflected significant treatment/control differences at both post-treatment evaluations. These results confirm the potential value of the SS model, and of its combination with the Tofts model, for therapeutic response assessment.
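A minimal sketch of how a histogram-derived subvolume might be identified from a water-exchange-rate map and used to compare rate constants from two models is given below. The thresholding rule, the synthetic maps, and the combined biomarker shown are illustrative assumptions, not the study's definitions.

```python
# Sketch: identify a "biological subvolume" from a water-exchange-rate map via a simple
# histogram threshold, then compare rate constants from two PK models inside it.
# Threshold rule, data, and the combined biomarker are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(4)
kio_map = rng.lognormal(mean=0.0, sigma=0.6, size=(64, 64))    # toy water-exchange rate map
kep_tofts = rng.gamma(2.0, 0.5, size=(64, 64))                 # toy Tofts rate-constant map
kep_ss = kep_tofts * rng.normal(1.1, 0.05, size=(64, 64))      # toy shutter-speed estimate

# Histogram-based threshold: keep voxels above the upper-quartile exchange rate (assumed rule).
threshold = np.percentile(kio_map, 75)
subvolume = kio_map > threshold

# Regional means inside the subvolume, plus a hypothetical combined biomarker.
mean_tofts = kep_tofts[subvolume].mean()
mean_ss = kep_ss[subvolume].mean()
combined = (kep_ss[subvolume] - kep_tofts[subvolume]).mean()   # hypothetical biomarker
print(f"Tofts kep: {mean_tofts:.2f}  SS kep: {mean_ss:.2f}  difference: {combined:.2f}")
```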
In summary, this study addressed two problems in the application of DCE-MRI to radiotherapy assessment. In the first part, a method of accelerating DCE-MRI acquisition for better temporal resolution was investigated, and a novel PK model fitting algorithm was proposed for high-temporal-resolution DCE-MRI. In the second part, two model-free texture analysis methods and a multiple-model analysis method were developed for DCE-MRI therapeutic response assessment. The presented work could benefit future routine clinical application of DCE-MRI in radiotherapy assessment.
Abstract:
Making decisions is fundamental to everything we do, yet it can be impaired in various disorders and conditions. While research into the neural basis of decision-making has flourished in recent years, many questions remain about how decisions are instantiated in the brain. Here we explored how primates make abstract decisions and decisions in social contexts, as well as one way to non-invasively modulate the brain circuits underlying decision-making. We used rhesus macaques as our model organism. First we probed numerical decision-making, a form of abstract decision-making. We demonstrated that monkeys are able to compare discrete ratios, choosing an array with a greater ratio of positive to negative stimuli, even when this array does not have a greater absolute number of positive stimuli. Monkeys’ performance in this task adhered to Weber’s law, indicating that monkeys—like humans—treat proportions as analog magnitudes. Next we showed that monkeys’ ordinal decisions are influenced by spatial associations; when trained to select the fourth stimulus from the bottom in a vertical array, they subsequently selected the fourth stimulus from the left—and not from the right—in a horizontal array. In other words, they begin enumerating from one side of space and not the other, mirroring the human tendency to associate numbers with space. These and other studies confirmed that monkeys’ numerical decision-making follows similar patterns to that of humans, making them a good model for investigations of the neurobiological basis of numerical decision-making.
We sought to develop a system for exploring the neuronal basis of the cognitive and behavioral effects observed following transcranial magnetic stimulation, a relatively new, non-invasive method of brain stimulation that may be used to treat clinical disorders. We completed a set of pilot studies applying offline low-frequency repetitive transcranial magnetic stimulation to the macaque posterior parietal cortex, which has been implicated in numerical processing, while subjects performed a numerical comparison and control color comparison task, and while electrophysiological activity was recorded from the stimulated region of cortex. We found tentative evidence in one paradigm that stimulation did selectively impair performance in the number task, causally implicating the posterior parietal cortex in numerical decisions. In another paradigm, however, we manipulated the subject’s reaching behavior but not her number or color comparison performance. We also found that stimulation produced variable changes in neuronal firing and local field potentials. Together these findings lay the groundwork for detailed investigations into how different parameters of transcranial magnetic stimulation can interact with cortical architecture to produce various cognitive and behavioral changes.
Finally, we explored how monkeys decide how to behave in competitive social interactions. In a zero-sum computer game in which two monkeys played as a shooter or a goalie during a hockey-like “penalty shot” scenario, we found that shooters developed complex movement trajectories so as to conceal their intentions from the goalies. Additionally, we found that neurons in the dorsolateral and dorsomedial prefrontal cortex played a role in generating this “deceptive” behavior. We conclude that these regions of prefrontal cortex form part of a circuit that guides decisions to make an individual less predictable to an opponent.
Abstract:
Carbon Capture and Storage (CCS) technologies provide a means to significantly reduce carbon emissions from the existing fleet of fossil-fired plants and can therefore facilitate a gradual transition from conventional to more sustainable sources of electric power. This is especially relevant for coal plants, whose CO2 emission rate is roughly twice that of natural gas plants. Of the different kinds of CCS technology available, post-combustion amine-based CCS is the best developed and hence the most suitable for retrofitting an existing coal plant. The high cost of operating CCS could be reduced by enabling flexible operation through amine storage or by allowing partial capture of CO2 during periods of high electricity prices. This flexibility is also found to improve the power plant's ramping capability, enabling it to offset the intermittency of renewable power sources. This thesis proposes a solution to problems associated with two promising technologies for decarbonizing the electric power system: the high cost of the CCS energy penalty, and the intermittency and non-dispatchability of wind power. It explores the economic and technical feasibility of a hybrid system consisting of a coal plant retrofitted with a post-combustion amine-based CCS system, equipped with the option to perform partial capture or amine storage, and a co-located wind farm. A techno-economic assessment of the hybrid system's performance is carried out from the perspective of the stakeholders (utility owners, investors, etc.) as well as that of the power system operator.
In order to perform the assessment from the perspective of the facility owners (e.g., electric power utilities, independent power producers), an optimal design and operating strategy of the hybrid system is determined for both the amine storage and partial capture configurations. A linear optimization model is developed to determine the optimal component sizes and capture rates for the hybrid system while meeting constraints on annual average CO2 emission targets and on the variability of the combined power output. Results indicate economic benefits of flexible operation relative to conventional CCS and demonstrate that the hybrid system could operate as an energy storage system, providing an effective pathway for wind power integration as well as a mechanism to mute the variability of intermittent wind power.
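A toy linear program in the spirit of the sizing-and-operation optimization described above is sketched below using scipy.optimize.linprog. The plant parameters, price and wind profiles, and the single emission-intensity constraint are illustrative assumptions rather than the thesis model.

```python
# Toy LP: choose wind capacity W and hourly CO2 capture rates r_t to maximize revenue
# minus wind cost, subject to an average emission-intensity cap. Parameters are
# illustrative assumptions, not the thesis model.
import numpy as np
from scipy.optimize import linprog

T = 24
price = 30 + 20 * np.sin(np.linspace(0, 2 * np.pi, T))                 # $/MWh, toy profile
cf = np.clip(0.3 + 0.3 * np.cos(np.linspace(0, 4 * np.pi, T)), 0, 1)   # wind capacity factor
P_coal = 500.0        # MW gross coal output
alpha = 0.25          # energy penalty: fraction of output lost at full capture
e0 = 0.9              # tCO2/MWh uncaptured emission intensity
cap = 0.37            # tCO2/MWh target (roughly a modern gas plant)
wind_cost_hr = 8.0    # $/MW per hour, amortized wind capital cost
W_max = 800.0

# Decision vector x = [W, r_1, ..., r_T]; linprog minimizes, so negate the profit gradient.
c = np.concatenate(([wind_cost_hr * T - price @ cf], price * P_coal * alpha))

# Emission constraint kept linear: total CO2 <= cap * total generation.
A_ub = np.concatenate(([-cap * cf.sum()],
                       (-e0 + cap * alpha) * P_coal * np.ones(T)))[None, :]
b_ub = np.array([T * P_coal * (cap - e0)])

bounds = [(0, W_max)] + [(0, 0.9)] * T
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
W_opt, r_opt = res.x[0], res.x[1:]
print(f"Wind capacity: {W_opt:.0f} MW, mean capture rate: {r_opt.mean():.2f}")
```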
In order to assess the performance of the hybrid system from the perspective of the system operator, a modified Unit Commitment/Economic Dispatch model is built to represent the techno-economic aspects of operating the hybrid system within a power grid. The hybrid system is found to be effective in helping the power system meet an average CO2 emissions limit equivalent to the emission rate of a state-of-the-art natural gas plant, and in reducing power system operating costs as well as the frequency and magnitude of energy and reserve scarcity events.
Abstract:
Background: It is well documented that children with Specific Language Impairment (SLI) experience significant grammatical deficits. While much of the focus in the past has been on their morphosyntactic difficulties, less is known about their acquisition of complex syntactic structures such as relative clauses. The role of memory in language performance has also become increasingly prominent in the literature. Aims: This study investigates the control of an important complex syntactic structure, the relative clause, by school-age children with SLI in Ireland, using a newly devised sentence recall task. It also explores the role of verbal short-term and working memory in the performance of children with SLI on the sentence recall task, using a standardized battery of tests based on Baddeley's model of working memory. Methods and Procedures: Thirty-two children with SLI, thirty-two age-matched typically developing (AM-TD) children aged between 6 and 7;11 years, and twenty younger typically developing (YTD) children aged between 4;7 and 5 years completed the task. The sentence recall (SR) task included 52 complex sentences and 17 fillers. It included relative clauses that are used in natural discourse and that reflect a developmental hierarchy. The relative clauses were controlled for length and varied in syntactic complexity, representing the full range of syntactic roles. There were seven different relative clause types, attached either to the predicate nominal of a copular clause (Pn) or to the direct object of a transitive clause (Do). Responses were recorded, transcribed, and entered into a database for analysis. The Working Memory Test Battery for Children (WMTB-C; Pickering & Gathercole, 2001) was administered in order to explore the role of short-term and working memory in the children's performance on the SR task. Outcomes and Results: The children with SLI showed significantly greater difficulty than the AM-TD group and the YTD group. With the exception of the genitive subject clauses, the children with SLI scored significantly higher on all sentences containing a Pn main clause than on those containing a transitive main clause. Analysis of error types revealed frequent production of a different type of relative clause than the one presented in the task, with a strong word-order preference in the NVN direction indicated for the children with SLI. SR performance for the children with SLI was most highly correlated with expressive language skills and digit recall. Conclusions and Implications: Children with SLI have significantly greater difficulty with relative clauses than YTD children who are on average two years younger: relative clauses are a delay within a delay. Unlike the YTD children, they show a tendency to simplify relative clauses in the noun-verb-noun (NVN) direction. They show a developmental hierarchy in their production of relative clause constructions and are highly influenced by the frequency distribution of relative clauses in the ambient language.
Abstract:
Cloud computing realizes the long-held dream of converting computing capability into a type of utility. It has the potential to fundamentally change the landscape of the IT industry and our way of life. However, as cloud computing expands substantially in both scale and scope, ensuring its sustainable growth is a critical problem. Service providers have long suffered from high operational costs, especially those associated with the skyrocketing power consumption of large data centers. In the meantime, while efficient power/energy utilization is indispensable for the sustainable growth of cloud computing, service providers must also satisfy users' quality of service (QoS) requirements. This problem becomes even more challenging given the increasingly stringent power/energy and QoS constraints, as well as the highly dynamic, heterogeneous, and distributed nature of the computing infrastructures. In this dissertation, we study the problem of delay-sensitive cloud service scheduling for the sustainable development of cloud computing. We first focus on the development of scheduling methods for delay-sensitive cloud services on a single server with the goal of maximizing a service provider's profit. We then extend our study to scheduling cloud services in distributed environments. In particular, we develop a queue-based model and derive efficient request dispatching and processing decisions in a multi-electricity-market environment to improve profits for service providers. We next study a multi-tier service scheduling problem: by carefully assigning sub-deadlines to the service tiers, our approach can significantly improve resource usage efficiency with statistically guaranteed QoS. Finally, we study the power-conscious resource provisioning problem for service requests with different QoS requirements; by properly sharing computing resources among different requests, our method statistically guarantees all QoS requirements with a minimized number of powered-on servers and thus minimized power consumption. The significance of our research is that it is one part of the integrated effort from both industry and academia to ensure the sustainable growth of cloud computing as it continues to evolve and change our society profoundly.
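A minimal sketch of a profit-aware dispatch rule in the spirit of the multi-electricity-market setting described above is shown below. The data centers, prices, penalty terms, and the simple queueing delay estimate are illustrative assumptions, not the dissertation's queueing model.

```python
# Sketch: greedy profit-aware dispatch of a delay-sensitive request across data centers
# in different electricity markets. All parameters and the simple delay estimate are
# illustrative assumptions, not the dissertation's model.
from dataclasses import dataclass

@dataclass
class DataCenter:
    name: str
    price: float           # $/kWh in its electricity market
    service_rate: float    # requests per second a server completes
    queue_len: int         # requests currently waiting
    energy_per_req: float  # kWh consumed per request

def dispatch(request_revenue, deadline_s, miss_penalty, centers):
    """Send the request where expected profit (revenue - penalty - energy cost) is highest."""
    def expected_profit(dc):
        est_delay = (dc.queue_len + 1) / dc.service_rate
        revenue = request_revenue if est_delay <= deadline_s else request_revenue - miss_penalty
        return revenue - dc.energy_per_req * dc.price
    return max(centers, key=expected_profit)

centers = [
    DataCenter("east", price=0.12, service_rate=50.0, queue_len=40, energy_per_req=0.02),
    DataCenter("west", price=0.08, service_rate=50.0, queue_len=90, energy_per_req=0.02),
]
chosen = dispatch(request_revenue=0.01, deadline_s=1.0, miss_penalty=0.02, centers=centers)
print(f"Dispatch to: {chosen.name}")   # cheaper power is not worth missing the deadline
```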
Abstract:
There is a growing literature which documents the importance of early life environment for outcomes across the life cycle. Research, including studies based on Irish data, demonstrates that those who experience better childhood conditions go on to be wealthier and healthier adults. Therefore, inequalities at birth and in childhood shape inequality in wellbeing in later life, and the historical evolution of the mortality and morbidity of children born in Ireland is important for understanding the current status of the Irish population. In this paper, I describe these patterns by reviewing the existing literature on infant health in Ireland over the course of the 20th century. Up to the 1950s, infant mortality in Ireland (both North and South) was substantially higher than in other developed countries, with a large penalty for those born in urban areas. The subsequent reduction in this penalty, and the sustained decline in infant death rates, occurred later than would be expected from the experience in other contexts. Using records from the Rotunda Lying-in Hospital in Dublin, I discuss sources of disparities in stillbirth in the early 1900s. Despite impressive improvements in death rates since that time, a comparison with those born at the end of the century reveals that Irish children continue to be born unequal. Evidence from studies which track people across the life course, for example research on the returns to birthweight, suggests that the economic cost of this early life inequality is substantial.
Abstract:
Three-dimensional printing (“3DP”) is an additive manufacturing technology that starts with a virtual 3D model of the object to be printed, the so-called Computer-Aided-Design (“CAD”) file. This file, when sent to the printer, gives instructions to the device on how to build the object layer-by-layer. This paper explores whether design protection is available under the current European regulatory framework for designs that are computer-created by means of CAD software, and, if so, under what circumstances. The key point is whether the appearance of a product, embedded in a CAD file, could be regarded as a protectable element under existing legislation. To this end, it begins with an inquiry into the concepts of “design” and “product”, set forth in Article 3 of the Community Design Regulation No. 6/2002 (“CDR”). Then, it considers the EUIPO’s practice of accepting 3D digital representations of designs. The enquiry goes on to illustrate the implications that the making of a CAD file available online might have. It suggests that the act of uploading a CAD file onto a 3D printing platform may be tantamount to a disclosure for the purposes of triggering unregistered design protection, and for appraising the state of the prior art. It also argues that, when measuring the individual character requirement, the notion of “informed user” and “the designer’s degree of freedom” may need to be reconsidered in the future. The following part touches on the exceptions to design protection, with a special focus on the repairs clause set forth in Article 110 CDR. The concluding part explores different measures that may be implemented to prohibit the unauthorised creation and sharing of CAD files embedding design-protected products.
Abstract:
Structural Health Monitoring (SHM) is an emerging area of research concerned with improving the maintainability and safety of aerospace, civil, and mechanical infrastructures by means of monitoring and damage detection. Guided-wave structural testing is an approach for health monitoring of plate-like structures using smart-material piezoelectric transducers. Among the many kinds of transducers, those with a beam-steering capability can perform more accurate surface interrogation. A frequency-steerable acoustic transducer (FSAT) is capable of beam steering by varying the input frequency and can consequently detect and localize damage in structures. Guided-wave inspection is typically performed with phased arrays, which involve a large number of piezoelectric transducers, added complexity, and practical limitations. To overcome the weight penalty, complex circuitry, and maintenance concerns associated with wiring a large number of transducers, new FSATs are proposed that present inherent directional capabilities when generating and sensing elastic waves. The first generation of the spiral FSAT has two main limitations: waves are excited or sensed both in one direction and in the opposite one (180° ambiguity), and only a relatively rough approximation of the desired directivity has been attained. A second generation of the spiral FSAT is proposed to overcome these limitations. Simulation tools become especially important when a new concept is proposed and begins to be developed. The shaped-transducer concept, and especially the second-generation spiral FSAT, is a novel idea in guided-wave-based Structural Health Monitoring systems, so a suitable simulation tool is needed to develop the various design aspects of this innovative transducer. In this work, numerical simulation of the first- and second-generation spiral FSATs has been conducted to prove the directional capability of the guided waves excited in a plate-like structure.
Abstract:
There is a rich history of social science research centering on racial inequalities that continue to be observed across various markets (e.g., labor, housing, and credit markets) and social milieus. Existing research on racial discrimination in consumer markets, however, is relatively scarce, and what has been done has disproportionately focused on consumers as the victims of race-based mistreatment. As such, we know relatively little about how consumers contribute to inequalities in their role as perpetrators of racial discrimination. In response, in this paper we elaborate on a line of research that is only in its infancy and yet is ripe with opportunities to advance the literature on consumer racial discrimination and racial earnings inequities among tip-dependent employees in the United States. Specifically, we analyze data derived from a large exit survey of restaurant consumers (n=378) in an attempt to replicate, extend, and further explore the recently documented effect of service providers' race on restaurant consumers' tipping decisions. Our results indicate that both White and Black restaurant customers discriminate against Black servers by tipping them less than their White coworkers. Importantly, we find no evidence that this Black tip penalty is the result of interracial differences in the service skills possessed by Black and White servers. We conclude by delineating directions for future research in this neglected but salient area of study.
Abstract:
When a court imposes a fine or forfeiture for a violation of state law, or city or county ordinance, except an ordinance regulating the parking of motor vehicles, the court or the clerk of the district court shall assess an additional penalty in the form of a criminal penalty surcharge equal to thirty-five percent of the fine or forfeiture imposed.
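A one-line illustration of the surcharge arithmetic described above; the 35% rate comes from the text, while the example fine amount is arbitrary.

```python
# Illustration of the 35% criminal penalty surcharge on a fine; the fine amount is arbitrary.
fine = 200.00
surcharge = 0.35 * fine
print(f"Fine: ${fine:.2f}, surcharge: ${surcharge:.2f}, total due: ${fine + surcharge:.2f}")
```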
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08