902 results for two stage quantile regression


Relevance:

100.00%

Publisher:

Abstract:

We report a two-stage diode-pumped Er-doped fiber amplifier operating at a wavelength of 1550 nm at repetition rates of 10-100 kHz with an average output power of up to 10 W. The first stage, comprising Er-doped fiber, was core-pumped at a wavelength of 1480 nm, whereas the second stage, comprising double-clad Er/Yb-doped fiber, was clad-pumped at a wavelength of 975 nm. The estimated peak power for the 0.4-nm full-width at half-maximum laser emission at 1550 nm exceeded the 4-kW level. The initial 100-ns seed diode laser pulse was compressed to 3.5 ns as a result of the 34-dB total amplification. The observed 30-fold efficient pulse compression reveals a promising new nonlinear optical technique for the generation of high-power short pulses for applications in eye-safe ranging and micromachining.
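As a rough cross-check of figures like these, the standard estimate is peak power ≈ pulse energy / pulse duration, with pulse energy = average power / repetition rate. A minimal sketch under an assumed operating point (this idealized estimate is an upper bound and lands well above the reported 4 kW, consistent with only part of the average power sitting in the compressed pulse):

```python
# Standard peak-power estimate for a pulsed source, assuming a roughly
# square pulse: E_pulse = P_avg / f_rep and P_peak ~ E_pulse / tau.
# Operating point chosen for illustration from the ranges in the abstract.
p_avg = 10.0     # average output power, W
f_rep = 100e3    # repetition rate, Hz (upper end of the 10-100 kHz range)
tau = 3.5e-9     # compressed pulse duration, s

e_pulse = p_avg / f_rep      # pulse energy, J
p_peak = e_pulse / tau       # idealized peak power, W
print(f"pulse energy = {e_pulse * 1e6:.0f} uJ, idealized peak power = {p_peak / 1e3:.1f} kW")
```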

Relevance:

100.00%

Publisher:

Abstract:

What have we learnt from the 2006-2012 crisis, including events such as the subprime crisis, the bankruptcy of Lehman Brothers and the European sovereign debt crisis, among others? It is usually assumed that for firms with a quoted CDS, the CDS is the key factor in establishing the credit risk premium for a new financial asset. The CDS is thus a key element for any investor exploiting relative-value opportunities across a firm's capital structure. In the first chapter we study the most relevant aspects of the microstructure of the CDS market in terms of pricing, to obtain a clear idea of how this market works. Such an analysis establishes a solid base for the empirical studies carried out in the remaining chapters. In its document "Basel III: A global regulatory framework for more resilient banks and banking systems", the Basel Committee sets the requirement of a capital charge for credit valuation adjustment (CVA) risk in the trading book, together with the methodology for computing the capital requirement. This regulatory requirement has added extra pressure for in-depth knowledge of the CDS market, and this motivates the analysis performed in this thesis. The problem arises when estimating the credit risk premium for counterparties without a directly quoted CDS in the market: how can we estimate the credit spread for an issuer without a CDS? In addition, given the high volatility of the credit market in recent years and, in particular, after the default of Lehman Brothers on 15 September 2008, we observe large outliers in the distribution of credit spreads across the different combinations of rating, industry and region. After an exhaustive analysis of the results from the different models studied, we reach the following conclusions. Hierarchical regression models fit the data much better than non-hierarchical ones. Furthermore, we generally prefer the median model (50%-quantile regression) to the mean model (standard OLS regression) because of its robustness when assigning a price to a new credit asset without a quoted spread, minimizing the "inversion problem". Finally, an additional fundamental reason to prefer the median model is the typically right-skewed distribution of CDS spreads...
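A minimal sketch of the median-versus-mean comparison that motivates this preference, using statsmodels on synthetic right-skewed "spreads" (the variable names and data are illustrative, not the thesis's dataset):

```python
# OLS (mean) vs. 50%-quantile (median) regression on right-skewed data:
# the skewed noise pulls the OLS fit upward, while the median fit stays
# robust, mirroring the argument for the median model with CDS spreads.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 500
score = rng.uniform(1, 10, n)                         # illustrative rating score
spread = 20 + 15 * score + rng.lognormal(3, 1, n)     # right-skewed errors

X = sm.add_constant(score)
ols = sm.OLS(spread, X).fit()
median = sm.QuantReg(spread, X).fit(q=0.5)
print("OLS intercept/slope:   ", ols.params)
print("median intercept/slope:", median.params)
```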

Relevance:

100.00%

Publisher:

Abstract:

Thesis (Master, Chemical Engineering) -- Queen's University, 2016.

Relevance:

100.00%

Publisher:

Abstract:

Process systems design, operation and synthesis problems under uncertainty can readily be formulated as two-stage stochastic mixed-integer linear and nonlinear (nonconvex) programming (MILP and MINLP) problems. These problems, with a scenario-based formulation, lead to large-scale MILPs/MINLPs that are well structured. The first part of the thesis proposes a new finitely convergent cross decomposition method (CD), where Benders decomposition (BD) and Dantzig-Wolfe decomposition (DWD) are combined in a unified framework to improve the solution of scenario-based two-stage stochastic MILPs. This method alternates between DWD iterations and BD iterations, where DWD restricted master problems and BD primal problems yield a sequence of upper bounds, and BD relaxed master problems yield a sequence of lower bounds. A variant of CD, which includes multiple columns per iteration of the DW restricted master problem and multiple cuts per iteration of the BD relaxed master problem, called multicolumn-multicut CD, is then developed to improve solution time. Finally, an extended cross decomposition method (ECD) for solving two-stage stochastic programs with risk constraints is proposed. In this approach, a CD approach at the first level and DWD at the second level are used to solve the original problem to optimality. ECD has a computational advantage over a bilevel decomposition strategy or solving the monolithic problem with an MILP solver. The second part of the thesis develops a joint decomposition approach combining Lagrangian decomposition (LD) and generalized Benders decomposition (GBD) to efficiently solve stochastic mixed-integer nonlinear nonconvex programming problems to global optimality, without the need for explicit branch-and-bound search. In this approach, LD subproblems and GBD subproblems are systematically solved in a single framework. The relaxed master problem, obtained from the reformulation of the original problem, is solved only when necessary. A convexification of the relaxed master problem and a domain reduction procedure are integrated into the decomposition framework to improve solution efficiency. Using case studies taken from renewable-resource and fossil-fuel based applications in process systems engineering, it is shown that these novel decomposition approaches have significant benefits over classical decomposition methods and state-of-the-art MILP/MINLP global optimization solvers.
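As background for the methods built on top of it, here is a minimal sketch of the classical single-cut Benders (L-shaped) loop for a toy two-stage stochastic program. This is not the thesis's cross decomposition: the toy problem is continuous and its second-stage dual is available in closed form, but the alternation of upper bounds (from feasible first-stage points) and lower bounds (from the relaxed master) is the same mechanism the CD and ECD methods coordinate.

```python
# L-shaped (Benders) sketch for a toy two-stage stochastic LP:
#   min  c*x + E_s[ q * max(d_s - x, 0) ]   (newsvendor-style recourse)
# Each iteration evaluates the second stage at the current x (closed-form
# dual lam_s here), adds the cut theta >= sum_s p_s*lam_s*(d_s - x),
# and re-solves the master  min c*x + theta  over the accumulated cuts.
import numpy as np
from scipy.optimize import linprog

c, q = 1.0, 3.0                       # first-stage and recourse unit costs
d = np.array([2.0, 5.0, 8.0])         # scenario demands
p = np.array([0.3, 0.4, 0.3])         # scenario probabilities
cuts = []                             # (slope, intercept): theta >= slope*x + intercept
x, ub = 0.0, np.inf

for it in range(50):
    lam = np.where(d > x, q, 0.0)                     # second-stage duals at x
    ub = min(ub, c * x + float(p @ (lam * (d - x)))) # feasible point -> upper bound
    cuts.append((-float(p @ lam), float(p @ (lam * d))))

    # Master over [x, theta]: each cut becomes slope*x - theta <= -intercept
    A = [[s, -1.0] for s, _ in cuts]
    b = [-i for _, i in cuts]
    res = linprog([c, 1.0], A_ub=A, b_ub=b, bounds=[(0, None), (0, None)])
    x, lb = res.x[0], res.fun                         # relaxation -> lower bound
    if ub - lb < 1e-8:
        break

print(f"x* = {x:.3f}, cost = {ub:.3f}, iterations = {it + 1}")
```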

Relevance:

100.00%

Publisher:

Abstract:

This paper investigates how textbook design may influence students' visual attention to graphics, photos and text in current geography textbooks. Eye tracking, a visual method of data collection and analysis, was used to precisely monitor students' eye movements while they observed geography textbook spreads. In an exploratory study using random sampling, the eye movements of 20 students (secondary school students 15-17 years of age and university students 20-24 years of age) were recorded. The study materials were double-page spreads covering an identical topic, taken from five current German geography textbooks. A two-stage test was developed: each participant was first given the task of looking at the entire textbook spread to determine what was being explained on the pages; in the second stage, participants solved one of the tasks from the exercise section. Overall, each participant studied five different textbook spreads and completed five set tasks. After the eye tracking study, each participant completed a questionnaire. The results suggest that textbook design is one crucial factor in successful knowledge acquisition from textbooks. Based on the eye tracking documentation, learning-related challenges posed by images and complex image-text structures in textbooks are elucidated and related to insights from educational psychology, visual communication and textbook analysis.

Relevance:

100.00%

Publisher:

Abstract:

Single-stage and two-stage sodium sulfite cooking were carried out on spruce, pine or pure pine heartwood chips to investigate the influence of several process parameters on the initial phase of such a cook, down to about 60% pulp yield. The cooking experiments were carried out in the laboratory with either a lab-prepared or a mill-prepared cooking acid, and the temperature and time were varied. The influences of dissolved organic and inorganic components in the cooking liquor on the final pulp composition and on the extent of side reactions were investigated. Kinetic equations were developed and the activation energies for delignification and carbohydrate dissolution were calculated using the Arrhenius equation. A better understanding of the delignification mechanisms during bisulfite and acid sulfite cooking was obtained by analyzing the lignin-carbohydrate complexes (LCC) present in the pulp under different cooking conditions. It was found that a mill-prepared cooking acid had beneficial effects with respect to side reactions, extractives removal and pH stability during the cook compared to a lab-prepared cooking acid. However, no significant difference in the degree of delignification or carbohydrate degradation was seen. The cellulose yield was not affected in the initial phase of the cook; however, temperature had an influence on the rates of both delignification and hemicellulose removal. It was also found that the corresponding activation energies increased in the order xylan, glucomannan, lignin and cellulose. The cooking temperature could thus be used to steer the cook towards a given carbohydrate composition in the final pulp. Lignin condensation reactions were observed during acid sulfite cooking, especially at higher temperatures. The LCC studies indicated the existence of covalent bonds between lignin and the hemicellulose components xylan and glucomannan. LCC in native wood showed the presence of phenyl glycosides, γ-esters and α-ethers, of which the α-ethers were affected during sulfite pulping. The existence of covalent bonds between lignin and wood polysaccharides might be the rate-limiting factor in sulfite pulping.
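As an illustration of the Arrhenius step, activation energies follow from a linear fit of ln k against 1/T, since ln k = ln A - Ea/(R*T). A minimal sketch with hypothetical rate constants (the thesis's measured values are not reproduced here):

```python
# Estimating an activation energy from the Arrhenius relation
# ln k = ln A - Ea/(R*T): regress ln k on 1/T and read Ea off the slope.
# The rate constants below are hypothetical, for illustration only.
import numpy as np

R = 8.314                                    # gas constant, J/(mol*K)
T = np.array([413.0, 423.0, 433.0, 443.0])   # cooking temperatures, K
k = np.array([0.011, 0.021, 0.038, 0.066])   # hypothetical rate constants, 1/min

slope, intercept = np.polyfit(1.0 / T, np.log(k), 1)
Ea = -slope * R                              # activation energy, J/mol
print(f"Ea = {Ea / 1000:.0f} kJ/mol, A = {np.exp(intercept):.3g} 1/min")
```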

Relevance:

100.00%

Publisher:

Abstract:

Given the persistence of regional differences in labour income in Colombia, this article quantifies the share of this differential that is attributable to differences in labour market structure, understood as differences in the returns to the characteristics of the labour force. To this end, an Oaxaca-Blinder decomposition method is used to compare Bogotá, the city with the highest labour income, with other principal cities. The results of the decomposition exercise show that the differences in structure favour Bogotá and explain more than half of the total difference, indicating that reducing labour income disparities between cities requires more than upskilling the labour force: it is also necessary to investigate the causes that make the returns to characteristics differ between cities.
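A minimal sketch of the two-fold Oaxaca-Blinder decomposition on simulated data (city labels, returns and characteristics are invented, not the article's survey data): the mean log-wage gap splits exactly into an endowments part, valued at one group's returns, and a structure part capturing the difference in returns.

```python
# Two-fold Oaxaca-Blinder decomposition on simulated data:
#   mean(y_A) - mean(y_B) = (Xbar_A - Xbar_B) @ b_B  +  Xbar_A @ (b_A - b_B)
# i.e. an "endowments" term plus a "structure" (returns) term.
import numpy as np

rng = np.random.default_rng(0)
n = 2000

def simulate(beta, edu_mean):
    X = np.column_stack([np.ones(n),                 # intercept
                         rng.normal(edu_mean, 3, n), # years of schooling
                         rng.normal(15, 8, n)])      # years of experience
    return X, X @ beta + rng.normal(0, 0.3, n)       # log wage

Xa, ya = simulate(np.array([1.0, 0.10, 0.020]), edu_mean=12)  # "Bogota": higher returns
Xb, yb = simulate(np.array([0.8, 0.07, 0.015]), edu_mean=10)  # other city

ba, *_ = np.linalg.lstsq(Xa, ya, rcond=None)         # OLS coefficients per city
bb, *_ = np.linalg.lstsq(Xb, yb, rcond=None)
gap = ya.mean() - yb.mean()
endowments = (Xa.mean(0) - Xb.mean(0)) @ bb          # characteristics, at B's returns
structure = Xa.mean(0) @ (ba - bb)                   # difference in returns
print(f"gap={gap:.3f}  endowments={endowments:.3f}  structure={structure:.3f}")
```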

Relevance:

100.00%

Publisher:

Abstract:

The present study extends previous findings by examining whether defense styles, selfobject needs and attachment styles relate to Neediness and Self-Criticism, as maladaptive personality dimensions focused, respectively, on relatedness and self-definition, in an Iranian sample. Three hundred and fifty-two participants completed a socio-demographic questionnaire as well as the Persian forms of the Depressive Experiences Questionnaire, Experiences in Close Relationships-Revised, Defense Style Questionnaire, Beck Depression Inventory-II and Selfobject Needs Inventory. Two multiple linear regression analyses, entering Self-Criticism and Neediness as criterion variables, were computed. According to the results, high attachment anxiety, high immature defenses, high depressive symptoms and a high need for idealization were related to self-criticism, explaining 47% of its variance. In addition, high attachment anxiety, low mature defenses, high neurotic defenses, high avoidance of mirroring and low avoidance of idealization/twinship were related to neediness, explaining 40% of its variance. A principal component analysis was performed, entering all the studied variables. Three factors emerged: one describing a maladaptive form of psychological functioning and two describing more mature modes of psychological functioning. The results are discussed in terms of their implications for the understanding of neediness and self-criticism as maladaptive personality dimensions focused, respectively, on relatedness and self-definition.

Relevance:

100.00%

Publisher:

Abstract:

The Three-Dimensional Single-Bin-Size Bin Packing Problem is one of the most studied problems in the Cutting & Packing category. From a strictly mathematical point of view, it consists of packing a finite set of strongly heterogeneous "small" boxes, called items, into a finite set of identical "large" rectangular containers, called bins, minimizing the unused volume and requiring that the items are packed without overlapping. The great interest is mainly due to the number of real-world applications in which the problem arises, such as pallet and container loading, cutting objects out of a piece of material, and packaging design. Depending on the application, additional objective functions and practical constraints may be needed. After a brief discussion of the real-world applications of the problem and an exhaustive literature review, the design of a two-stage algorithm to solve the aforementioned problem is presented. The algorithm must be able to provide the spatial coordinates of the placed boxes' vertices as well as the optimal box input sequence, while guaranteeing geometric, stability and fragility constraints and a reduced computational time. Due to the NP-hard complexity of this type of combinatorial problem, a fusion of metaheuristic and machine learning techniques is adopted; in particular, a hybrid genetic algorithm coupled with a feedforward neural network is used. In the first stage, a rich dataset is created from a set of real input instances provided by an industrial company, and the feedforward neural network is trained on it. After training, given a new input instance, the hybrid genetic algorithm runs using the neural network output as its input parameter vector, providing the optimal solution as output. The effectiveness of the proposed work is confirmed via several experimental tests.
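A heavily simplified sketch of the two-stage coupling described above: a stand-in for the trained feedforward network supplies GA parameters for an instance, and a permutation GA searches over box input sequences decoded by a greedy packer. For brevity the packer below is one-dimensional (volumes only) with no stability or fragility constraints; everything here is illustrative, not the thesis's implementation.

```python
# Permutation GA over box input sequences, decoded by greedy first-fit.
import random

def first_fit(seq, vol, cap):
    """Greedy decoder: pack boxes in the given order, opening bins as needed."""
    bins = []                                   # remaining capacity per open bin
    for i in seq:
        for j, r in enumerate(bins):
            if vol[i] <= r:
                bins[j] -= vol[i]
                break
        else:
            bins.append(cap - vol[i])
    return len(bins)                            # fitness: fewer bins is better

def order_crossover(p1, p2):
    """OX: keep a slice of p1, fill the remaining genes in p2's order."""
    a, b = sorted(random.sample(range(len(p1)), 2))
    child = [None] * len(p1)
    child[a:b] = p1[a:b]
    rest = iter(g for g in p2 if g not in p1[a:b])
    return [g if g is not None else next(rest) for g in child]

def predict_params(volumes):
    # Stand-in for the thesis's trained feedforward network, which maps
    # instance features to GA parameters; here just a fixed heuristic.
    return {"pop": 40, "gens": 100, "mut": 0.2}

def ga_pack(volumes, cap):
    prm, n = predict_params(volumes), len(volumes)
    pop = [random.sample(range(n), n) for _ in range(prm["pop"])]
    for _ in range(prm["gens"]):
        pop.sort(key=lambda s: first_fit(s, volumes, cap))
        pop = pop[: prm["pop"] // 2]            # keep the better half
        while len(pop) < prm["pop"]:
            child = order_crossover(*random.sample(pop[: prm["pop"] // 2], 2))
            if random.random() < prm["mut"]:    # swap mutation
                i, j = random.sample(range(n), 2)
                child[i], child[j] = child[j], child[i]
            pop.append(child)
    best = min(pop, key=lambda s: first_fit(s, volumes, cap))
    return best, first_fit(best, volumes, cap)

volumes = [random.randint(10, 60) for _ in range(30)]
seq, bins_used = ga_pack(volumes, cap=100)
print(f"best sequence needs {bins_used} bins")
```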

Relevance:

100.00%

Publisher:

Abstract:

Over the last century, mathematical optimization has become a prominent tool for decision making. Its systematic application in practical fields such as economics, logistics or defense has led to the development of algorithmic methods of ever increasing efficiency. Indeed, for a variety of real-world problems, finding an optimal decision among a set of (implicitly or explicitly) predefined alternatives has become conceivable in reasonable time. In recent decades, however, the research community has paid more and more attention to the role of uncertainty in the optimization process. In particular, one may question the notions of optimality, and even feasibility, when studying decision problems with unknown or imprecise input parameters. This concern is even more critical in a world becoming more and more complex (by which we mean interconnected), where each individual variation inside a system inevitably causes other variations in the system itself. In this dissertation, we study a class of optimization problems which suffer from imprecise input data and feature a two-stage decision process, i.e., where decisions are made in a sequential order (called stages) and where unknown parameters are revealed throughout the stages. Applications of such problems abound in practical fields, e.g., facility location with uncertain demands, transportation with uncertain costs, or scheduling under uncertain processing times. The uncertainty is handled from a robust optimization (RO) viewpoint (also known as the "worst-case perspective"), and we present original contributions to the RO literature on both the theoretical and the practical side.
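A tiny illustration of the worst-case viewpoint with a finite uncertainty set (all numbers invented): a here-and-now capacity x is chosen before the demand is revealed, and the robust solution minimizes the cost under the least favourable realization.

```python
# Two-stage robust toy: choose capacity x now; after demand d is revealed,
# pay a shortage recourse. Robust objective: min_x max_{d in U} cost(x, d).
build_cost, shortage_cost = 1.0, 4.0
demands = [2.0, 5.0, 8.0]                    # finite uncertainty set U

def worst_case(x):
    return max(build_cost * x + shortage_cost * max(d - x, 0.0) for d in demands)

# Grid search suffices for one variable; the robust optimum builds for the
# largest demand here because the shortage cost outweighs the build cost.
x_star = min((i * 0.01 for i in range(1001)), key=worst_case)
print(f"robust x = {x_star:.2f}, worst-case cost = {worst_case(x_star):.2f}")
```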

Relevance:

100.00%

Publisher:

Abstract:

Latency can be defined as the sum of the arrival times at the customers. Minimum latency problems are especially relevant in applications related to humanitarian logistics. This thesis presents algorithms for solving a family of vehicle routing problems with minimum latency. First, the latency location routing problem (LLRP) is considered. It consists of determining the subset of depots to be opened, and the routes that a set of homogeneous capacitated vehicles must perform in order to visit a set of customers, such that the sum of the demands of the customers assigned to each vehicle does not exceed the vehicle capacity. For solving this problem, three metaheuristic algorithms combining simulated annealing and variable neighborhood descent, as well as an iterated local search (ILS) algorithm, are proposed. Furthermore, the multi-depot cumulative capacitated vehicle routing problem (MDCCVRP) and the multi-depot k-traveling repairman problem (MDk-TRP) are solved with the proposed ILS algorithm. The MDCCVRP is a special case of the LLRP in which all the depots can be opened, and the MDk-TRP is a special case of the MDCCVRP in which the capacity constraints are relaxed. Finally, an LLRP with stochastic travel times is studied. A two-stage stochastic programming model and a variable neighborhood search algorithm are proposed for solving the problem, and a sampling method is developed for tackling instances with an infinite number of scenarios. Extensive computational experiments show that the proposed methods are effective for solving the problems under study.
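To make the objective concrete, a minimal sketch of the latency of a single route (travel times are hypothetical): each customer's arrival time counts once, so customers served early weigh less in the total.

```python
# Latency of a route = sum of arrival times at the customers.
def route_latency(route, travel):
    """route: node sequence starting at the depot; travel: dict of edge times."""
    t, total = 0.0, 0.0
    for a, b in zip(route, route[1:]):
        t += travel[a, b]     # arrival time at customer b
        total += t
    return total

travel = {("D", 1): 4.0, (1, 2): 2.0, (2, 3): 5.0}
print(route_latency(["D", 1, 2, 3], travel))   # arrivals 4, 6, 11 -> latency 21.0
```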

Relevance:

50.00%

Publisher:

Abstract:

The effect of nozzle type and volume rate on spray deposition at the V3 stage of two soybean cultivars was evaluated. The experiments were conducted at the Facultad de Ciencias Agronomicas of UNESP-Botucatu/SP. The nozzles evaluated were an air-induction flat fan nozzle (AI 11015 at 150 L ha(-1); AI 11002 at 200 and 250 L ha(-1)), a twin flat fan nozzle (TJ60 11002 at 150, 200 and 250 L ha(-1)), and a cone nozzle (TX 6 at 150 L ha(-1), TX 8 at 150 L ha(-1) and TX 10 at 250 L ha(-1)). To evaluate spray deposition on the plants, a tracer (Brilliant Blue FD&C-1) was added to the spray solution. The experimental design was randomized blocks with four replications. Deposition on the plants was determined by absorbance readings at a wavelength of 630 nm. The data were fitted to a calibration curve and converted into deposited spray volume in mL. The deposition per unit of dry matter was fitted with a regression curve (Gompertz model). In cultivar CD 208, the highest deposits were obtained with the larger volumes and with the treatment TX 8 at 200 L ha(-1). The most uniform treatments were all the nozzles at the volume of 150 L ha(-1) and the TJ60 nozzle at 200 L ha(-1). In cultivar CD 216, the greatest spray depositions were achieved with the AI treatments at 200 and 250 L ha(-1) and TJ60 at 250 L ha(-1), and the most uniform treatments were the TX 6 and TJ60 nozzles at the volume of 150 L ha(-1).
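A minimal sketch of fitting a Gompertz curve of the kind used for the deposition/dry-matter relationship, with scipy; the data points and parameterization are invented for illustration:

```python
# Fitting a Gompertz model y = a * exp(-exp(b - c*x)) with nonlinear least
# squares; x and y below are invented stand-ins for dry matter and deposit.
import numpy as np
from scipy.optimize import curve_fit

def gompertz(x, a, b, c):
    return a * np.exp(-np.exp(b - c * x))

x = np.array([0.5, 1.0, 2.0, 3.0, 4.0, 6.0])            # dry matter (hypothetical)
y = np.array([0.08, 0.17, 0.37, 0.52, 0.59, 0.64])      # deposited volume, mL

(a, b, c), _ = curve_fit(gompertz, x, y, p0=(0.6, 1.0, 1.0))
print(f"a = {a:.3f}, b = {b:.3f}, c = {c:.3f}")
```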

Relevance:

50.00%

Publisher:

Abstract:

Background and purpose: Breast cancer continues to be a health problem for women, representing 28 percent of all female cancers and remaining one of the leading causes of death for women. Breast cancer incidence rates become substantial before the age of 50 and, after menopause, continue to increase with age, creating a long-lasting source of concern (Harris et al., 1992). Mammography, a technique for detecting breast tumors in their nonpalpable stage, when they are most curable, has taken on considerable importance as a public health measure. The lifetime risk of breast cancer is approximately 1 in 9 and accrues over many decades. Recommendations are that screening be periodic in order to detect cancer at early stages. These recommendations are largely not followed: most women are not getting regular mammograms, particularly older women, among whom regular mammography has been proven to reduce mortality by approximately 30 percent. The purpose of this project was to increase our understanding of factors associated with stage of readiness to obtain subsequent mammograms. A secondary purpose was to suggest further conceptual considerations toward the extension of the Transtheoretical Model (TTM) of behavior change to repeat screening mammography.

Methods: A sample (n = 1,222) of women 50 years and older at a large multi-specialty clinic in Houston, Texas was surveyed by mail questionnaire regarding their previous screening experience and stage of readiness to obtain repeat screening. A computerized database, maintained on all women who undergo mammography at the clinic, was used to identify women eligible for the project. The major statistical technique employed to select the significant variables and to examine the main and interaction effects of independent variables on dependent variables was stepwise polychotomous logistic regression. A prediction model for each definition of stage of readiness was estimated, and the expected probabilities for stage of readiness were calculated to assess the magnitude and direction of significant predictors.

Results: Analysis showed that both ways of defining stage of readiness for obtaining a screening mammogram were associated with specific constructs, including decisional balance and processes of change.

Conclusions: The results of the present study demonstrate that the TTM appears to translate to repeat mammography screening. Findings also support previous studies suggesting that stage of readiness is associated with decisional balance and the processes of change.
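A minimal sketch of a polychotomous (multinomial) logistic regression of the kind described, using scikit-learn on simulated data (variables and effect sizes are invented; the fitted model returns the expected per-stage probabilities):

```python
# Multinomial (polychotomous) logistic regression on simulated data:
# predict a 3-level "stage of readiness" from two covariates and report
# per-stage expected probabilities, as in the prediction models described.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 1000
age = rng.normal(62, 8, n)
balance = rng.normal(0, 1, n)                       # decisional-balance score
logits = np.column_stack([np.zeros(n),              # stage 0 as baseline
                          0.8 * balance,            # stage 1 rises with balance
                          1.2 * balance - 0.02 * (age - 62)])
probs = np.exp(logits) / np.exp(logits).sum(1, keepdims=True)
stage = np.array([rng.choice(3, p=pr) for pr in probs])

X = np.column_stack([age, balance])
model = LogisticRegression(max_iter=1000).fit(X, stage)
print(model.predict_proba(X[:3]))                   # expected stage probabilities
```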

Relevance:

40.00%

Publisher:

Abstract:

The pivotal role of spleen CD4(+) T cells in the development of both malaria pathogenesis and protective immunity makes a profound understanding of the mechanisms involved in their activation and regulation during Plasmodium infection necessary. Herein, we examined in detail the behaviour of non-conventional and conventional splenic CD4(+) T cells during P. chabaudi malaria. We took advantage of the fact that a great proportion of the CD4(+) T cells generated in CD1d(-/-) mice are I-A(b)-restricted (conventional cells), while their counterparts in I-Ab(-/-) mice are restricted by CD1d and other class IB major histocompatibility complex (MHC) molecules (non-conventional cells). We found that conventional CD4(+) T cells are the main protagonists of the immune response to infection, which develops in two consecutive phases concomitant with acute and chronic parasitaemias. The early phase of the conventional CD4(+) T cell response is intense and short-lasting, rapidly providing large amounts of proinflammatory cytokines and helping follicular and marginal zone B cells to secrete polyclonal immunoglobulin. Both TNF-alpha and IFN-gamma production depend mostly on conventional CD4(+) T cells, although IFN-gamma is produced simultaneously by non-conventional and conventional CD4(+) T cells. The early phase of the response finishes after a week of infection with the elimination of a large proportion of CD4(+) T cells, which then makes way for the development of acquired immunity. Unexpectedly, the major contribution of CD1d-restricted CD4(+) T cells occurs at the beginning of the second phase of the response, not earlier, helping both IFN-gamma and parasite-specific antibody production. We conclude that conventional CD4(+) T cells play a central role from the onset of P. chabaudi malaria, acting in parallel with non-conventional CD4(+) T cells as a link between innate and acquired immunity. This study contributes to the understanding of malaria immunology and opens a perspective for future studies designed to decipher the molecular mechanisms behind immune responses to Plasmodium infection.

Relevance:

40.00%

Publisher:

Abstract:

PURPOSE: We report the long-term results of a randomized clinical trial comparing induction therapy with single-agent rituximab (once per week for 4 weeks) alone versus the same induction followed by four cycles of maintenance therapy every 2 months in patients with follicular lymphoma. PATIENTS AND METHODS: Patients (138 with prior chemotherapy; 64 chemotherapy-naive) received single-agent rituximab and, if nonprogressive, were randomly assigned to no further treatment (observation) or four additional doses of rituximab given at 2-month intervals (prolonged exposure). RESULTS: At a median follow-up of 9.5 years, and with all living patients having been observed for at least 5 years, the median event-free survival (EFS) was 13 months in the observation arm and 24 months in the prolonged-exposure arm (P < .001). In the observation arm, 5% of patients were event-free at 8 years, versus 27% in the prolonged-exposure arm. Of previously untreated patients receiving prolonged treatment after responding to rituximab induction, 45% were still event-free at 8 years. The only favorable prognostic factor for EFS in a multivariate Cox regression was the prolonged rituximab schedule (hazard ratio, 0.59; 95% CI, 0.39 to 0.88; P = .009), whereas being chemotherapy-naive, presenting with stage lower than IV, and showing a VV phenotype at position 158 of the Fc-gamma RIIIA receptor were not of independent prognostic value. No long-term toxicity potentially due to rituximab was observed. CONCLUSION: An important proportion of patients experienced long-term remission after prolonged exposure to rituximab, particularly if they had no prior treatment and responded to rituximab induction.