956 results for: Multinomial logit models with random coefficients (RCL)
Abstract:
The general ordinary quasi-differential expression M of n-th order with complex coefficients and its formal adjoint M⁺ are considered over a region (a, b) of the real line, −∞ ≤ a < b ≤ ∞, on which the operator may have a finite number of singular points. By considering M over various subintervals on which singularities occur only at the ends, restrictions of the maximal operator generated by M in L²_w(a, b) which are regularly solvable with respect to the minimal operators T₀(M) and T₀(M⁺) are characterized. In addition to direct sums of regularly solvable operators defined on the separate subintervals, there are other regularly solvable restrictions of the maximal operator which involve linking the various intervals together in an interface-like style.
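For orientation, a minimal sketch of the standard operator-theoretic setting behind these notions (the usual adjoint-pair framework of Everitt, Zettl and Evans, stated here as background rather than quoted from the paper):

```latex
% Standard adjoint-pair relations for the minimal and maximal operators in L^2_w(a,b)
% (background sketch, not quoted from the paper):
T_0(M) \subseteq T(M), \qquad T_0(M^+) \subseteq T(M^+), \qquad
T_0(M)^{*} = T(M^{+}), \qquad T_0(M^{+})^{*} = T(M).
```

Regular solvability with respect to T₀(M) and T₀(M⁺) then singles out, roughly, closed operators S with T₀(M) ⊆ S ⊆ T(M) for which S − λI is boundedly invertible for some λ in the joint field of regularity.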
Abstract:
Some oscillation criteria for solutions of a general perturbed second-order ordinary differential equation with damping, (r(t)x′(t))′ + h(t)f(x)x′(t) + ψ(t, x) = H(t, x(t), x′(t)), with alternating coefficients are given. The results obtained improve and extend some existing results in the literature.
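As a classical point of reference (an illustrative special case, not taken from the paper): with h ≡ 0, ψ(t, x) = q(t)x and H ≡ 0 the equation reduces to the self-adjoint form to which standard oscillation criteria apply, e.g. Leighton's criterion:

```latex
% Illustrative special case (h \equiv 0, \psi(t,x) = q(t)x, H \equiv 0), with Leighton's
% classical oscillation criterion for the reduced equation:
\bigl(r(t)x'(t)\bigr)' + q(t)x(t) = 0 \ \text{ is oscillatory if }\
\int^{\infty}\frac{dt}{r(t)} = \infty \ \text{ and }\ \int^{\infty} q(t)\,dt = \infty .
```

The criteria in the paper can be read as perturbed, damped analogues of results of this type.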
Abstract:
The research is supported in part by the INTAS project 04-77-7173, http://www.intas.be
Abstract:
Unrepeatered 100 Gbit/s per channel wavelength-division-multiplexed dual-polarization QPSK transmission with random distributed feedback fiber laser-based Raman amplification using a fiber Bragg grating is demonstrated. Transmission of 1.4 Tbit/s (14 × 100 Gbit/s) was achieved over a 352.8 km link and 2.2 Tbit/s (22 × 100 Gbit/s) over 327.6 km, without employing remote optically pumped amplifiers or specialty fibers.
Abstract:
The paper deals with a single-server finite queueing system where customers who fail to get service are temporarily blocked in an orbit of inactive customers. This model and its variants have many applications, especially for the optimization of the corresponding models with retrials. We analyze the system in the non-stationary regime and, using the discrete transformations method, study the busy period length and the number of successful calls made during it. ACM Computing Classification System (1998): G.3, J.7.
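A minimal event-driven simulation sketch of a retrial ("orbit") system of this flavour, useful for sanity-checking busy-period statistics. Poisson primary arrivals, exponential service and retrial times, and a finite orbit capacity are assumptions of the sketch and need not match the model analyzed in the paper:

```python
# Illustrative retrial-queue simulation (assumed model, not the paper's): Poisson primary
# arrivals, exponential service, exponential retrials from a finite orbit.
import heapq, random

random.seed(1)
LAM, MU, THETA, ORBIT_CAP, T_END = 0.6, 1.0, 0.4, 10, 50_000.0

events = [(random.expovariate(LAM), "arrival")]   # (time, kind) priority queue
server_busy, orbit = False, 0
busy_start, served = None, 0
busy_lengths, served_counts = [], []

def start_service(now):
    global server_busy, served
    server_busy = True
    served += 1                                            # one more successful call
    heapq.heappush(events, (now + random.expovariate(MU), "departure"))

while events:
    t, kind = heapq.heappop(events)
    if t > T_END:
        break
    if kind == "arrival":
        heapq.heappush(events, (t + random.expovariate(LAM), "arrival"))  # next primary call
        if not server_busy:
            if busy_start is None:                         # busy period starts
                busy_start, served = t, 0
            start_service(t)
        elif orbit < ORBIT_CAP:
            orbit += 1                                     # blocked: joins the orbit
            heapq.heappush(events, (t + random.expovariate(THETA), "retry"))
        # else: the call is lost
    elif kind == "retry":
        if not server_busy:
            orbit -= 1
            start_service(t)
        else:
            heapq.heappush(events, (t + random.expovariate(THETA), "retry"))
    else:                                                  # departure
        server_busy = False
        if orbit == 0 and busy_start is not None:          # busy period ends
            busy_lengths.append(t - busy_start)
            served_counts.append(served)
            busy_start = None

print("mean busy period length:", sum(busy_lengths) / len(busy_lengths))
print("mean successful calls per busy period:", sum(served_counts) / len(served_counts))
```

The busy period here is taken to start when an arrival finds the system empty and to end when the server goes idle with an empty orbit; the printed averages are the two quantities the abstract studies analytically.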
Abstract:
eHabitat is a Web Processing Service (WPS) designed to compute the likelihood of finding ecosystems with equal properties. Inputs to the WPS, typically thematic geospatial "layers", can be discovered using standardised catalogues, and the outputs tailored to specific end-user needs. Because these layers can range from geophysical data captured through remote sensing to socio-economic indicators, eHabitat is exposed to a broad range of different types and levels of uncertainty. Potentially chained to other services to perform ecological forecasting, for example, eHabitat would be an additional component further propagating uncertainties along a potentially long chain of model services. This integration of complex resources increases the challenges in dealing with uncertainty. For such a system, as envisaged by initiatives such as the "Model Web" from the Group on Earth Observations, to be used for policy or decision making, users must be provided with information on the quality of the outputs, since all system components will be subject to uncertainty. UncertWeb will create the Uncertainty-Enabled Model Web by promoting interoperability between data and models with quantified uncertainty, building on existing open, international standards. The objective of this paper is to illustrate a few key ideas behind UncertWeb, using eHabitat to discuss the main types of uncertainty the WPS has to deal with and to present the benefits of using the UncertWeb framework.
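A toy sketch of the kind of uncertainty propagation the paper is concerned with: inputs with quantified (here Gaussian) uncertainty are sampled and pushed through a chain of two stand-in "model services". The functions, parameters and error model are invented for illustration and are not part of the eHabitat or UncertWeb interfaces:

```python
# Toy Monte Carlo propagation of input uncertainty through a chain of two "model services";
# the services and the Gaussian error model are assumptions of this sketch, not UncertWeb APIs.
import numpy as np

rng = np.random.default_rng(42)

def habitat_similarity(layer_a, layer_b):
    """Stand-in for a WPS-style model: similarity score from two thematic layers."""
    return np.exp(-np.abs(layer_a - layer_b))

def ecological_forecast(similarity, trend):
    """Stand-in for a downstream service chained to the first one."""
    return similarity * (1.0 + trend)

# Observed inputs with quantified (Gaussian) uncertainty.
layer_a_mean, layer_a_sd = 0.8, 0.1
layer_b_mean, layer_b_sd = 0.5, 0.2
trend_mean, trend_sd = 0.02, 0.01

n = 10_000
a = rng.normal(layer_a_mean, layer_a_sd, n)
b = rng.normal(layer_b_mean, layer_b_sd, n)
tr = rng.normal(trend_mean, trend_sd, n)

# Push every sample through the whole service chain and summarise the output distribution.
forecast = ecological_forecast(habitat_similarity(a, b), tr)
print(f"forecast mean = {forecast.mean():.3f}, 95% interval = "
      f"({np.quantile(forecast, 0.025):.3f}, {np.quantile(forecast, 0.975):.3f})")
```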
Abstract:
Researchers conducted investigations to demonstrate the advantages of the random distributed feedback fiber laser. Random lasers have advantages such as a simple technology that does not require a precise microcavity, and low production cost. The properties of their output radiation are special in comparison with those of conventional lasers, and they are characterized by complex features in the spatial, spectral, and time domains. The researchers demonstrated a new type of one-dimensional laser with random distributed feedback based on Rayleigh scattering (RS), which is present in any transparent glass medium due to natural inhomogeneities of the refractive index. The cylindrical fiber waveguide geometry provides transverse confinement, while the cavity is open in the longitudinal direction and does not include any regular point-action reflectors.
Abstract:
Objective: The objective of the study is to explore the preferences of gastroenterologists for biosimilar drugs in Crohn's disease and to reveal trade-offs between the perceived risks and benefits related to biosimilar drugs. Method: A discrete choice experiment was carried out involving 51 Hungarian gastroenterologists in May 2014. The following attributes were used to describe hypothetical choice sets: 1) type of the treatment (biosimilar/originator), 2) severity of disease, 3) availability of continuous medicine supply, 4) frequency of the efficacy check-ups. A multinomial logit model was used to differentiate between three attitude types: 1) always opting for the originator, 2) willing to consider biosimilars for biological-naïve patients only, 3) willing to consider biosimilar treatment for both types of patients. A conditional logit model was used to estimate the probabilities of choosing a given profile. Results: Men, senior consultants, those working in an IBD center, and those treating more patients are more likely to be willing to consider biosimilars for biological-naïve patients only. Treatment type (originator/biosimilar) was the most important determinant of choice for patients already treated with biologicals, and the availability of continuous medicine supply in the case of biological-naïve patients. The probabilities of choosing the biosimilar with all the benefits offered over the originator under current reimbursement conditions are 89% vs 11% for new patients, and 44% vs 56% for patients already treated with biologicals. Conclusions: Gastroenterologists were willing to trade between perceived risks and benefits of biosimilars. Continuous medicine supply would be one of the major benefits of biosimilars. However, the benefits offered in the scenarios do not compensate for switching from the originator to the biosimilar treatment for patients already treated with biologicals.
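A minimal sketch of the conditional logit machinery referred to above, fitted to simulated choice-set data by maximum likelihood. The attributes, sample sizes and coefficient values are invented for illustration; this is not the study's dataset or estimation code:

```python
# Hand-rolled conditional logit on simulated discrete-choice data (illustrative assumptions only).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n_sets, n_alts, n_attr = 200, 3, 4          # choice sets, alternatives per set, attributes
X = rng.normal(size=(n_sets, n_alts, n_attr))
beta_true = np.array([1.0, -0.5, 0.8, 0.3])

# Simulate choices: adding Gumbel noise to utilities yields conditional-logit probabilities
# P(alt j | set s) = exp(x_sj . beta) / sum_k exp(x_sk . beta).
util = X @ beta_true + rng.gumbel(size=(n_sets, n_alts))
y = util.argmax(axis=1)                      # index of the chosen alternative in each set

def neg_loglik(beta):
    v = X @ beta                             # systematic utilities, shape (n_sets, n_alts)
    v = v - v.max(axis=1, keepdims=True)     # numerical stability
    logp = v - np.log(np.exp(v).sum(axis=1, keepdims=True))
    return -logp[np.arange(n_sets), y].sum()

res = minimize(neg_loglik, x0=np.zeros(n_attr), method="BFGS")
print("estimated attribute weights:", res.x.round(2))
```

A random-coefficients (mixed logit) variant, as in the search topic, would replace the fixed coefficient vector with draws from a population distribution and integrate the choice probabilities over those draws, typically by simulation.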
Abstract:
Ensuring the correctness of software has been the major motivation in software research, constituting a Grand Challenge. Due to its impact on the final implementation, one critical aspect of software is its architectural design. By guaranteeing a correct architectural design, major and costly flaws can be caught early in the development cycle. Software architecture design has received a lot of attention in recent years, with several methods, techniques and tools developed. However, there is still more to be done, such as providing adequate formal analysis of software architectures. In this regard, a framework to ensure system dependability from design to implementation has been developed at FIU (Florida International University). This framework is based on SAM (Software Architecture Model), an ADL (Architecture Description Language) that allows hierarchical compositions of components and connectors, defines an architectural modeling language for the behavior of components and connectors, and provides a specification language for behavioral properties. The behavioral model of a SAM model is expressed in the form of Petri nets and the properties in first-order linear temporal logic. This dissertation presents a formal verification and testing approach to guarantee the correctness of software architectures. The software architectures studied are expressed in SAM. For the formal verification approach, the technique applied was model checking and the model checker of choice was Spin. As part of the approach, a SAM model is formally translated to a model in the input language of Spin and verified for its correctness with respect to temporal properties. In terms of testing, a testing approach for SAM architectures was defined which includes the evaluation of test cases based on Petri net testing theory to be used in the testing process at the design level. Additionally, the information at the design level is used to derive test cases for the implementation level. Finally, a modeling and analysis tool (the SAM tool) was implemented to help support the design and analysis of SAM models. The results show the applicability of the approach to testing and verification of SAM models with the aid of the SAM tool.
Abstract:
Widespread damage to roofing materials (such as tiles and shingles) on low-rise buildings, even in weaker hurricanes, has raised concerns regarding design load provisions and construction practices. Currently, the building codes used for designing low-rise building roofs are mainly based on testing results from building models that generally do not simulate the architectural features of roofing materials, which may significantly influence the wind-induced pressures. Full-scale experimentation was conducted under high winds to investigate the effects of architectural details of high-profile roof tiles and asphalt shingles on the net pressures that are often responsible for damage to these roofing materials. Effects on the vulnerability of roofing materials were also studied. Different roof models with bare, tiled, and shingled roof decks were tested. Pressures acting on both the top and bottom surfaces of the roofing materials were measured to understand their effects on the net uplift loading. The area-averaged peak pressure coefficients obtained from bare, tiled, and shingled roof decks were compared. In addition, a set of wind tunnel tests on a tiled roof deck model was conducted to verify the effects of the tiles' cavity internal pressure. Both the full-scale and the wind tunnel test results showed that the underside pressure of a roof tile could either aggravate or alleviate wind uplift on the tile, depending on its orientation on the roof with respect to the wind angle of attack. For shingles, the underside pressure could aggravate wind uplift if the shingle is located near the center of the roof deck. Bare-deck modeling to estimate design wind uplift on shingled decks may be acceptable for most locations but not for field locations, where it could underestimate the uplift on shingles by 30-60%. In addition, some initial quantification of the effects of roofing materials on wind uplift was performed by studying the wind uplift load ratio for tiled versus bare decks and shingled versus bare decks. Vulnerability curves with and without considering the effects of the tiles' cavity internal pressure showed significant differences. Aerodynamic load provisions for low-rise building roofs and their vulnerability can thus be more accurately evaluated by considering the effects of the roofing materials.
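For reference, the net pressure quantities compared in the study follow the usual convention (a standard definition, not a formula quoted from the abstract):

```latex
% Net (uplift-relevant) pressure on a roofing element, standard convention:
C_{p,\mathrm{net}} = C_{p,\mathrm{top}} - C_{p,\mathrm{bottom}},
\qquad p_{\mathrm{net}} = \tfrac{1}{2}\,\rho\, U^{2}\, C_{p,\mathrm{net}},
```

so the wind uplift load ratios reported here compare the net load on a tiled (or shingled) deck with the load inferred from bare-deck pressure measurements alone.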
Abstract:
Engineering analysis of geometric models has been the main, if not the only, credible tool used by engineers and scientists to resolve physical boundary problems. High-speed computers have facilitated the accuracy and validation of the expected results. In practice, an engineering analysis is composed of two parts: the design of the model and the analysis of the geometry with the boundary conditions and constraints imposed on it. Numerical methods are used to resolve a large number of physical boundary problems independent of the model geometry. The time expended on the computational process is related to the imposed boundary conditions and to how well conformed the geometry is. Any geometric model that contains gaps or open lines is considered an imperfect geometric model, and major commercial solver packages are incapable of handling such inputs. Other packages apply different kinds of methods, such as patching or zippering, to resolve this problem; but the final resolved geometry may differ from the original geometry, and the changes may be unacceptable. The study proposed in this dissertation is based on a new technique to process models with geometric imperfections without the need to repair or change the original geometry. An algorithm is presented that is able to analyze an imperfect geometric model with the imposed boundary conditions using a meshfree method and a distance field approximation to the boundaries. Experiments are proposed to analyze the convergence of the algorithm on imperfect model geometries, and the results will be compared with those for the same models with perfect geometries. Plots of the results will be presented to support further analysis and conclusions about the algorithm's convergence.
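A minimal sketch of the distance-field ingredient: the approximate distance from evaluation points to a sampled boundary that contains a gap, computed here with a k-d tree. The circle-with-a-gap geometry and the use of scipy are illustrative assumptions; the dissertation's meshfree solver itself is not reproduced:

```python
# Approximate distance field to a sampled boundary with a gap, evaluated on a grid.
# The geometry and the k-d tree nearest-neighbour query are illustrative assumptions.
import numpy as np
from scipy.spatial import cKDTree

# Boundary sample points: a unit circle with a missing arc around theta = 0 (an "imperfect" geometry).
theta = np.linspace(0.1 * np.pi, 1.9 * np.pi, 400)
boundary = np.column_stack([np.cos(theta), np.sin(theta)])

# Evaluation grid covering the model.
xs, ys = np.meshgrid(np.linspace(-1.5, 1.5, 200), np.linspace(-1.5, 1.5, 200))
grid = np.column_stack([xs.ravel(), ys.ravel()])

# Distance field: distance from every grid point to the nearest boundary sample.
dist, _ = cKDTree(boundary).query(grid)
phi = dist.reshape(xs.shape)            # phi ~ 0 near the (gappy) boundary

print("min/max distance on the grid:", phi.min().round(4), phi.max().round(4))
```

Because the distance field is defined everywhere regardless of the gap, a meshfree method can impose boundary conditions approximately through it without first repairing the geometry.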
Abstract:
The sedimentary sections of three cores from the Celtic margin provide high-resolution records of the terrigenous fluxes during the last glacial cycle. A total of 21 AMS ¹⁴C dates allow us to define age models with a resolution better than 100 yr during critical periods such as Heinrich events 1 and 2. Maximum sedimentary fluxes occurred at the Meriadzek Terrace site during the Last Glacial Maximum (LGM). Detailed X-ray imagery of core MD95-2002 from the Meriadzek Terrace shows no sedimentary structures suggestive of either deposition from high-density turbidity currents or significant erosion. Two paroxysmal terrigenous flux episodes have been identified. The first occurred after the deposition of Heinrich event 2 Canadian ice-rafted debris (IRD) and includes IRD from European sources. We suggest that the second represents an episode of deposition from turbid plumes, which precedes the IRD deposition associated with Heinrich event 1. At the end of marine isotopic stage 2 (MIS 2) and the beginning of MIS 1, the highest fluxes are recorded on the Whittard Ridge, where they correspond to deposition from turbidity current overflows. Canadian icebergs rafted debris to the Celtic margin during Heinrich events 1, 2, 4 and 5. The high-resolution records of Heinrich events 1 and 2 show that in both cases the arrival of the Canadian icebergs was preceded by a European ice-rafting precursor event, which took place about 1-1.5 kyr earlier. Two rafting episodes of European IRD also occurred immediately after Heinrich event 2 and just before Heinrich event 1. The terrigenous fluxes recorded in core MD95-2002 during the LGM are the highest reported at hemipelagic sites on the northwestern European margin. The magnitude of the Canadian IRD fluxes at the Meriadzek Terrace is similar to that at oceanic sites.
Abstract:
Random walk models with temporal correlation (i.e. memory) are of interest in the study of anomalous diffusion phenomena. The random walk and its generalizations occupy a prominent place in the characterization of various physical, chemical and biological phenomena. Temporal correlation is an essential feature of anomalous diffusion models. Models with long-range temporal correlations can be called non-Markovian, whereas their short-range counterparts are Markovian. Within this context, we review the existing models with temporal correlation: full memory (the elephant walk model) and partial memory (the Alzheimer walk model and the walk model with a Gaussian memory profile). It is noted that these models show superdiffusion with a Hurst exponent H > 1/2. In this work we study a superdiffusive random walk model with exponentially decaying memory. This seems to be a self-contradictory statement, since it is well known that random walks with exponentially decaying temporal correlations can be approximated arbitrarily well by Markov processes, and that central limit theorems prohibit superdiffusion for Markovian walks with finite variance of step sizes. The solution to the apparent paradox is that the model is genuinely non-Markovian, due to a time-dependent decay constant associated with the exponential behavior. Finally, we discuss ideas for future investigations.
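A short simulation sketch of the elephant walk model mentioned above, following the standard step rule (recall a uniformly chosen earlier step; repeat it with probability p, reverse it otherwise). The chosen memory parameter and the Hurst-exponent estimate from the growth of the mean-squared displacement are illustrative:

```python
# Elephant random walk simulation; p = 0.9 (> 3/4) puts the walk in its superdiffusive regime.
import numpy as np

rng = np.random.default_rng(0)
p, n_steps, n_walkers = 0.9, 4000, 500

steps = np.zeros((n_walkers, n_steps), dtype=int)
steps[:, 0] = rng.choice([-1, 1], size=n_walkers)          # unbiased first step
for t in range(1, n_steps):
    # Recall one uniformly chosen earlier step per walker; repeat with prob. p, reverse otherwise.
    recalled = steps[np.arange(n_walkers), rng.integers(0, t, size=n_walkers)]
    repeat = rng.random(n_walkers) < p
    steps[:, t] = np.where(repeat, recalled, -recalled)

x = steps.cumsum(axis=1)
msd = (x.astype(float) ** 2).mean(axis=0)                  # mean-squared displacement

# <x^2(t)> ~ t^{2H}; estimate H from the late-time slope on a log-log scale.
t = np.arange(1, n_steps + 1)
H = np.polyfit(np.log(t[n_steps // 2:]), np.log(msd[n_steps // 2:]), 1)[0] / 2
print(f"estimated Hurst exponent H ≈ {H:.2f} (theory: 2p - 1 = {2 * p - 1:.2f} for p > 3/4)")
```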
Abstract:
X-ray computed tomography (CT) is a non-invasive medical imaging technique that generates cross-sectional images by acquiring attenuation-based projection measurements at multiple angles. Since its first introduction in the 1970s, substantial technical improvements have led to the expanding use of CT in clinical examinations. CT has become an indispensable imaging modality for the diagnosis of a wide array of diseases in both pediatric and adult populations [1, 2]. Currently, approximately 272 million CT examinations are performed annually worldwide, with nearly 85 million of these in the United States alone [3]. Although this trend has decelerated in recent years, CT usage is still expected to increase mainly due to advanced technologies such as multi-energy [4], photon counting [5], and cone-beam CT [6].
Despite the significant clinical benefits, concerns have been raised regarding the population-based radiation dose associated with CT examinations [7]. From 1980 to 2006, the effective dose from medical diagnostic procedures rose six-fold, with CT contributing to almost half of the total dose from medical exposure [8]. For each patient, the risk associated with a single CT examination is likely to be minimal. However, the relatively large population-based radiation level has led to enormous efforts among the community to manage and optimize the CT dose.
As promoted by the international campaigns Image Gently and Image Wisely, exposure to CT radiation should be appropriate and safe [9, 10]. It is thus a shared responsibility to optimize the radiation dose used in CT examinations. The key to dose optimization is to determine the minimum amount of radiation dose that achieves the targeted image quality [11]. Based on this principle, dose optimization would significantly benefit from effective metrics to characterize radiation dose and image quality for a CT exam. Moreover, if accurate predictions of radiation dose and image quality were possible before the initiation of the exam, it would be feasible to personalize the exam by adjusting the scanning parameters to achieve a desired level of image quality. The purpose of this thesis is to design and validate models to prospectively quantify patient-specific radiation dose and task-based image quality. The dual aim of the study is to implement the theoretical models in clinical practice by developing an organ-based dose monitoring system and an image-based noise addition software tool for protocol optimization.
More specifically, Chapter 3 aims to develop an organ dose-prediction method for CT examinations of the body under constant tube current conditions. The study effectively modeled anatomical diversity and complexity using a large number of patient models with a representative age, size, and gender distribution. The dependence of organ dose coefficients on patient size and scanner model was further evaluated. Distinct from prior work, these studies use the largest number of patient models to date, with a representative age, weight percentile, and body mass index (BMI) range.
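For context, CTDIvol-normalized organ dose coefficients of the kind evaluated here are commonly parameterized as an exponential function of patient size; the functional form below is that widely used convention, stated as background rather than quoted from the chapter:

```latex
% Common size-dependent parameterization of organ dose coefficients (assumed form):
h_{\mathrm{organ}}(d) \;=\; \frac{D_{\mathrm{organ}}}{\mathrm{CTDI}_{\mathrm{vol}}}
\;\approx\; e^{\,\alpha - \beta d},
\qquad\text{so}\qquad
D_{\mathrm{organ}} \;\approx\; \mathrm{CTDI}_{\mathrm{vol}}\; e^{\,\alpha - \beta d},
```

where d is an effective patient diameter and α, β are organ-, protocol- and scanner-specific fit constants.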
With effective quantification of organ dose under constant tube current conditions, Chapter 4 aims to extend the organ dose prediction system to tube current modulated (TCM) CT examinations. The prediction, applied to chest and abdominopelvic exams, was achieved by combining a convolution-based estimation technique that quantifies the radiation field, a TCM scheme that emulates modulation profiles from major CT vendors, and a library of computational phantoms with representative sizes, ages, and genders. The prospective quantification model is validated by comparing the predicted organ dose with the dose estimated from Monte Carlo simulations in which the TCM function is explicitly modeled.
Chapter 5 aims to implement the organ dose-estimation framework in clinical practice by developing an organ dose-monitoring program based on commercial software (Dose Watch, GE Healthcare, Waukesha, WI). In the first phase of the study we focused on body CT examinations, so the patient's major body landmark information was extracted from the patient scout image in order to match clinical patients against a computational phantom in the library. The organ dose coefficients were estimated based on the CT protocol and patient size, as reported in Chapter 3. The exam CTDIvol, DLP, and TCM profiles were extracted and used to quantify the radiation field using the convolution technique proposed in Chapter 4.
With effective methods to predict and monitor organ dose, Chapter 6 aims to develop and validate improved measurement techniques for image quality assessment. Chapter 6 outlines the method that was developed to assess and predict quantum noise in clinical body CT images. Compared with previous phantom-based studies, this study accurately assessed the quantum noise in clinical images and further validated the correspondence between phantom-based measurements and the expected clinical image quality as a function of patient size and scanner attributes.
Chapter 7 aims to develop a practical strategy to generate hybrid CT images and assess the impact of dose reduction on diagnostic confidence for the diagnosis of acute pancreatitis. The general strategy is (1) to simulate synthetic CT images at multiple reduced-dose levels from clinical datasets using an image-based noise addition technique; (2) to develop quantitative and observer-based methods to validate the realism of simulated low-dose images; (3) to perform multi-reader observer studies on the low-dose image series to assess the impact of dose reduction on the diagnostic confidence for multiple diagnostic tasks; and (4) to determine the dose operating point for clinical CT examinations based on the minimum diagnostic performance to achieve protocol optimization.
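A simplified sketch of step (1), image-based noise insertion for emulating a reduced-dose acquisition. The white Gaussian noise model below ignores the noise texture, correlation, and object dependence that a clinically validated tool must handle, so it is an assumption of the sketch rather than the technique developed in the chapter:

```python
# Simplified image-domain noise insertion: to emulate a scan at a fraction f of the original
# dose, add zero-mean noise so that the total noise variance scales as 1/f.
import numpy as np

rng = np.random.default_rng(3)

def simulate_reduced_dose(image_hu, sigma_full, dose_fraction):
    """Return a synthetic image at `dose_fraction` of the acquired dose.

    image_hu      : 2-D array of CT numbers at full dose
    sigma_full    : estimated quantum-noise standard deviation (HU) in the full-dose image
    dose_fraction : target dose as a fraction of the acquired dose (0 < f <= 1)
    """
    # Quantum noise variance scales inversely with dose: sigma_f^2 = sigma_full^2 / f,
    # so the added noise needs variance sigma_full^2 * (1/f - 1).
    sigma_add = sigma_full * np.sqrt(1.0 / dose_fraction - 1.0)
    return image_hu + rng.normal(0.0, sigma_add, size=image_hu.shape)

full_dose = rng.normal(40.0, 12.0, size=(256, 256))   # stand-in for a clinical slice
half_dose = simulate_reduced_dose(full_dose, sigma_full=12.0, dose_fraction=0.5)
print("noise (std) full vs simulated half dose:",
      full_dose.std().round(1), half_dose.std().round(1))
```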
Chapter 8 concludes the thesis with a summary of accomplished work and a discussion about future research.