Abstract:
The climate in the Arctic is changing faster than anywhere else on earth. Poorly understood feedback processes relating to Arctic clouds and aerosol–cloud interactions contribute to a poor understanding of the present changes in the Arctic climate system, and also to a large spread in projections of future climate in the Arctic. The problem is exacerbated by the paucity of research-quality observations in the central Arctic. Improved formulations in climate models require such observations, which can only come from measurements in situ in this difficult-to-reach region with logistically demanding environmental conditions. The Arctic Summer Cloud Ocean Study (ASCOS) was the most extensive central Arctic Ocean expedition with an atmospheric focus during the International Polar Year (IPY) 2007–2008. ASCOS focused on the study of the formation and life cycle of low-level Arctic clouds. ASCOS departed from Longyearbyen on Svalbard on 2 August and returned on 9 September 2008. In transit into and out of the pack ice, four short research stations were undertaken in the Fram Strait: two in open water and two in the marginal ice zone. After traversing the pack ice northward, an ice camp was set up on 12 August at 87°21' N, 01°29' W and remained in operation through 1 September, drifting with the ice. During this time, extensive measurements were taken of atmospheric gas and particle chemistry and physics, mesoscale and boundary-layer meteorology, marine biology and chemistry, and upper ocean physics. ASCOS provides a unique interdisciplinary data set for development and testing of new hypotheses on cloud processes, their interactions with the sea ice and ocean and associated physical, chemical, and biological processes and interactions. 
For example, the first-ever quantitative observation of bubbles in Arctic leads, combined with the unique discovery of marine organic material, polymer gels with an origin in the ocean, inside cloud droplets suggests the possibility of primary marine organically derived cloud condensation nuclei in Arctic stratocumulus clouds. Direct observations of aerosol surface fluxes could not, however, explain the observed variability in aerosol concentrations, and the balance between local and remote aerosol sources remains an open question. Lack of cloud condensation nuclei (CCN) was at times a controlling factor in low-level cloud formation, and hence for the impact of clouds on the surface energy budget. ASCOS provided detailed measurements of the surface energy balance from late summer melt into the initial autumn freeze-up, and documented the effects of clouds and storms on the surface energy balance during this transition. In addition to such process-level studies, the unique, independent ASCOS data set can be and is being used for validation of satellite retrievals, operational models, and reanalysis data sets.
Abstract:
This PhD project set out to explore the role of emotion during learning in sport, focusing on how actions, emotions and cognitions interact under the influence of constraints. Key outcomes include the development of a theoretical concept, Affective Learning Design, and a new tool for assessing the intensity of emotions during learning, the Sport Learning and Emotions Questionnaire. The findings presented in this thesis provide both theoretical and practical implications, discussing why emotion should be considered in the design of learning environments in sport.
Abstract:
Assessing airport service performance requires an understanding of the complete set of passenger experiences covering all activities from departures to arrivals. Weight-based indicator models allow passengers to express their priority on certain evaluation criteria (airport domains) and their service attributes over the others. The application of multilevel regression analysis in questionnaire design is expected to overcome limitations of traditional questionnaires, which require application of all indicators with equal weight. The development of a Taxonomy of Passenger Activities (TOPA), which captures all passenger processing and discretionary activities, has provided a novel perspective in understanding passenger experience in various airport domains. Based on further literature reviews on various service attributes at airport passenger terminals, this paper presents a questionnaire design that employs a weighting method for all activities from the time passengers enter an airport domain at the departure terminal until leaving the arrival terminal (i.e. seven airport domains for departure, four airport domains during transit, and seven airport domains for arrival). The procedure of multilevel regression analysis is aimed not only at identifying the ranking of each evaluation criterion from the most important to the least important but also at explaining the relationship between service attributes in each airport domain and overall service performance.
Abstract:
The UDP-glucuronosyltransferases (UGTs) are enzymes of the phase II metabolic system. These enzymes catalyze the transfer of α-D-glucuronic acid from UDP-glucuronic acid to aglycones bearing nucleophilic groups, affording exclusively their corresponding β-D-glucuronides to render lipophilic endobiotics and xenobiotics more water-soluble. This detoxification pathway aids in the urinary and biliary excretion of lipophilic compounds, thus preventing their accumulation to harmful levels. The aim of this study was to investigate the effect of stereochemical and steric features of substrates on the glucuronidation catalyzed by UGTs 2B7 and 2B17. Furthermore, this study relates to the design and synthesis of novel, selective inhibitors that display high affinity for the key enzyme involved in drug glucuronidation, UGT2B7. The starting point for the development of inhibitors was to assess the influence of the stereochemistry of substrates on the UGT-catalyzed glucuronidation reaction. A set of 28 enantiomerically pure alcohols was subjected to glucuronidation assays employing the human UGT isoforms 2B7 and 2B17. Both UGT enzymes displayed high stereoselectivity, favoring the glucuronidation of the (R)-enantiomers over their respective mirror-image compounds. The spatial arrangement of the hydroxy group of the substrate determined the rate of the UGT-catalyzed reaction. However, the affinity of the enantiomeric substrates to the enzymes was not significantly influenced by the spatial orientation of the nucleophilic hydroxy group. Based on these results, a rational approach for the design of inhibitors was developed by addressing the stereochemical features of substrate molecules. Further studies showed that the rate of the enzymatic glucuronidation of substrates was also highly dependent on the steric demand in the vicinity of the nucleophilic hydroxy group.
These findings provided a rational approach to turn high-affinity substrates into true UGT inhibitors by addressing stereochemical and steric features of substrate molecules. The tricyclic sesquiterpenols longifolol and isolongifolol were identified as high-affinity substrates that displayed high selectivity for the UGT isoform 2B7. These compounds therefore served as lead structures for the design of potent and selective inhibitors for UGT2B7. Selective and potent inhibitors were prepared by synthetically modifying the lead compounds longifolol and isolongifolol, taking stereochemical and steric features into account. The best inhibitor of UGT2B7, β-phenyllongifolol, displayed an inhibition constant of 0.91 nM.
Abstract:
This paper considers the one-sample sign test for data obtained from general ranked set sampling when the number of observations for each rank is not necessarily the same, and proposes a weighted sign test because observations with different ranks are not identically distributed. The optimal weight for each observation is distribution-free and depends only on its associated rank. It is shown analytically that (1) the weighted version always improves the Pitman efficiency for all distributions; and (2) the optimal design is to select the median from each ranked set.
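The weighted statistic described above can be sketched as follows. This is a minimal illustration, not the paper's derivation: the per-rank weights are assumed to be given (the paper derives the optimal, distribution-free choice), and the normal approximation under the null follows from each sign indicator being Bernoulli(1/2):

```python
import math

def weighted_sign_test(obs, m0):
    """One-sample weighted sign test of H0: median = m0.

    obs: list of (value, weight) pairs, where each weight depends only on
    the observation's rank within its set (weights assumed supplied).
    Returns the weighted sign statistic and its normal-approximation z-score.
    """
    s = sum(w for x, w in obs if x > m0)        # weighted count of positive signs
    mean0 = sum(w for _, w in obs) / 2.0        # E[S] under H0
    var0 = sum(w * w for _, w in obs) / 4.0     # Var[S] under H0
    z = (s - mean0) / math.sqrt(var0)
    return s, z
```

With all weights equal to one this reduces to the ordinary one-sample sign test.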
Abstract:
Nahhas, Wolfe, and Chen (2002, Biometrics 58, 964-971) considered optimal set size for ranked set sampling (RSS) with fixed operational costs. This framework can be very useful in practice to determine whether RSS is beneficial and to obtain the optimal set size that minimizes the variance of the population estimator for a fixed total cost. In this article, we propose a scheme of general RSS in which more than one observation can be taken from each ranked set. This is shown to be more cost-effective in some cases when the cost of ranking is not so small. We demonstrate, using the example in Nahhas, Wolfe, and Chen (2002, Biometrics 58, 964-971), that taking two or more observations from one set, even with the optimal set size from the RSS design, can be more beneficial.
Abstract:
This article is motivated by a lung cancer study where a regression model is involved and the response variable is too expensive to measure but the predictor variable can be measured easily with relatively negligible cost. This situation occurs quite often in medical studies, quantitative genetics, and ecological and environmental studies. In this article, by using the idea of ranked-set sampling (RSS), we develop sampling strategies that can reduce cost and increase efficiency of the regression analysis for the above-mentioned situation. The developed method is applied retrospectively to a lung cancer study. In the lung cancer study, the interest is to investigate the association between smoking status and three biomarkers: polyphenol DNA adducts, micronuclei, and sister chromatid exchanges. Optimal sampling schemes with different optimality criteria such as A-, D-, and integrated mean square error (IMSE)-optimality are considered in the application. With set size 10 in RSS, the improvement of the optimal schemes over simple random sampling (SRS) is substantial. For instance, by using the optimal scheme with IMSE-optimality, the IMSEs of the estimated regression functions for the three biomarkers are reduced to about half of those incurred by using SRS.
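The basic RSS mechanism underlying these strategies can be sketched as follows. This is a minimal sketch of balanced RSS assuming perfect ranking; here units are ranked by their own values, whereas in a study like the one above the ranking would come from the cheap predictor variable, and only the selected unit per set would incur the expensive response measurement:

```python
import random

def ranked_set_sample(population, set_size, n_cycles, rng=None):
    """Balanced ranked-set sampling.

    In each cycle, draw `set_size` independent sets of `set_size` units,
    rank each set, and keep the r-th ranked unit from the r-th set, so
    every rank contributes equally to the final sample.
    """
    rng = rng or random.Random()
    sample = []
    for _ in range(n_cycles):
        for r in range(set_size):
            judgment_set = sorted(rng.sample(population, set_size))
            sample.append(judgment_set[r])  # only this unit is fully measured
    return sample
```

Only set_size * n_cycles expensive measurements are taken, while the ranking step spreads them across the distribution, which is the source of the efficiency gain over SRS.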
Abstract:
The single electron transfer-nitroxide radical coupling (SET-NRC) reaction has been used to produce multiblock polymers with high molecular weights in under 3 min at 50 °C by coupling a difunctional telechelic polystyrene (Br-PSTY-Br) with a dinitroxide. The well-known combination of dimethyl sulfoxide as solvent and Me6TREN as ligand facilitated the in situ disproportionation of Cu(I)Br to the highly active nascent Cu(0) species. This SET reaction allowed polymeric radicals to be rapidly formed from their corresponding halide end-groups. Trapping of these carbon-centred radicals at close to diffusion-controlled rates by dinitroxides resulted in high-molecular-weight multiblock polymers. Our results showed that the disproportionation of Cu(I) was critical in obtaining these ultrafast reactions, and confirmed that activation was primarily through Cu(0). We took advantage of the reversibility of the NRC reaction at elevated temperatures to decouple the multiblock back to the original PSTY building block through capping the chain-ends with mono-functional nitroxides. These alkoxyamine end-groups were further exchanged with an alkyne mono-functional nitroxide (TEMPO–≡) and ‘clicked’ by a Cu(I)-catalyzed azide/alkyne cycloaddition (CuAAC) reaction with N3–PSTY–N3 to reform the multiblocks. This final ‘click’ reaction, even after the consecutive decoupling and nitroxide-exchange reactions, still produced high-molecular-weight multiblocks efficiently. These SET-NRC reactions would have ideal applications in re-usable plastics and possibly as self-healing materials.
Abstract:
Gaussian processes (GPs) are promising Bayesian methods for classification and regression problems. The design of a GP classifier and making predictions with it are, however, computationally demanding, especially when the training set size is large. Sparse GP classifiers are known to overcome this limitation. In this letter, we propose and study a validation-based method for sparse GP classifier design. The proposed method uses a negative log predictive (NLP) loss measure, which is easy to compute for GP models. We use this measure for both basis vector selection and hyperparameter adaptation. The experimental results on several real-world benchmark data sets show better or comparable generalization performance over existing methods.
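For a binary classifier, the NLP loss measure can be computed directly from held-out predictive probabilities. A minimal sketch (the probability clipping is an added numerical safeguard, not part of the letter's method):

```python
import math

def negative_log_predictive(y_true, p_pred, eps=1e-12):
    """Average negative log predictive probability for binary labels.

    y_true: labels in {0, 1}; p_pred: predicted P(y = 1) for each case.
    Lower values indicate better-calibrated predictions.
    """
    total = 0.0
    for y, p in zip(y_true, p_pred):
        p = min(max(p, eps), 1.0 - eps)   # guard against log(0)
        total -= math.log(p if y == 1 else 1.0 - p)
    return total / len(y_true)
```

A validation-based design loop would evaluate this loss on a held-out set while greedily adding basis vectors and adapting hyperparameters, keeping the choices that reduce it.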
Abstract:
OBJECTIVE Corneal confocal microscopy is a novel diagnostic technique for the detection of nerve damage and repair in a range of peripheral neuropathies, in particular diabetic neuropathy. Normative reference values are required to enable clinical translation and wider use of this technique. We have therefore undertaken a multicenter collaboration to provide worldwide age-adjusted normative values of corneal nerve fiber parameters. RESEARCH DESIGN AND METHODS A total of 1,965 corneal nerve images from 343 healthy volunteers were pooled from six clinical academic centers. All subjects underwent examination with the Heidelberg Retina Tomograph corneal confocal microscope. Images of the central corneal subbasal nerve plexus were acquired by each center using a standard protocol and analyzed by three trained examiners using manual tracing and semiautomated software (CCMetrics). Age trends were established using simple linear regression, and normative corneal nerve fiber density (CNFD), corneal nerve fiber branch density (CNBD), corneal nerve fiber length (CNFL), and corneal nerve fiber tortuosity (CNFT) reference values were calculated using quantile regression analysis. RESULTS There was a significant linear age-dependent decrease in CNFD (-0.164 no./mm² per year for men, P < 0.01, and -0.161 no./mm² per year for women, P < 0.01). There was no change with age in CNBD (0.192 no./mm² per year for men, P = 0.26, and -0.050 no./mm² per year for women, P = 0.78). CNFL decreased in men (-0.045 mm/mm² per year, P = 0.07) and women (-0.060 mm/mm² per year, P = 0.02). CNFT increased with age in men (0.044 per year, P < 0.01) and women (0.046 per year, P < 0.01). Height, weight, and BMI did not influence the 5th percentile normative values for any corneal nerve parameter.
CONCLUSIONS This study provides robust worldwide normative reference values for corneal nerve parameters to be used in research and clinical practice in the study of diabetic and other peripheral neuropathies.
Abstract:
This paper describes an algorithm to compute the union, intersection and difference of two polygons using a scan-grid approach. Basically, in this method, the screen is divided into cells and the algorithm is applied to each cell in turn. The output from all the cells is integrated to yield a representation of the output polygon. In most cells, no computation is required and thus the algorithm is a fast one. The algorithm has been implemented for polygons but can be extended to polyhedra as well. The algorithm is shown to take O(N) time in the average case where N is the total number of edges of the two input polygons.
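The cell-by-cell idea can be illustrated with a coarse raster approximation: classify each cell centre against both polygons and combine the results with the requested boolean operation. This sketch only tests cell centres, so it is an approximation for illustration, not the paper's exact edge-based O(N) algorithm:

```python
def point_in_polygon(x, y, poly):
    """Even-odd ray-casting test; poly is a list of (x, y) vertices."""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge crosses the horizontal ray
            xint = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < xint:
                inside = not inside
    return inside

def scan_grid_boolean(poly_a, poly_b, op, width, height, cell=1.0):
    """Approximate union/intersection/difference on a width x height grid."""
    ops = {"union": lambda a, b: a or b,
           "intersection": lambda a, b: a and b,
           "difference": lambda a, b: a and not b}
    grid = []
    for j in range(height):
        row = []
        for i in range(width):
            cx, cy = (i + 0.5) * cell, (j + 0.5) * cell
            a = point_in_polygon(cx, cy, poly_a)
            b = point_in_polygon(cx, cy, poly_b)
            row.append(ops[op](a, b))
        grid.append(row)
    return grid
```

The cost per cell is constant once the cell is known to be empty of edges, which is the intuition behind the average-case O(N) behaviour reported in the paper.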
Abstract:
The feasibility of realising a high-order LC filter with a small set of different capacitor values, without sacrificing the frequency response specifications, is indicated. This idea could be conveniently adopted in other filter structures as well, for example FDNR-transformed filter realisations.
Abstract:
We report on ongoing research to develop a design theory for classes of information systems that allow for work practices that exhibit a minimal harmful impact on the natural environment. We call such information systems Green IS. In this paper we describe the building blocks of our Green IS design theory, which develops prescriptions for information systems that allow for: (1) belief formation, action formation and outcome measurement relating to (2) environmentally sustainable work practices and environmentally sustainable decisions on (3) a macro or micro level. For each element, we specify structural features, symbolic expressions, user abilities and goals required for the affordances to emerge. We also provide a set of testable propositions derived from our design theory and declare two principles of implementation.
Abstract:
Plywood manufacture includes two fundamental stages. The first is to peel or separate logs into veneer sheets of different thicknesses. The second is to assemble veneer sheets into finished plywood products. At the first stage a decision must be made as to the number of different veneer thicknesses to be peeled and what these thicknesses should be. At the second stage, choices must be made as to how these veneers will be assembled into final products to meet certain constraints while minimizing wood loss. These decisions present a fundamental management dilemma. Costs of peeling, drying, storage, handling, etc. can be reduced by decreasing the number of veneer thicknesses peeled. However, a reduced set of thickness options may make it infeasible to produce the variety of products demanded by the market or increase wood loss by requiring less efficient selection of thicknesses for assembly. In this paper the joint problem of veneer choice and plywood construction is formulated as a nonlinear integer programming problem. A relatively simple optimal solution procedure is developed that exploits special problem structure. This procedure is examined on data from a British Columbia plywood mill. Restricted to the existing set of veneer thicknesses and plywood designs used by that mill, the procedure generated a solution that reduced wood loss by 79 percent, thereby increasing net revenue by 6.86 percent. Additional experiments were performed that examined the consequences of changing the number of veneer thicknesses used. Extensions are discussed that permit the consideration of more than one wood species.
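A brute-force toy version of the joint problem can make the trade-off concrete. The thicknesses, targets, and ply limit below are hypothetical, and the paper's procedure exploits special problem structure rather than the exhaustive enumeration used here:

```python
from itertools import combinations, combinations_with_replacement

def best_assembly(target, thicknesses, max_plies=7):
    """Least-waste way to stack veneers to at least `target` thickness."""
    best = None
    for n in range(1, max_plies + 1):
        for combo in combinations_with_replacement(thicknesses, n):
            total = sum(combo)
            if total >= target:
                waste = total - target
                if best is None or waste < best[0]:
                    best = (waste, combo)
    return best

def choose_thickness_set(candidates, set_size, targets):
    """Pick the subset of veneer thicknesses minimising total wood waste."""
    best = None
    for subset in combinations(candidates, set_size):
        assemblies = [best_assembly(t, subset) for t in targets]
        if any(a is None for a in assemblies):
            continue  # some product cannot be built from this subset
        waste = sum(a[0] for a in assemblies)
        if best is None or waste < best[0]:
            best = (waste, subset)
    return best
```

Even this toy version exhibits the dilemma described above: shrinking `set_size` cuts peeling and handling cost but can make some product thicknesses infeasible or force wasteful assemblies.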
Abstract:
The current-biased single-electron transistor (SET), referred to as the CBS, is an integral part of almost all hybrid CMOS-SET circuits. In this paper, for the first time, the effects of energy quantization on the performance of CBS-based circuits are studied through analytical modeling and Monte Carlo simulations. It is demonstrated that energy quantization has no impact on the gain of the CBS characteristics, although it changes the output voltage levels and oscillation periodicity. The effects of energy quantization are further studied for two circuits that use the CBS: a negative differential resistance (NDR) circuit and a neuron cell. A new model for the conductance of the NDR characteristics is also formulated that includes the energy quantization term.