934 results for Experimental Methods.
Abstract:
Silicon-on-insulator (SOI) technologies have been developed for radiation-hardened military and space applications. The use of SOI has been motivated by the full dielectric isolation of individual transistors, which prevents latch-up. The sensitive region for charge collection in SOI technologies is much smaller than in bulk-silicon devices, potentially making SOI devices much less susceptible to single-event upset (SEU). In this study, 64 kB SOI SRAMs were exposed to several heavy ions (Cu, Br, I, and Kr). Experimental results show that the heavy-ion SEU threshold linear energy transfer (LET) in the 64 kB SOI SRAMs is about 71.8 MeV·cm²/mg. Based on the experimental results, single-event upset rates (SEUR) in space orbits were calculated; they are on the order of 10⁻¹³ upsets/(bit·day).
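As a rough sketch of how such orbit rates are typically estimated from a measured threshold: fold a Weibull fit of the per-bit cross-section against the orbit's differential LET spectrum. Only the 71.8 MeV·cm²/mg threshold below comes from the abstract; the Weibull shape, saturation cross-section, and flux spectrum are illustrative placeholders.

```python
import numpy as np

# Weibull fit of the per-bit SEU cross-section. Only the threshold LET L0
# (71.8 MeV·cm²/mg) comes from the abstract; W (MeV·cm²/mg), the shape
# exponent S, and SIGMA_SAT (cm²/bit) are hypothetical placeholders.
L0, W, S, SIGMA_SAT = 71.8, 20.0, 1.5, 1e-9

def sigma_seu(let):
    """Per-bit cross-section (cm²) as a function of LET (MeV·cm²/mg)."""
    let = np.asarray(let, dtype=float)
    return np.where(
        let > L0,
        SIGMA_SAT * (1.0 - np.exp(-(((let - L0) / W) ** S))),
        0.0,
    )

def integral_flux(let):
    """Hypothetical integral LET spectrum: particles/(cm²·day) above LET."""
    return 1e2 * let ** -2.5

# Upset rate per bit: R = ∫ sigma(L) * (-dPhi/dL) dL, evaluated numerically.
lets = np.linspace(L0, 120.0, 2000)
diff_flux = -np.gradient(integral_flux(lets), lets)
rate = float(np.sum(sigma_seu(lets) * diff_flux) * (lets[1] - lets[0]))
print(f"estimated SEU rate: {rate:.2e} upsets/(bit·day)")
```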
Abstract:
Concise methods are proposed to study proton radioactivity. The spectroscopic factor is obtained from relativistic mean field (RMF) theory combined with the BCS method (RMF+BCS). The assault frequency is estimated by a quantum mechanical method that takes the structure of the parent nucleus into account. The penetrability is calculated by the WKB approximation. No additional parameters are introduced. The spectroscopic factors extracted from experiment are compared with those calculated with RMF+BCS, and the agreement is good, implying that the present methods work quite well for proton radioactivity. Predictions are provided for some of the most likely proton emitters, which may be useful for future experiments.
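For readers unfamiliar with this class of calculation, the half-life is conventionally assembled from the three quantities the abstract names; the following is a standard statement of that factorization with notation assumed here, not taken from the paper:

```latex
% Decay constant = spectroscopic factor × assault frequency × WKB penetrability.
\lambda = S_p \,\nu\, P, \qquad T_{1/2} = \frac{\ln 2}{\lambda},
\qquad
P = \exp\!\left[-\frac{2}{\hbar}\int_{r_\mathrm{in}}^{r_\mathrm{out}}
      \sqrt{2\mu\,\bigl(V(r) - Q_p\bigr)}\;\mathrm{d}r\right]
```

where $\mu$ is the reduced mass of the proton-daughter system, $Q_p$ the decay energy, $V(r)$ the proton-nucleus potential, and $r_\mathrm{in}$, $r_\mathrm{out}$ the classical turning points.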
Abstract:
Aims. We determine branching fractions, cross sections and thermal rate constants for the dissociative recombination (DR) of CD3CDOD+ and CH3CH2OH2+ at the low relative kinetic energies encountered in the interstellar medium. Methods. The experiments were carried out by merging an ion and an electron beam at the heavy-ion storage ring CRYRING, Stockholm, Sweden. Results. Break-up of the CCO structure into three heavy fragments is not found for either ion. Instead, the CCO structure is retained in 23 ± 3% of the DR reactions of CD3CDOD+ and 7 ± 3% in the DR of CH3CH2OH2+, whereas rupture into two heavy fragments occurs in 77 ± 3% and 93 ± 3% of the DR events of the respective ions. The measured cross sections were fitted between 1 and 200 meV, yielding the following thermal rate constants and cross-section dependencies on the relative kinetic energy: σ(E_cm[eV]) = (1.7 ± 0.3) × 10⁻¹⁵ (E_cm[eV])^(−1.23 ± 0.02) cm² and k(T) = (1.9 ± 0.4) × 10⁻⁶ (T/300)^(−0.73 ± 0.02) cm³ s⁻¹ for CH3CH2OH2+, as well as k(T) = (1.1 ± 0.4) × 10⁻⁶ (T/300)^(−0.74 ± 0.05) cm³ s⁻¹ and σ(E_cm[eV]) = (9.2 ± 4) × 10⁻¹⁶ (E_cm[eV])^(−1.24 ± 0.05) cm² for CD3CDOD+.
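The fitted power laws above are straightforward to evaluate at temperatures of interest; a minimal sketch using only the rate-constant fits quoted in the abstract:

```python
# Thermal DR rate constants as fitted in the abstract (central values only).
def k_ethanol(temp_k):
    """CH3CH2OH2+ rate constant in cm³ s⁻¹."""
    return 1.9e-6 * (temp_k / 300.0) ** -0.73

def k_deuterated(temp_k):
    """CD3CDOD+ rate constant in cm³ s⁻¹."""
    return 1.1e-6 * (temp_k / 300.0) ** -0.74

# Evaluate at a cold-cloud, warm-cloud and room temperature.
for temp in (10, 50, 300):
    print(f"T = {temp:3d} K: {k_ethanol(temp):.2e}  {k_deuterated(temp):.2e}")
```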
Abstract:
Chitosan has shown potential as a non-viral gene carrier and as an adsorption enhancer for subsequent drug delivery to cells, effects that have been attributed to chitosan acting as a membrane perturbant. However, direct experimental evidence of this membrane-perturbing effect is currently lacking, especially for chitosans of low molecular weight (LMW). In this report, the interaction between a lipid (didodecyldimethylammonium bromide; DDAB) bilayer and chitosan with a molecular weight (MW) of 4200 Da was studied with cyclic voltammetry (CV), electrochemical impedance spectroscopy and surface plasmon resonance (SPR). A lipid bilayer was formed by fusion of oppositely charged lipid vesicles on a mercaptopropionic acid (MPA)-modified gold surface to mimic a cell membrane. The results showed that the LMW chitosan could disrupt the lipid bilayer, and the effect appeared to be concentration-dependent.
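A minimal sketch of the usual equivalent-circuit reading of such a supported-bilayer impedance experiment (solution resistance in series with the membrane resistance and capacitance in parallel); all numerical values are illustrative, not taken from the paper:

```python
import numpy as np

def z_bilayer(freq_hz, r_solution=100.0, r_membrane=1e6, c_membrane=1e-6):
    """Complex impedance of Rs + (Rm || Cm); units: Hz, ohm, ohm, farad."""
    omega = 2 * np.pi * np.asarray(freq_hz, dtype=float)
    z_membrane = r_membrane / (1 + 1j * omega * r_membrane * c_membrane)
    return r_solution + z_membrane

# Bilayer disruption by chitosan would appear as a drop in |Z| at low
# frequency, i.e. a lower effective membrane resistance Rm.
freqs = np.logspace(-1, 5, 7)
print(np.abs(z_bilayer(freqs)))
```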
Abstract:
Shrimp (Litopenaeus vannamei) with an initial body weight of 2.108 ± 0.036 g were sampled for specific growth rate (SGR) and body color measurements for 50 days under different light sources (incandescent lamp, IL; cool-white fluorescent lamp, FL; metal halide lamp, MHL; and a control without a lamp) and different illumination regimes (illumination only during the day, IOD, and illumination day and night, IDN). Body color of L. vannamei was measured as the free astaxanthin concentration (FAC) of the shrimp. The SGR, food intake (FI), feed conversion efficiency (FCE) and FAC of the shrimp showed significant differences among the experimental treatment groups (P < 0.05). Maximum and minimum SGR occurred under IOD by MHL and IDN by FL, respectively (a difference of 56.34%). The FI of the control group did not rank lowest among treatments, confirming that shrimp primarily use scent, not vision, to search for food. FI and FCE were both lowest under IDN by FL and growth was slow, so FL is not a preferred light source for shrimp culture. Under IOD by MHL, shrimp had the highest FCE and the third highest FI among treatment groups, ensuring rapid growth. FAC of the shrimp was about 3.31 ± 0.20 mg/kg. Under IOD by MHL and IDN by FL, FAC was significantly higher than in the other treatments (P < 0.05). In summary, when illuminated by MHL, L. vannamei showed not only vivid body color, due to a high astaxanthin concentration, but also rapid growth. Therefore, MHL is an appropriate indoor light source for super-intensive shrimp culture. SGR was significantly negatively correlated with FAC (P < 0.05); thus, when FAC increased, SGR did not always follow, suggesting that astaxanthin accumulation serves not to promote growth but to protect against intense light.
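SGR here presumably follows the standard aquaculture definition; a minimal sketch, in which the initial weight and trial duration come from the abstract and the final weight is an illustrative placeholder:

```python
import math

def specific_growth_rate(w_initial_g, w_final_g, days):
    """SGR in % body weight per day: 100 * (ln Wf - ln Wi) / t."""
    return 100.0 * (math.log(w_final_g) - math.log(w_initial_g)) / days

# 2.108 g and 50 days are from the abstract; 8.0 g is a hypothetical
# final weight for illustration.
print(specific_growth_rate(2.108, 8.0, 50))  # ≈ 2.67 %/day
```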
Abstract:
This paper reviews the fingerprint classification literature from a double perspective. We first deal with feature extraction methods, including the different models considered for singular point detection and for orientation map extraction. Then, we focus on the different learning models considered to build the classifiers used to label new fingerprints. Taxonomies and classifications for the feature extraction, singular point detection, orientation extraction and learning methods are presented. A critical view of the existing literature has led us to present a discussion of the existing methods and their drawbacks, such as the difficulty of reimplementing them, the lack of details, and major differences in their evaluation procedures. On this account, an experimental analysis of the most relevant methods is carried out in the second part of this paper, and a new method based on their combination is presented.
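As an illustration of the orientation-map extraction this review covers, the classical block-wise gradient approach can be sketched as follows; this is a generic textbook version, not a reimplementation of any surveyed paper:

```python
import numpy as np

def orientation_map(img, block=16):
    """Block-wise least-squares ridge orientation from image gradients."""
    gy, gx = np.gradient(img.astype(float))
    h, w = img.shape
    theta = np.zeros((h // block, w // block))
    for i in range(theta.shape[0]):
        for j in range(theta.shape[1]):
            sx = gx[i * block:(i + 1) * block, j * block:(j + 1) * block]
            sy = gy[i * block:(i + 1) * block, j * block:(j + 1) * block]
            # Doubled-angle averaging avoids the ±180° ambiguity of
            # averaging raw gradient directions.
            vx = np.sum(2 * sx * sy)
            vy = np.sum(sx ** 2 - sy ** 2)
            theta[i, j] = 0.5 * np.arctan2(vx, vy) + np.pi / 2
    return theta  # ridge orientation per block, in radians
```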
Abstract:
In the first part of this paper we reviewed the fingerprint classification literature from two different perspectives: feature extraction and classifier learning. Aiming to answer the question of which of the reviewed methods would perform best in a real implementation, we ended with a discussion that showed the difficulty of answering this question: no previous comparison exists in the literature, and comparisons among papers are made with different experimental frameworks. Moreover, published methods are difficult to implement because of the lack of detail in their descriptions and parameters and the fact that no source code is shared. For this reason, in this paper we carry out a deep experimental study following the proposed double perspective. To do so, we have carefully implemented some of the most relevant feature extraction methods according to the explanations found in the corresponding papers, and we have tested their performance with different classifiers, including the specific proposals made by their authors. Our aim is to develop an objective experimental study in a common framework, which has not been done before and which can serve as a baseline for future work on the topic. In this way, we test not only their quality but also their reusability by other researchers, and we can indicate which proposals could be considered for future developments. Furthermore, we show that combining different feature extraction models in an ensemble can lead to superior performance, significantly improving on the results obtained by the individual models.
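The ensemble combination argued for above can be prototyped with off-the-shelf tooling; a minimal sketch in which three generic classifiers stand in for models trained on different feature extractions:

```python
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Each estimator would, in practice, be fed features from a different
# extraction model (singular points, orientation maps, etc.).
ensemble = VotingClassifier(
    estimators=[
        ("svm", SVC(probability=True)),
        ("rf", RandomForestClassifier()),
        ("knn", KNeighborsClassifier()),
    ],
    voting="soft",  # average the predicted class probabilities
)
# Usage: ensemble.fit(X_train, y_train); ensemble.predict(X_test)
```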
Experimental quantification and modelling of attrition of infant formulae during pneumatic conveying
Abstract:
Infant formula is often produced as an agglomerated powder using a spray-drying process. Pneumatic conveying is commonly used to transport this product within a manufacturing plant. The transient mechanical loads imposed by this process cause some of the agglomerates to disintegrate, which has implications for key quality characteristics of the formula, including bulk density and wettability. This thesis used both experimental and modelling approaches to investigate this breakage during conveying. One set of conveying trials had the objective of establishing relationships between the geometry and operating conditions of the conveying system and the resulting changes in bulk properties of the infant formula upon conveying. A modular stainless steel pneumatic conveying rig was constructed for these trials. The mode of conveying and the air velocity had a statistically significant effect on bulk density at the 95% level, while the mode of conveying was the only factor that significantly influenced D[4,3] or wettability. A separate set of conveying experiments investigated the effect of infant formula composition, rather than the pneumatic conveying parameters, and also assessed the relationships between the mechanical responses of individual agglomerates of four infant formulae and their compositions. The bulk densities before conveying, and the forces and strains at failure of individual agglomerates, were related to the protein content. The force at failure and the stiffness of individual agglomerates were strongly correlated, and generally increased with increasing protein-to-fat ratio, while the strain at failure decreased. Two models of breakage were developed at different scales. The first was a detailed discrete element model of a single agglomerate, calibrated using a novel approach based on Taguchi methods, which was shown to have considerable advantages over the widely used basic parameter studies. The data obtained using this model compared well with experimental results for quasi-static uniaxial compression of individual agglomerates, and the model also gave adequate results for dynamic loading simulations. A probabilistic model of pneumatic conveying was also developed; it is suitable for predicting breakage in large populations of agglomerates and highly versatile: parts of the model can easily be substituted by researchers according to their specific requirements.
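A toy sketch of the population-level probabilistic breakage idea, in which each agglomerate survives each bend of the conveying line with a strength-dependent probability; all distributions and parameters are illustrative placeholders, not the thesis's calibrated values:

```python
import numpy as np

rng = np.random.default_rng(0)

n_agglomerates, n_bends = 100_000, 6
# Hypothetical lognormal spread of individual agglomerate strengths.
strength = rng.lognormal(mean=0.0, sigma=0.4, size=n_agglomerates)

def p_break(strength, impact=1.0):
    """Per-bend breakage probability; weaker agglomerates break more often."""
    return np.clip(impact / (impact + strength ** 2), 0.0, 1.0) * 0.3

# March the population through the line, bend by bend.
intact = np.ones(n_agglomerates, dtype=bool)
for _ in range(n_bends):
    broke = rng.random(n_agglomerates) < p_break(strength)
    intact &= ~broke

print(f"fraction surviving {n_bends} bends: {intact.mean():.2%}")
```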
Abstract:
This thesis investigated the relationship of explicit (self-report), implicit (IAT) and physiological variables to the placebo effect. The thesis consisted of three main parts. The first collected background data and developed models for two constructs (Optimism and Mindfulness) associated with the placebo effect and implicit attitudes, respectively. The second part consisted of developing an explicit measure of treatment expectancies and two IATs, one for Optimism and the other for Treatment Credibility. The final portion of the thesis was an experimental study (N=111) that tested these new measures in a sample of healthy volunteers. The primary hypothesis of the thesis, that there would be a relationship between the placebo effect and implicit measures, was not supported. Major findings include an effect of semantic priming on placebo response mediated by condition (Deceptive versus Open Placebo), an unexpected negative relationship between Optimism and self-reported Health, and a physiological relationship between pain ratings and GSR data, which was also mediated by condition in the experiment. A complete record of the code and data for this thesis can be found at https://github.com/richiemorrisroe/Thesis
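For context, IAT effects are conventionally summarized with a D score; a simplified sketch (the error-penalty steps of the full scoring algorithm are omitted, and this is not the thesis's actual code, which lives in the linked repository):

```python
import numpy as np

def iat_d_score(rt_compatible_ms, rt_incompatible_ms):
    """Simplified IAT D score: latency difference over the pooled SD."""
    a = np.asarray(rt_compatible_ms, dtype=float)
    b = np.asarray(rt_incompatible_ms, dtype=float)
    # Drop implausibly long trials, as is standard practice.
    a, b = a[a < 10_000], b[b < 10_000]
    pooled_sd = np.std(np.concatenate([a, b]), ddof=1)
    return (b.mean() - a.mean()) / pooled_sd
```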
Abstract:
While genome-wide gene expression data are generated at an increasing rate, the repertoire of approaches for pattern discovery in these data is still limited. Identifying subtle patterns of interest in large amounts of data (tens of thousands of profiles) with a certain level of noise remains a challenge. A microarray time series was recently generated to study the transcriptional program of the mouse segmentation clock, a biological oscillator associated with the periodic formation of the segments of the body axis. A method related to Fourier analysis, the Lomb-Scargle periodogram, was used to detect periodic profiles in the dataset, leading to the identification of a novel set of cyclic genes associated with the segmentation clock. Here, we applied four distinct mathematical methods to the same microarray time series dataset to identify significant patterns in gene expression profiles. These methods, called phase consistency, address reduction, the cyclohedron test and stable persistence, are based on different conceptual frameworks that are either hypothesis- or data-driven. Some of the methods, unlike Fourier transforms, do not depend on the assumption that the pattern of interest is periodic. Remarkably, these methods blindly identified the expression profiles of known cyclic genes as the most significant patterns in the dataset. Many candidate genes predicted by more than one approach appeared to be true positive cyclic genes and will be of particular interest for future research. In addition, these methods predicted novel candidate cyclic genes that were consistent with previous biological knowledge and experimental validation in mouse embryos. Our results demonstrate the utility of these novel pattern detection strategies, notably for the detection of periodic profiles, and suggest that combining several distinct mathematical approaches to analyze microarray datasets is a valuable strategy for identifying genes that exhibit novel, interesting transcriptional patterns.
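The Lomb-Scargle step described above is available off the shelf; a minimal sketch on a synthetic profile, where the sampling grid and the roughly two-hour clock period are illustrative rather than the actual segmentation-clock data:

```python
import numpy as np
from scipy.signal import lombscargle

# One synthetic "expression profile": a 2-hour oscillation plus noise.
rng = np.random.default_rng(1)
t = np.linspace(0, 4, 25)  # sampling times, hours
y = np.sin(2 * np.pi * t / 2.0) + 0.3 * rng.normal(size=t.size)

# Scan candidate periods and locate the periodogram peak.
periods = np.linspace(0.5, 4.0, 200)          # hours
angular_freqs = 2 * np.pi / periods           # lombscargle wants rad/unit
power = lombscargle(t, y - y.mean(), angular_freqs, normalize=True)
print(f"best period ≈ {periods[np.argmax(power)]:.2f} h")
```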
Abstract:
Gemstone Team SHINE (Students Helping to Implement Natural Energy)
Abstract:
With increasing recognition of the roles RNA molecules and RNA/protein complexes play in an unexpected variety of biological processes, understanding of RNA structure-function relationships is of high current importance. To make clean biological interpretations from three-dimensional structures, it is imperative to have high-quality, accurate RNA crystal structures available, and the community has thoroughly embraced that goal. However, due to the many degrees of freedom inherent in RNA structure (especially for the backbone), it is a significant challenge to build accurate experimental models of RNA structures. This chapter describes the tools and techniques our research group and our collaborators have developed over the years to help RNA structural biologists both evaluate and achieve better accuracy. Expert analysis of large, high-resolution, quality-conscious RNA datasets provides the fundamental information that enables automated methods for robust and efficient error diagnosis in validating RNA structures at all resolutions. Work toward the even more crucial goal of correcting the diagnosed outliers has steadily developed into highly effective, computationally based techniques. Automation enables solving complex issues in large RNA structures, but it cannot circumvent the need for thoughtful examination of local details, and so we also provide some guidance for interpreting and acting on the results of current structure validation for RNA.
Abstract:
For optimal solutions in health care, decision makers inevitably must evaluate trade-offs, which calls for multi-attribute valuation methods. Researchers have proposed best-worst scaling (BWS) methods, which extract information from respondents by asking them to identify the best and worst items in each choice set. While a companion paper describes the different types of BWS, their applications and their advantages and downsides, this contribution expounds their relationship with microeconomic theory, which also has implications for statistical inference. The article is devoted to the microeconomic foundations of preference measurement, also addressing issues such as scale invariance and scale heterogeneity. Furthermore, the paper discusses the basics of preference measurement using rating, ranking and stated choice data in light of these foundations, gives an introduction to the use of stated choice data, and juxtaposes BWS with its microeconomic foundations.
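A common microeconomic reading of BWS data is the maxdiff (sequential best-worst) logit; one standard form of its choice probability, stated here for orientation rather than taken verbatim from the paper:

```latex
% v_k are latent item utilities and \mu is the scale parameter whose
% invariance and heterogeneity the paper discusses.
P(i~\text{best},\, j~\text{worst} \mid C)
  = \frac{\exp\{\mu\,(v_i - v_j)\}}
         {\sum_{k \in C}\,\sum_{\substack{l \in C \\ l \neq k}}
            \exp\{\mu\,(v_k - v_l)\}}
```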
Abstract:
BACKGROUND & AIMS: Few data are available on the potential role of T lymphocytes in experimental acute pancreatitis. The aim of this study was to characterize their role in the inflammatory cascade of acute pancreatitis. METHODS: To address this issue, acute pancreatitis was induced by repeated injections of cerulein in nude mice and in mice depleted in vivo of CD4+ or CD8+ T cells. The role of T-lymphocyte costimulatory pathways was evaluated using anti-CD40 ligand or anti-B7-1 and -B7-2 monoclonal blocking antibodies. The role of Fas-Fas ligand was explored using Fas ligand-targeted mutant (generalized lymphoproliferative disease) mice. Severity of acute pancreatitis was assessed by serum hydrolase levels and histology. Intrapancreatic interleukin 12, interferon gamma, Fas ligand, and CD40 ligand messenger RNA were detected by reverse-transcription polymerase chain reaction. Intrapancreatic T lymphocytes were identified by immunohistochemistry. RESULTS: In control mice, T cells, most of them CD4+ T cells, are present in the pancreas and are recruited during acute pancreatitis. In nude mice, histological lesions and serum hydrolase levels are significantly decreased. T-lymphocyte transfer into nude mice partially restores the severity of acute pancreatitis and intrapancreatic interferon gamma, interleukin 12, and Fas ligand gene transcription. The severity of pancreatitis is also reduced by in vivo CD4+ (but not CD8+) T-cell depletion and in Fas ligand-targeted mutant mice. Blocking the CD40-CD40 ligand or B7-CD28 costimulatory pathways has no effect on the severity of pancreatitis. CONCLUSIONS: T lymphocytes, particularly CD4+ T cells, play a pivotal role in the development of tissue injury during acute experimental pancreatitis in mice.
Abstract:
The dynamic process of melting different materials in a cold crucible is being studied experimentally in parallel with numerical modelling work. The numerical simulation uses a variety of complementary models: finite volume, integral equation and pseudo-spectral methods, combined to achieve an accurate description of the dynamic melting process. Results show the temperature history of the melting process, with a comparison of the experimental and computed heat losses in the various parts of the equipment. Visual observations of the free surface are compared with the numerically predicted surface shapes.
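The pseudo-spectral component named above can be illustrated in miniature: advance a 1-D periodic diffusion problem by treating spatial derivatives in Fourier space. This toy kernel only stands in for the coupled flow/heat/electromagnetic solver and does not attempt to reproduce it:

```python
import numpy as np

# Periodic 1-D heat equation u_t = alpha * u_xx, solved pseudo-spectrally.
n, length, alpha, dt = 128, 2 * np.pi, 0.1, 1e-3
x = np.linspace(0, length, n, endpoint=False)
u = np.exp(-10 * (x - np.pi) ** 2)              # initial temperature bump
k = np.fft.fftfreq(n, d=length / n) * 2 * np.pi  # angular wavenumbers

for _ in range(1000):
    u_hat = np.fft.fft(u)
    # Diffusion is diagonal in Fourier space: exact decay per mode.
    u = np.real(np.fft.ifft(u_hat * np.exp(-alpha * k ** 2 * dt)))

print(f"peak temperature after diffusion: {u.max():.3f}")
```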