877 results for Analysis tools
Knowledge Exchange study: How Research Tools are of Value to Research: Use Cases and Recommendations
Abstract:
Research tools that are freely available and accessible via the Internet form an emergent field in the worldwide research infrastructure. Clearly, research tools are of increasing value to researchers in their research activities. Knowledge Exchange recently commissioned a project to explore use case studies that show research tools' potential and relevance for the present research landscape. Makers of successful research tools were asked questions such as: How are these research tools developed? What are their possibilities? How many researchers use them? What does this new phenomenon mean for the research infrastructure? In addition to the use cases, the authors offer observations and recommendations to contribute to the effective development of a research infrastructure that can optimally benefit from research tools. The use cases are:
• Averroes Goes Digital: Transformation, Translation, Transmission and Edition
• BRIDGE: Tools for Media Studies Researchers
• Multiple Researchers, Single Platform: A Virtual Tool for the 21st Century
• The Fabric of Life
• Games with A Purpose: How Games Are Turning Image Tagging into Child's Play
• Elmer: Modelling a Future
• Molecular Modelling With SOMA2
• An Online Renaissance for Music: Making Early Modern Music Readable
• Radio Recordings for Research: How A Million Hours of Danish Broadcasts Were Made Accessible
• Salt Rot: A Central Space for Essential Research
• Cosmos: Opening Up Social Media for Social Science
A brief analysis by the authors can be found in:
• Some Observations Based on the Case Studies of Research Tools
Abstract:
Based on the theoretical tools of complex networks, this work provides a basic descriptive study of a synonyms dictionary, the Spanish Open Thesaurus, represented as a graph. We study the main structural measures of the network compared with those of a random graph. Numerical results show that Open Thesaurus is a graph whose topological properties approximate a scale-free network, but it seems not to exhibit the small-world property because of its sparse structure. We also found that the words of highest betweenness centrality are terms that suggest the vocabulary of psychoanalysis: placer (pleasure), ayudante (in the sense of assistant or worker), and regular (to regulate).
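As a hedged illustration of the kind of graph analysis described in this abstract, the sketch below loads a synonym graph with networkx, compares its clustering against a size-matched random graph, inspects the degree distribution, and ranks words by approximate betweenness centrality. The file name and tab-separated edge format are assumptions; Open Thesaurus would first have to be parsed into word-synonym pairs.

```python
# Minimal sketch of the structural analysis described above, using networkx.
# The file name and edge format are hypothetical.
import networkx as nx

def load_synonym_edges(path):
    """Read one 'word<TAB>synonym' pair per line (hypothetical format)."""
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            a, b = line.rstrip("\n").split("\t")
            yield a, b

G = nx.Graph()
G.add_edges_from(load_synonym_edges("open_thesaurus_edges.tsv"))

# Basic structural measures compared against a random (Erdos-Renyi) graph
# with the same number of nodes and edges.
n, m = G.number_of_nodes(), G.number_of_edges()
R = nx.gnm_random_graph(n, m, seed=0)
print("clustering (thesaurus vs. random):",
      nx.average_clustering(G), nx.average_clustering(R))

# Degree distribution: a heavy tail suggests a scale-free topology.
degrees = sorted((d for _, d in G.degree()), reverse=True)
print("max/median degree:", degrees[0], degrees[len(degrees) // 2])

# Betweenness centrality (approximated with k pivot nodes for speed);
# the words with the highest values are the 'hub' terms discussed above.
bc = nx.betweenness_centrality(G, k=min(500, n), seed=0)
top = sorted(bc, key=bc.get, reverse=True)[:10]
print("highest-betweenness words:", top)
```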
Abstract:
The brain is perhaps the most complex system to have ever been subjected to rigorous scientific investigation. The scale is staggering: over 10^11 neurons, each making an average of 10^3 synapses, with computation occurring on scales ranging from a single dendritic spine to an entire cortical area. Slowly, we are beginning to acquire experimental tools that can gather the massive amounts of data needed to characterize this system. However, to understand and interpret these data will also require substantial strides in inferential and statistical techniques. This dissertation attempts to meet this need, extending and applying the modern tools of latent variable modeling to problems in neural data analysis.
It is divided into two parts. The first begins with an exposition of the general techniques of latent variable modeling. A new, extremely general optimization algorithm, called Relaxation Expectation Maximization (REM), is proposed that may be used to learn the optimal parameter values of arbitrary latent variable models. This algorithm appears to alleviate the common problem of convergence to local, sub-optimal likelihood maxima. REM leads to a natural framework for model size selection; in combination with standard model selection techniques, the quality of fits may be further improved while the appropriate model size is automatically and efficiently determined. Next, a new latent variable model, the mixture of sparse hidden Markov models, is introduced, and approximate inference and learning algorithms are derived for it. This model is applied in the second part of the thesis.
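The REM algorithm itself is not reproduced here; as background for the general latent-variable/EM framework the abstract builds on, the following is a minimal sketch of plain expectation-maximization for a two-component one-dimensional Gaussian mixture on synthetic data. It is standard EM, not REM, and all data and parameter choices are illustrative.

```python
# Plain expectation-maximization for a two-component 1-D Gaussian mixture,
# shown only to illustrate the generic latent-variable framework that REM
# extends; this is NOT the REM algorithm from the dissertation.
import numpy as np

rng = np.random.default_rng(0)
# Synthetic data: two overlapping clusters (illustrative only).
x = np.concatenate([rng.normal(-2.0, 0.7, 300), rng.normal(1.5, 1.0, 500)])

# Initialize mixture weights, means, and variances.
w = np.array([0.5, 0.5])
mu = np.array([-1.0, 1.0])
var = np.array([1.0, 1.0])

def log_gauss(x, mu, var):
    """Componentwise log N(x | mu, var), shape (N, K)."""
    return -0.5 * (np.log(2 * np.pi * var) + (x[:, None] - mu) ** 2 / var)

for _ in range(100):
    # E-step: posterior responsibility of each component for each point.
    log_r = np.log(w) + log_gauss(x, mu, var)
    log_r -= log_r.max(axis=1, keepdims=True)
    r = np.exp(log_r)
    r /= r.sum(axis=1, keepdims=True)

    # M-step: re-estimate parameters from the responsibilities.
    nk = r.sum(axis=0)
    w = nk / len(x)
    mu = (r * x[:, None]).sum(axis=0) / nk
    var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk

print("weights:", w, "means:", mu, "variances:", var)
```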
The second part brings the technology of part I to bear on two important problems in experimental neuroscience. The first is known as spike sorting: the problem of separating the spikes from different neurons embedded within an extracellular recording. The dissertation offers the first thorough statistical analysis of this problem, which then yields the first powerful probabilistic solution. The second problem addressed is that of characterizing the distribution of spike trains recorded from the same neuron under identical experimental conditions. A latent variable model is proposed. Inference and learning in this model lead to new principled algorithms for smoothing and clustering of spike data.
Abstract:
Vortex rings constitute the main structure in the wakes of a wide class of swimming and flying animals, as well as in cardiac flows and in the jets generated by some mosses and fungi. However, there is a physical limit, determined by an energy maximization principle called the Kelvin-Benjamin principle, to the size that axisymmetric vortex rings can achieve. The existence of this limit is known to lead to the separation of a growing vortex ring from the shear layer feeding it, a process known as 'vortex pinch-off' and characterized by the dimensionless vortex formation number. The goal of this thesis is to improve our understanding of vortex pinch-off as it relates to biological propulsion, and to provide future researchers with tools to assist in identifying and predicting pinch-off in biological flows.
To this end, we introduce a method for identifying pinch-off in starting jets using the Lagrangian coherent structures in the flow, and apply this criterion to an experimentally generated starting jet. Since most naturally occurring vortex rings are not circular, we extend the definition of the vortex formation number to include non-axisymmetric vortex rings, and find that the formation number for moderately non-axisymmetric vortices is similar to that of circular vortex rings. This suggests that naturally occurring vortex rings may be modeled as axisymmetric vortex rings. Therefore, we consider the perturbation response of the Norbury family of axisymmetric vortex rings. This family is chosen to model vortex rings of increasing thickness and circulation, and their response to prolate shape perturbations is simulated using contour dynamics. Finally, we use contour dynamics to simulate the response of more realistic vortex ring models, constructed from experimental data using nested contours, to perturbations that more closely resemble those encountered by forming vortices. In both families of models, a change in response analogous to pinch-off is found as members of the family with progressively thicker cores are considered. We posit that this analogy may be exploited to understand and predict pinch-off in complex biological flows, where current methods are not applicable in practice and criteria based on the properties of vortex rings alone are necessary.
Abstract:
Escherichia coli is one of the best-studied living organisms and a model system for many biophysical investigations. Despite countless discoveries about the details of its physiology, we still lack a holistic understanding of how these bacteria react to changes in their environment. One of the most important examples is their response to osmotic shock. One of the mechanistic elements protecting cell integrity upon exposure to sudden changes of osmolarity is the presence of mechanosensitive channels in the cell membrane. These channels are believed to act as tension release valves protecting the inner membrane from rupturing. This thesis presents an experimental study of various aspects of mechanosensation in bacteria. We examine cell survival after osmotic shock and how the number of MscL (Mechanosensitive channel of Large conductance) channels expressed in a cell influences its physiology. We developed an assay that allows real-time monitoring of the rate of the osmotic challenge and direct observation of cell morphology during and after the exposure to the osmolarity change. The work described in this thesis introduces tools that can be used to quantitatively determine, at the single-cell level, the number of expressed proteins (in this case MscL channels) as a function of, e.g., growth conditions. The improvement in our quantitative description of mechanosensation in bacteria allows us to address many previously unsolved problems, such as the minimal number of channels needed for survival, and can begin to paint a clearer picture of why there are so many distinct types of mechanosensitive channels.
Abstract:
Signal processing techniques play important roles in the design of digital communication systems. These include information manipulation, transmitter signal processing, channel estimation, channel equalization, and receiver signal processing. By interacting with communication theory and system implementation technologies, signal processing specialists develop efficient schemes for various communication problems by wisely exploiting mathematical tools such as analysis, probability theory, matrix theory, optimization theory, and many others. In recent years, researchers realized that multiple-input multiple-output (MIMO) channel models are applicable to a wide range of different physical communication channels. Using elegant matrix-vector notation, many MIMO transceiver (including the precoder and equalizer) design problems can be solved by matrix and optimization theory. Furthermore, researchers showed that majorization theory and matrix decompositions, such as the singular value decomposition (SVD), geometric mean decomposition (GMD), and generalized triangular decomposition (GTD), provide unified frameworks for solving many of the point-to-point MIMO transceiver design problems.
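For context on the decomposition-based designs mentioned above, the sketch below shows the standard textbook SVD transceiver for a flat MIMO channel with full CSI: precoding with the right singular vectors and equalizing with the left singular vectors turns the channel into parallel scalar subchannels whose gains are the singular values. This is only the classical construction, not the GGMD/GMD/GTD designs developed in the thesis, and the channel realization, constellation, and SNR are arbitrary illustrative choices.

```python
# Textbook SVD transceiver for a flat MIMO channel with CSI at both ends:
# precoding with V and equalizing with U^H turns y = Hx + n into parallel
# scalar subchannels with gains equal to the singular values.
import numpy as np

rng = np.random.default_rng(1)
nt, nr = 4, 4                                    # transmit / receive antennas
H = (rng.normal(size=(nr, nt)) + 1j * rng.normal(size=(nr, nt))) / np.sqrt(2)

U, s, Vh = np.linalg.svd(H)                      # H = U diag(s) Vh

# QPSK symbols, one per eigen-subchannel, unit power per stream.
bits = rng.integers(0, 2, size=(2, nt))
x = ((2 * bits[0] - 1) + 1j * (2 * bits[1] - 1)) / np.sqrt(2)

snr_db = 20
noise_var = 10 ** (-snr_db / 10)
n = np.sqrt(noise_var / 2) * (rng.normal(size=nr) + 1j * rng.normal(size=nr))

# Precoder F = V, receiver G = U^H.
y = H @ (Vh.conj().T @ x) + n
z = U.conj().T @ y                               # z_k = s_k * x_k + noise

print("subchannel gains (singular values):", np.round(s, 3))
print("per-subchannel estimates:", np.round(z / s, 3))
print("transmitted symbols:     ", np.round(x, 3))
```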
In this thesis, we consider transceiver design problems for linear time-invariant (LTI) flat MIMO channels, linear time-varying narrowband MIMO channels, flat MIMO broadcast channels, and doubly selective scalar channels. Additionally, the channel estimation problem is also considered. The main contributions of this dissertation are the development of new matrix decompositions, and the use of these matrix decompositions and majorization theory in practical transmit-receive scheme designs for transceiver optimization problems. Elegant solutions are obtained, novel transceiver structures are developed, ingenious algorithms are proposed, and performance analyses are derived.
The first part of the thesis focuses on transceiver design for LTI flat MIMO channels. We propose a novel matrix decomposition, the generalized geometric mean decomposition (GGMD), which decomposes a complex matrix into a product of several sets of semi-unitary matrices and upper triangular matrices in an iterative manner. The complexity of GGMD is always less than or equal to that of the geometric mean decomposition (GMD), and the optimal GGMD parameters that yield the minimal complexity are derived. Based on the channel state information (CSI) at both the transmitter (CSIT) and receiver (CSIR), GGMD is used to design a butterfly-structured decision feedback equalizer (DFE) MIMO transceiver which achieves the minimum average mean square error (MSE) under the total transmit power constraint. A novel iterative detection algorithm for the corresponding receiver is also proposed. For application to cyclic prefix (CP) systems, in which the SVD of the equivalent channel matrix can be easily computed, the proposed GGMD transceiver has a K/log_2(K)-fold complexity advantage over the GMD transceiver, where K is the number of data symbols per data block and is a power of 2. The performance analysis shows that the GGMD DFE transceiver can convert a MIMO channel into a set of parallel subchannels with the same bias and signal-to-interference-plus-noise ratios (SINRs). Hence, the average bit error rate (BER) is automatically minimized without the need for bit allocation. Moreover, the proposed transceiver can achieve the channel capacity simply by applying independent scalar Gaussian codes of the same rate on the subchannels.
In the second part of the thesis, we focus on MIMO transceiver design for slowly time-varying MIMO channels under the zero-forcing or MMSE criterion. Even though the GGMD/GMD DFE transceivers work for slowly time-varying MIMO channels by exploiting the instantaneous CSI at both ends, their performance is by no means optimal, since the temporal diversity of the time-varying channels is not exploited. Based on the GTD, we develop the space-time GTD (ST-GTD) for the decomposition of linear time-varying flat MIMO channels. Under the assumption that CSIT, CSIR, and channel prediction are available, we use the proposed ST-GTD to develop space-time geometric mean decomposition (ST-GMD) DFE transceivers under the zero-forcing or MMSE criterion. Under perfect channel prediction, the new system minimizes both the average MSE at the detector in each space-time (ST) block (which consists of several coherence blocks) and the average per-ST-block BER in the moderately high SNR region. Moreover, the ST-GMD DFE transceiver designed under the MMSE criterion maximizes the Gaussian mutual information over the equivalent channel seen by each ST block. In general, the newly proposed transceivers perform better than the GGMD-based systems, since the superimposed temporal precoder is able to exploit the temporal diversity of time-varying channels. For practical applications, a novel ST-GTD based system which does not require channel prediction, but shares the same asymptotic BER performance as the ST-GMD DFE transceiver, is also proposed.
The third part of the thesis considers two quality of service (QoS) transceiver design problems for flat MIMO broadcast channels. The first is the power minimization problem (min-power) with a total bitrate constraint and per-stream BER constraints. The second is the rate maximization problem (max-rate) with a total transmit power constraint and per-stream BER constraints. Exploiting a particular class of joint triangularization (JT), we are able to jointly optimize the bit allocation and the broadcast DFE transceiver for the min-power and max-rate problems. The resulting optimal designs are called the minimum power JT broadcast DFE transceiver (MPJT) and the maximum rate JT broadcast DFE transceiver (MRJT), respectively. In addition to the optimal designs, two suboptimal designs based on the QR decomposition are proposed; they are realizable for an arbitrary number of users.
Finally, we investigate the design of a discrete Fourier transform (DFT) modulated filterbank transceiver (DFT-FBT) over linear time-varying (LTV) scalar channels. For both the case of known LTV channels and that of unknown wide-sense stationary uncorrelated scattering (WSSUS) statistical channels, we show how to optimize the transmitting and receiving prototypes of a DFT-FBT such that the SINR at the receiver is maximized. We also propose a novel pilot-aided subspace channel estimation algorithm for orthogonal frequency division multiplexing (OFDM) systems with quasi-stationary multi-path Rayleigh fading channels. Using the concept of a difference co-array, the new technique can construct M^2 co-pilots from M physical pilot tones with alternating pilot placement. Subspace methods, such as MUSIC and ESPRIT, can be used to estimate the multipath delays, and the number of identifiable paths is, in theory, up to O(M^2). With the delay information, an MMSE estimator for the frequency response is derived. It is shown through simulations that the proposed method outperforms the conventional subspace channel estimator when the number of multipaths is greater than or equal to the number of physical pilots minus one.
Abstract:
The Laser Interferometer Gravitational-Wave Observatory (LIGO) consists of two complex large-scale laser interferometers designed for the direct detection of gravitational waves from distant astrophysical sources in the frequency range 10 Hz to 5 kHz. Direct detection of these space-time ripples will support Einstein's general theory of relativity and provide invaluable information and new insight into the physics of the Universe.
The initial phase of LIGO started in 2002, and since then data have been collected during six science runs. Instrument sensitivity improved from run to run thanks to the efforts of the commissioning team. Initial LIGO reached its design sensitivity during the last science run, which ended in October 2010.
In parallel with commissioning and data analysis of the initial detector, the LIGO group worked on research and development of the next-generation detectors. A major instrument upgrade from initial to Advanced LIGO started in 2010 and lasted until 2014.
This thesis describes the results of commissioning work done at the LIGO Livingston site from 2013 to 2015, in parallel with and after the installation of the instrument. It also discusses new techniques and tools developed at the 40 m prototype, including adaptive filtering, estimation of quantization noise in digital filters, and the design of isolation kits for ground seismometers.
The first part of this thesis is devoted to methods for bringing the interferometer to the linear regime, where data collection becomes possible. The states of the longitudinal and angular controls of the interferometer degrees of freedom during the lock acquisition process and in the low-noise configuration are discussed in detail.
Once the interferometer is locked and transitioned to the low-noise regime, the instrument produces astrophysical data that must be calibrated to units of meters or strain. The second part of this thesis describes the online calibration technique set up at both observatories to monitor the quality of the collected data in real time. A sensitivity analysis was performed to understand and eliminate the instrument's noise sources.
The coupling of noise sources to the gravitational wave channel can be reduced if robust feedforward and optimal feedback control loops are implemented. The last part of this thesis describes static and adaptive feedforward noise cancellation techniques applied to the Advanced LIGO interferometers and tested at the 40 m prototype. Applications of optimal time-domain feedback control techniques and estimators to aLIGO control loops are also discussed.
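As a generic illustration of adaptive feedforward cancellation (not the actual aLIGO or 40 m implementation, which uses considerably more sophisticated filtering), the sketch below runs an LMS adaptive FIR filter that learns the coupling from a simulated witness sensor to a target channel and subtracts the predicted disturbance. The coupling path, noise levels, and step size are made-up values.

```python
# Generic LMS adaptive feedforward cancellation: a witness channel that
# measures a disturbance is filtered by adaptive FIR taps and subtracted
# from the target channel. Illustration only, not the aLIGO/40 m scheme.
import numpy as np

rng = np.random.default_rng(0)
N, taps = 20000, 32

witness = rng.normal(size=N)                       # e.g. a seismometer signal
coupling = np.array([0.5, -0.3, 0.2, 0.1])         # unknown coupling path
target = np.convolve(witness, coupling, mode="full")[:N]
target += 0.05 * rng.normal(size=N)                # uncorrelated sensor noise

h = np.zeros(taps)                                 # adaptive FIR coefficients
mu = 0.01                                          # LMS step size
err = np.zeros(N)

for n in range(taps, N):
    x = witness[n - taps + 1:n + 1][::-1]          # most recent samples first
    y = h @ x                                      # feedforward correction
    err[n] = target[n] - y                         # residual after subtraction
    h += 2 * mu * err[n] * x                       # LMS coefficient update

print("residual RMS before:", np.std(target[taps:]))
print("residual RMS after :", np.std(err[-5000:]))
```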
Commissioning work is still ongoing at the sites. The first science run of Advanced LIGO is planned for September 2015 and will last three to four months. This run will be followed by a set of small instrument upgrades installed over a time scale of a few months. The second science run will start in spring 2016 and last about six months. Since the current sensitivity of Advanced LIGO is already more than a factor of three higher than that of the initial detectors, and keeps improving on a monthly basis, the upcoming science runs have a good chance of making the first direct detection of gravitational waves.
Abstract:
We explore the use of the Radon-Wigner transform, which is associated with the fractional Fourier transform of the pupil function, for determining the point spread function (PSF) of an incoherent defocused optical system. We then introduce these phase-space tools to analyse a wavefront coding imaging system. It is shown that the shape of the PSF for such a system is highly invariant to the defocus-related aberrations, except for a lateral shift. The optical transfer function of this system is also investigated briefly from a new understanding of the ambiguity function.
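The following Fourier-optics sketch illustrates the quantity under study: the incoherent PSF obtained as the squared magnitude of the Fourier transform of the (defocused, optionally cubic-phase-coded) pupil function. It does not implement the Radon-Wigner or fractional Fourier analysis of the paper; the grid size, defocus values, and cubic-mask strength are arbitrary illustrative choices.

```python
# Fourier-optics sketch: the incoherent PSF is |FFT(pupil)|^2, where defocus
# adds a quadratic phase and a wavefront-coding element adds a cubic phase.
# Illustrates the PSFs being analyzed, not the Radon-Wigner method itself.
import numpy as np

N = 512
u = np.linspace(-1, 1, N)
X, Y = np.meshgrid(u, u)
aperture = (X**2 + Y**2 <= 1.0).astype(float)      # circular pupil

def psf(defocus_waves, cubic_waves=0.0):
    """Incoherent PSF for given defocus and cubic (wavefront-coding) phase."""
    phase = 2 * np.pi * (defocus_waves * (X**2 + Y**2)
                         + cubic_waves * (X**3 + Y**3))
    pupil = aperture * np.exp(1j * phase)
    field = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(pupil)))
    p = np.abs(field) ** 2
    return p / p.sum()

# With a strong cubic mask the PSF changes far less with defocus than the
# uncoded PSF does, which is the invariance property mentioned above.
for w20 in (0.0, 1.0, 2.0):
    plain = psf(w20, cubic_waves=0.0).max()
    coded = psf(w20, cubic_waves=10.0).max()
    print(f"defocus {w20:3.1f} waves: peak(no mask)={plain:.2e}, "
          f"peak(cubic mask)={coded:.2e}")
```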
Abstract:
STEEL, the nonlinear large-displacement analysis software created at Caltech, is currently used by a large number of researchers at Caltech. However, due to its complexity and lack of visualization tools (such as pre- and post-processing capabilities), rapid creation and analysis of models using this software was difficult. SteelConverter was created as a means to facilitate model creation through the use of the industry-standard finite element solver ETABS. This software allows users to create models in ETABS and intelligently convert model information such as geometry, loading, releases, and fixity into a format that STEEL understands. Models that would take several days to create and verify now take several hours or less. The productivity of the researcher, as well as the level of confidence in the model being analyzed, is greatly increased.
It has always been a major goal of Caltech to spread the knowledge created here to other universities. However, due to the complexity of STEEL it was difficult for researchers or engineers from other universities to conduct analyses. While SteelConverter did help researchers at Caltech improve their research, sending SteelConverter and its documentation to other universities was less than ideal. Issues of version control, individual computer requirements, and the difficulty of releasing updates made a more centralized solution preferred. This is where the idea for Caltech VirtualShaker was born. Through the creation of a centralized website where users could log in, submit, analyze, and process models in the cloud, all of the major concerns associated with the utilization of SteelConverter were eliminated. Caltech VirtualShaker allows users to create profiles where defaults associated with their most commonly run models are saved, and allows them to submit multiple jobs to an online virtual server to be analyzed and post-processed. The creation of this website not only allowed for more rapid distribution of this tool, but also created a means for engineers and researchers with no access to powerful computer clusters to run computationally intensive analyses without the excessive cost of building and maintaining a computer cluster.
In order to increase confidence in the use of STEEL as an analysis system, as well as to verify the conversion tools, a series of comparisons was made between STEEL and ETABS. Six models of increasing complexity, ranging from a cantilever column to a twenty-story moment frame, were analyzed to determine the ability of STEEL to accurately calculate basic model properties, such as elastic stiffness and damping through a free vibration analysis, as well as more complex structural properties, such as overall structural capacity through a pushover analysis. These analyses showed very strong agreement between the two programs on every aspect of each analysis. However, they also showed the ability of the STEEL analysis algorithm to converge at significantly larger drifts than ETABS when using the more computationally expensive and structurally realistic fiber hinges. Following the ETABS analysis, it was decided to repeat the comparisons in Perform, a program more capable of conducting highly nonlinear analysis. These analyses again showed very strong agreement between the two programs in every aspect of each analysis through instability. However, due to some limitations in Perform, free vibration analyses for the three-story one-bay chevron brace frame, the two-bay chevron brace frame, and the twenty-story moment frame could not be conducted. With the current trend toward ultimate capacity analysis, the ability to use fiber-based models allows engineers to gain a better understanding of a building's behavior under these extreme load scenarios.
Following this, a final study was conducted on Hall's U20 structure [1], which was analyzed in all three programs and the results compared. The pushover curves from each program were compared and the differences caused by variations in software implementation explained. From this, conclusions can be drawn on the effectiveness of each analysis tool when attempting to analyze structures through the point of geometric instability. The analyses show that while ETABS was capable of accurately determining the elastic stiffness of the model, the analysis failed to converge following the onset of inelastic behavior. However, for the small number of time steps during which the ETABS analysis was converging, its results exactly matched those of STEEL, leading to the conclusion that ETABS is not an appropriate analysis package for analyzing a structure through the point of collapse when using fiber elements throughout the model. The analyses also showed that while Perform was capable of calculating the response of the structure accurately, restrictions in the material model resulted in a pushover curve that did not exactly match that of STEEL, particularly post-collapse. Such problems could, however, be alleviated by choosing a simpler material model.
Abstract:
In this paper, we propose a novel method for measuring the coma aberrations of lithographic projection optics based on relative image displacements at multiple illumination settings. The measurement accuracy of coma can be improved because the phase-shifting gratings are more sensitive to the aberrations than the binary gratings used in the TAMIS technique, and the impact of distortion on the displacements of the aerial image can be eliminated when the relative image displacements are measured. The PROLITH simulation results show that the measurement accuracy of coma increases by more than 25% under conventional illumination, and the measurement accuracy of primary coma increases by more than 20% under annular illumination, compared with the TAMIS technique. (c) 2007 Optical Society of America.
Abstract:
DNA techniques are increasingly used as diagnostic tools in many fields and venues. In particular, a relatively new application is their use as a check for proper advertisement in markets and on restaurant menus. The identification of fish from markets and restaurants is a growing problem because economic practices often render it cost-effective to substitute one species for another. DNA sequences that are diagnostic for many commercially important fishes are now documented in public databases, such as the National Center for Biotechnology Information's (NCBI) GenBank. It is now possible for most genetics laboratories to identify the species from which a tissue sample was taken without sequencing all the possible taxa it might represent.
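A hedged sketch of this kind of identification workflow is shown below, using Biopython to submit a nucleotide query to NCBI's web BLAST service against the GenBank nt database and report the top hits. The query sequence is a placeholder rather than real barcode data, and live queries are subject to NCBI's usage policies.

```python
# Hedged sketch: querying GenBank's nucleotide database through NCBI's web
# BLAST service with Biopython to identify the species a tissue sample came
# from. The sequence below is a placeholder, not real data.
from Bio.Blast import NCBIWWW, NCBIXML

# Placeholder "unknown" barcode fragment; a COI sequence read from a market
# sample would go here.
query_seq = "ACGT" * 150

handle = NCBIWWW.qblast("blastn", "nt", query_seq, hitlist_size=5)
record = NCBIXML.read(handle)

for alignment in record.alignments:
    best = alignment.hsps[0]
    identity = 100.0 * best.identities / best.align_length
    print(f"{alignment.title[:60]:60s}  identity={identity:5.1f}%  "
          f"E={best.expect:.2g}")
```

In practice, a high-identity match (typically above ~98% over the barcode region) to a single species is taken as the identification, while ambiguous or low-identity hits call for additional markers or reference sequences.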
Abstract:
World Conference on Psychology and Sociology 2012
Abstract:
The mucus surface layer of corals plays a number of integral roles in their overall health and fitness. This mucopolysaccharide coating serves as a vehicle to capture food, a protective barrier against physical invasion and trauma, and a medium to host a community of microorganisms distinct from the surrounding seawater. In healthy corals the associated microbial communities are known to provide antibiotics that contribute to the coral's innate immunity and to perform metabolic activities such as biogeochemical cycling. Culture-dependent (Ducklow and Mitchell, 1979; Ritchie, 2006) and culture-independent methods (Rohwer et al., 2001; Rohwer et al., 2002; Sekar et al., 2006; Hansson et al., 2009; Kellogg et al., 2009) have shown that coral mucus-associated microbial communities can shift with changes in the environment and in the health condition of the coral. These shifts suggest that the microbial associates may not only reflect health status but may also assist corals in acclimating to changing environmental conditions. With the increasing availability of molecular biology tools, culture-independent methods are being used more frequently for evaluating the health of the animal host. Although culture-independent methods can provide more in-depth insights into the constituents of the coral surface mucus layer's microbial community, their reliability and reproducibility depend on the initial sample collection maintaining sample integrity. In general, a sample of mucus is collected from a coral colony, either by sterile syringe or by swab (Woodley et al., 2008), and immediately placed in a cryovial. In the case of a syringe sample, the mucus is decanted into the cryovial and the sealed tube is immediately flash-frozen in a liquid nitrogen vapor shipper (a dry shipper). Swabs with mucus are placed in a cryovial, and the end of the swab is broken off before sealing and placing the vial in the dry shipper. The samples are then sent to a laboratory for analysis. After the initial collection and preservation of the sample, the duration of the sample's voyage to the recipient laboratory is often another critical part of the sampling process, as unanticipated delays may exceed the length of time a dry shipper can remain cold, or mishandling of the shipper can cause it to exhaust prematurely. In remote areas, service by international shipping companies may be non-existent, which requires the use of an alternative preservation medium. Other methods for preserving environmental samples for microbial DNA analysis include drying on various matrices (DNA cards, swabs) or placing samples in liquid preservatives (e.g., chloroform/phenol/isoamyl alcohol, TRIzol reagent, ethanol). These methodologies eliminate the need for cold storage; however, they add expense and permitting requirements for hazardous liquid components, and the retrieval of intact microbial DNA can often be inconsistent (Dawson et al., 1998; Rissanen et al., 2010). A method to preserve coral mucus samples without cold storage or the use of hazardous solvents, while maintaining microbial DNA integrity, would be an invaluable tool for coral biologists, especially those in remote areas. Saline-saturated dimethylsulfoxide-ethylenediaminetetraacetic acid (20% DMSO-0.25 M EDTA, pH 8.0), or SSDE, is a solution that has been reported to preserve marine invertebrate tissue at ambient temperatures without significant loss of nucleic acid integrity (Dawson et al., 1998; Concepcion et al., 2007).
While this methodology would be a facile and inexpensive way to transport coral tissue samples, it is unclear whether the coral microbiota DNA would be adversely affected by this storage medium, either through degradation of the DNA or through a bias in the DNA recovered during extraction created by variations in extraction efficiency among the various community members. Tests to determine the efficacy of SSDE as an ambient-temperature storage medium for coral mucus samples are presented here.
Abstract:
Models that help predict fecal coliform bacteria (FCB) levels in environmental waters can be important tools for resource managers. In this study, we used animal activity along with antibiotic resistance analysis (ARA), land cover, and other variables to build models that predict bacteria levels in coastal ponds that discharge into an estuary. Photographic wildlife monitoring was used to estimate terrestrial and aquatic wildlife activity prior to sampling. Increased duck activity was an important predictor of increased FCB in coastal ponds. Terrestrial animals such as deer and raccoons, although abundant, were not significant in our model. Various land cover types, rainfall, tide, solar irradiation, air temperature, and season parameters, in combination with duck activity, were significant predictors of increased FCB. It appears that tidal ponds allow for settling of bacteria under most conditions. We propose that these models can be used to test different development styles and wildlife management techniques to reduce bacterial loading into downstream shellfish harvesting and contact recreation areas.
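The sketch below illustrates, under stated assumptions, the general form of such a predictive model: an ordinary least-squares regression of log-transformed FCB counts on duck activity, rainfall, tide, and other covariates, with cross-validated fit quality. The data file, column names, and log transform are hypothetical and do not reproduce the study's actual model.

```python
# Hedged sketch of the kind of predictive model described above: OLS
# regression of log10 fecal coliform counts on duck activity, rainfall,
# tide, and other covariates. File and column names are hypothetical.
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

df = pd.read_csv("coastal_pond_samples.csv")        # hypothetical data file

predictors = ["duck_activity", "rainfall_mm", "tide_height_m",
              "solar_irradiation", "air_temp_c", "pct_impervious_cover"]
X = df[predictors].to_numpy()
y = np.log10(df["fcb_cfu_per_100ml"].to_numpy())     # log10 FCB counts

model = LinearRegression().fit(X, y)
print("coefficients:", dict(zip(predictors, np.round(model.coef_, 3))))
print("cross-validated R^2:",
      cross_val_score(model, X, y, cv=5, scoring="r2").mean())
```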
Abstract:
CLADP is an engineering software program developed at Cambridge University for the interactive computer-aided design of feedback control systems. CLADP contains a wide range of tools for the analysis of complex systems and the assessment of their performance when feedback control is applied, thus enabling control systems to be designed to meet difficult performance objectives. The range of tools within CLADP includes the latest techniques in the field, whose central theme is the extension of classical frequency-domain concepts (well known and well proven for single-loop systems) to multivariable or multiloop systems; by making extensive use of graphical presentation, information is provided in a readily understood form.
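As a small illustration of the multivariable frequency-domain idea that such tools extend from classical single-loop analysis, the sketch below evaluates a made-up 2x2 transfer-function matrix over frequency and reports its singular values (principal gains), the MIMO analogue of a Bode magnitude plot. It is not CLADP code, and the plant is invented purely for illustration.

```python
# Sketch of the multivariable frequency-domain idea behind tools like CLADP:
# evaluate a transfer-function matrix G(jw) over frequency and examine its
# singular values (principal gains). The 2x2 example plant is made up.
import numpy as np

def G(s):
    """A made-up 2x2 transfer-function matrix evaluated at complex s."""
    return np.array([[1.0 / (s + 1.0), 2.0 / (s**2 + 0.4 * s + 1.0)],
                     [0.5 / (s + 2.0), 1.0 / (s + 0.5)]])

omega = np.logspace(-2, 2, 200)
sigma_max = np.empty_like(omega)
sigma_min = np.empty_like(omega)

for i, w in enumerate(omega):
    sv = np.linalg.svd(G(1j * w), compute_uv=False)
    sigma_max[i], sigma_min[i] = sv[0], sv[-1]

# The gap between the largest and smallest principal gain indicates how
# directional the plant is, which a single-loop Bode plot cannot show.
for w in (0.01, 0.1, 1.0, 10.0):
    k = np.argmin(np.abs(omega - w))
    print(f"w = {w:6.2f} rad/s: sigma_max = {sigma_max[k]:.3f}, "
          f"sigma_min = {sigma_min[k]:.3f}")
```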