180 results for Matrices.
Abstract:
We present an approach for the inspection of vertical pole-like infrastructure using a vertical take-off and landing (VTOL) unmanned aerial vehicle and shared autonomy. Inspecting vertical structures, such as light and power distribution poles, is a time-consuming, dangerous and expensive task with high operator workload. To address these issues, we propose a VTOL platform that can operate at close quarters whilst maintaining a safe stand-off distance and rejecting environmental disturbances. We adopt an Image-Based Visual Servoing (IBVS) technique using only two line features to stabilise the vehicle with respect to a pole. Visual, inertial and sonar data are used, making the approach suitable for indoor or GPS-denied environments. Results from simulation and outdoor flight experiments demonstrate that the system is able to successfully inspect and circumnavigate a pole.
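For context, classical IBVS drives an image-feature error to zero through the pseudo-inverse of the interaction matrix. A minimal sketch of that standard control law follows; it is illustrative only and not the authors' implementation, and the feature vector and interaction matrix for the two-line-feature case are assumed inputs here:

```python
import numpy as np

def ibvs_velocity(s, s_star, L, gain=0.5):
    """Classical IBVS law: v = -lambda * pinv(L) @ (s - s_star).

    s      : current image-feature vector (e.g. parameters of the two
             observed pole edges)
    s_star : desired feature vector at the target stand-off pose
    L      : interaction (image Jacobian) matrix relating feature rates
             to camera velocity
    Returns a camera velocity command.
    """
    error = s - s_star
    return -gain * np.linalg.pinv(L) @ error
```

The proportional gain trades convergence speed against sensitivity to feature noise; in practice it would be tuned alongside the inertial and sonar fusion described above.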
Abstract:
In this paper, we have compiled and reviewed the most recent literature, published from January 2010 to December 2012, relating to the human exposure, environmental distribution, behaviour, fate and concentration time trends of polybrominated diphenyl ether (PBDE) and hexabromocyclododecane (HBCD) flame retardants, in order to establish their current trends and priorities for future study. Due to the large volume of literature included, we have provided full detail of the reviewed studies as Electronic Supplementary Information and here summarise the most relevant findings. Decreasing time trends for penta-mix PBDE congeners were seen for soils in northern Europe, sewage sludge in Sweden and the USA, carp from a US river, trout from three of the Great Lakes, and in Arctic and UK marine mammals and many birds, but increasing time trends continue in Arctic polar bears and some birds at high trophic levels in northern Europe. This is a result of the time delay inherent in long-range atmospheric transport processes. In general, concentrations of BDE209 (the major component of the deca-mix PBDE product) are continuing to increase. Of major concern is the possible/likely debromination of the large reservoir of BDE209 in soils and sediments worldwide, to yield lower brominated congeners which are both more mobile and more toxic, and we have compiled the most recent evidence for the occurrence of this degradation process. Numerous studies reported here reinforce the importance of this future concern. Time trends for HBCDs are mixed, with both increases and decreases evident in different matrices and locations and, notably, with increasing occurrence in birds of prey.
Abstract:
Reported homocysteine (HCY) concentrations in human serum show poor concordance amongst laboratories due to endogenous HCY in the matrices used for assay calibrators and QCs. Hence, we have developed a fully validated LC–MS/MS method for measurement of HCY concentrations in human serum samples that addresses this issue by minimising matrix effects. We used small volumes (20 μL) of 2% Bovine Serum Albumin (BSA) as a surrogate matrix for making calibrators and QCs, with concentrations adjusted for the endogenous HCY concentration in the surrogate matrix using the method of standard additions. HCY-d4 (internal standard) and tris-(2-carboxyethyl)phosphine hydrochloride (TCEP, a reducing agent) were added to aliquots (20 μL) of human serum samples, calibrators or QCs. After protein precipitation, diluted supernatants were injected into the LC–MS/MS. Calibration curves were linear; QCs were accurate (5.6% deviation from nominal), precise (CV ≤ 9.6%), and stable for four freeze–thaw cycles, at room temperature for 5 h, and at −80 °C for 27 days. Recoveries from QCs in surrogate matrix or pooled human serum were 91.9% and 95.9%, respectively. There was no matrix effect in 6 different individual serum samples, including one that was haemolysed. Our LC–MS/MS method satisfied all of the validation criteria of the 2012 EMA guideline.
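The method of standard additions estimates an endogenous concentration from the x-intercept of the response-versus-added-concentration line. A minimal worked sketch follows; the spike levels and peak-area ratios are made-up numbers for illustration, not data from this study:

```python
import numpy as np

# Hypothetical standard-additions data: known HCY amounts spiked into
# the 2% BSA surrogate matrix (umol/L) and the measured peak-area
# ratios (analyte / HCY-d4 internal standard).
added = np.array([0.0, 5.0, 10.0, 20.0])
response = np.array([0.42, 0.93, 1.45, 2.47])

# Fit response = slope * added + intercept.
slope, intercept = np.polyfit(added, response, 1)

# The endogenous concentration is the magnitude of the x-intercept.
endogenous = intercept / slope
print(f"estimated endogenous HCY: {endogenous:.2f} umol/L")
```

The surrogate-matrix calibrators would then be assigned nominal concentrations offset by this endogenous estimate.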
Abstract:
The development of a protein-mediated dual functional affinity adsorption of plasmid DNA is described in this work. The affinity ligand for the plasmid DNA comprises a fusion protein with glutathione-S-transferase (GST) as the fusion partner with a zinc finger protein. The protein ligand is first bound to the adsorbent by affinity interaction between the GST moiety and glutathione that is covalently immobilized to the base matrix. The plasmid binding is then enabled via the zinc finger protein and a specific nucleotide sequence inserted into the DNA. At lower loadings, the binding of the DNA onto the Fractogel, Sepharose, and Streamline matrices was 0.0078 ± 0.0013, 0.0095 ± 0.0016, and 0.0080 ± 0.0006 mg, respectively, to 50 μL of adsorbent. At a higher DNA challenge, the corresponding amounts were 0.0179 ± 0.0043, 0.0219 ± 0.0035, and 0.0190 ± 0.0041 mg, respectively. The relatively constant amounts bound to the three adsorbents indicated that the large DNA molecule was unable to utilize the available zinc finger sites located in the internal pores, and binding was largely a surface adsorption phenomenon. Utilization of the zinc finger binding sites was shown to be highest for the Fractogel adsorbent. The adsorbed material was eluted with reduced glutathione, and the elution efficiency for the DNA was between 23% and 27%. The protein elution profile appeared to match the adsorption profiles, with significantly higher recoveries of bound GST-zinc finger protein.
Abstract:
Traditional sensitivity and elasticity analyses of matrix population models have been used to inform management decisions, but they ignore the economic costs of manipulating vital rates. For example, the growth rate of a population is often most sensitive to changes in adult survival rate, but this does not mean that increasing that rate is the best option for managing the population because it may be much more expensive than other options. To explore how managers should optimize their manipulation of vital rates, we incorporated the cost of changing those rates into matrix population models. We derived analytic expressions for locations in parameter space where managers should shift between management of fecundity and survival, for the balance between fecundity and survival management at those boundaries, and for the allocation of management resources to sustain that optimal balance. For simple matrices, the optimal budget allocation can often be expressed as simple functions of vital rates and the relative costs of changing them. We applied our method to management of the Helmeted Honeyeater (Lichenostomus melanops cassidix; an endangered Australian bird) and the koala (Phascolarctos cinereus) as examples. Our method showed that cost-efficient management of the Helmeted Honeyeater should focus on increasing fecundity via nest protection, whereas optimal koala management should focus on manipulating both fecundity and survival simultaneously. These findings are contrary to the cost-negligent recommendations of elasticity analysis, which would suggest focusing on managing survival in both cases. A further investigation of Helmeted Honeyeater management options, based on an individual-based model incorporating density dependence, spatial structure, and environmental stochasticity, confirmed that fecundity management was the most cost-effective strategy. Our results demonstrate that decisions that ignore economic factors will reduce management efficiency. ©2006 Society for Conservation Biology.
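For reference, the sensitivity and elasticity matrices that the cost-negligent analysis relies on are standard functions of the dominant eigenvalue and its left and right eigenvectors (after Caswell). A minimal sketch with a hypothetical two-stage matrix follows; the vital rates are illustrative numbers only, not the Helmeted Honeyeater or koala parameters:

```python
import numpy as np

def sensitivity_elasticity(A):
    """Sensitivities s_ij = v_i * w_j / <v, w> and elasticities
    e_ij = (a_ij / lam) * s_ij of the dominant eigenvalue lam."""
    vals, W = np.linalg.eig(A)
    k = np.argmax(vals.real)
    lam = vals[k].real
    w = W[:, k].real                        # right eigenvector: stable stage structure
    vals_t, V = np.linalg.eig(A.T)
    v = V[:, np.argmax(vals_t.real)].real   # left eigenvector: reproductive values
    S = np.outer(v, w) / (v @ w)            # sensitivities d(lam)/d(a_ij)
    E = (A / lam) * S                       # elasticities: proportional sensitivities
    return lam, S, E

# Hypothetical two-stage model: fecundity 1.2, juvenile survival 0.4,
# adult survival 0.8.
A = np.array([[0.0, 1.2],
              [0.4, 0.8]])
lam, S, E = sensitivity_elasticity(A)
print(lam)   # dominant growth rate, 1.2 for this matrix
```

The paper's contribution is to weight these derivatives by the cost of changing each vital rate, which can reverse the ranking that raw elasticities suggest.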
Abstract:
In this paper, we derive a new nonlinear two-sided space-fractional diffusion equation with variable coefficients from the fractional Fick’s law. A semi-implicit difference method (SIDM) for this equation is proposed. The stability and convergence of the SIDM are discussed. For the implementation, we develop a fast accurate iterative method for the SIDM by decomposing the dense coefficient matrix into a combination of Toeplitz-like matrices. This fast iterative method significantly reduces the storage requirement of O(n²) and the computational cost of O(n³) down to O(n) and O(n log n), respectively, where n is the number of grid points. The method retains the same accuracy as the underlying SIDM solved with Gaussian elimination. Finally, some numerical results are shown to verify the accuracy and efficiency of the new method.
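The O(n log n) cost rests on a standard trick: a Toeplitz matrix-vector product can be computed by embedding the Toeplitz matrix in a circulant matrix of twice the size, which the FFT diagonalizes. A minimal sketch of that building block follows (this is the generic technique, not the paper's full iterative solver):

```python
import numpy as np

def toeplitz_matvec(first_col, first_row, x):
    """Multiply a Toeplitz matrix, given by its first column and first
    row (first_row[0] == first_col[0]), by a vector in O(n log n)
    via circulant embedding and the FFT."""
    n = len(x)
    # First column of the 2n-length circulant embedding:
    # [t_0, ..., t_{n-1}, 0, t_{1-n}, ..., t_{-1}].
    c = np.concatenate([first_col, [0.0], first_row[:0:-1]])
    # Circulant matvec = IFFT of the elementwise spectral product.
    y = np.fft.ifft(np.fft.fft(c) * np.fft.fft(np.concatenate([x, np.zeros(n)])))
    return y[:n].real

# Quick check against the dense product.
rng = np.random.default_rng(0)
col, row, x = rng.normal(size=5), rng.normal(size=5), rng.normal(size=5)
row[0] = col[0]
T = np.array([[col[i - j] if i >= j else row[j - i] for j in range(5)]
              for i in range(5)])
assert np.allclose(toeplitz_matvec(col, row, x), T @ x)
```

Only the first column and row need storing, which is where the O(n) memory figure comes from.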
Abstract:
In the finite element modelling of structural frames, external loads such as wind loads, dead loads and imposed loads usually act along the elements rather than at the nodes only. Conventionally, when an element is subjected to such general transverse element loads, they are converted to nodal forces acting at the ends of the element by either the lumped or the consistent load approach. This conversion matters especially for first- and second-order elastic behaviour, to which steel structures, and thin-walled steel structures in particular, are critically prone, while stocky element sections may instead be governed by inelastic behaviour. Accurate first- and second-order elastic displacement solutions for the element load effect along an element are therefore crucial, but cannot be obtained using either the nodal or the consistent load method alone, since no equilibrium condition along the element is enforced in the finite element formulation; this can impair the structural safety of a steel structure. A dedicated element load method that accounts for element loads nonlinearly is therefore required. If accurate displacement solutions are targeted for simulating the first- and second-order elastic behaviour of an element on the basis of a sophisticated non-linear element stiffness formulation, numerous prescribed stiffness matrices would be needed to cover the plethora of specific transverse element loading patterns encountered. In order to circumvent this shortcoming, the present paper proposes a numerical technique that includes the transverse element loading in the non-linear stiffness formulation without numerous prescribed stiffness matrices, and which is able to predict structural responses involving the effect of first-order element loads as well as the second-order coupling effect between the transverse load and the axial force in the element. This paper shows that the principle of superposition can be applied to derive a generalized stiffness formulation for the element load effect, so that the form of the stiffness matrix remains unchanged with respect to the specific loading pattern; only the magnitude of the loading (the element load coefficients) needs to be adjusted in the stiffness formulation, and the non-linear effect of the element loading can then be captured by updating the element load coefficients through the non-linear solution procedures. In principle, the element loading distribution is converted into a single loading magnitude at mid-span in order to provide the initial perturbation that triggers the member bowing effect due to the transverse element loads. This approach sacrifices the effect of the element loading distribution except at mid-span; it can therefore be foreseen that the load-deflection behaviour away from mid-span may be less accurate, but the discrepancy is shown to be trivial. This novelty allows a very useful generalised stiffness formulation for a single higher-order element with arbitrary transverse loading patterns to be formulated. Moreover, another significance of this paper lies in shifting attention from the nodal response (system analysis) to both the nodal and the element response (sophisticated element formulation). For a conventional finite element method, such as the cubic element, accurate solutions can only be found at the nodes, which means that accurate and reliable structural safety cannot be ensured within an element; this hinders engineering applications. The results of the paper are verified against analytical stability function studies, as well as numerical results reported by independent researchers on several simple frames.
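For reference, the conventional consistent-load conversion that the paper moves beyond replaces a uniformly distributed transverse load with equivalent end shears and fixed-end moments. A minimal sketch of that textbook step follows (illustrative only, not the paper's higher-order formulation):

```python
def consistent_nodal_loads(w, L):
    """Equivalent end loads for a uniformly distributed transverse load
    w on an Euler-Bernoulli beam element of length L (textbook result):
    end shears w*L/2 and fixed-end moments +/- w*L^2/12, ordered as
    [V1, M1, V2, M2]."""
    return [w * L / 2.0,  w * L**2 / 12.0,
            w * L / 2.0, -w * L**2 / 12.0]
```

As the abstract notes, such end forces reproduce the correct nodal response but say nothing about equilibrium, and hence accuracy, within the element span.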
Abstract:
There is an increasing need in biology and clinical medicine to robustly and reliably measure tens-to-hundreds of peptides and proteins in clinical and biological samples with high sensitivity, specificity, reproducibility and repeatability. Previously, we demonstrated that LC-MRM-MS with isotope dilution has suitable performance for quantitative measurements of small numbers of relatively abundant proteins in human plasma, and that the resulting assays can be transferred across laboratories while maintaining high reproducibility and quantitative precision. Here we significantly extend that earlier work, demonstrating that 11 laboratories using 14 LC-MS systems can develop, determine analytical figures of merit, and apply highly multiplexed MRM-MS assays targeting 125 peptides derived from 27 cancer-relevant proteins and 7 control proteins to precisely and reproducibly measure the analytes in human plasma. To ensure consistent generation of high quality data we incorporated a system suitability protocol (SSP) into our experimental design. The SSP enabled real-time monitoring of LC-MRM-MS performance during assay development and implementation, facilitating early detection and correction of chromatographic and instrumental problems. Low to sub-nanogram/mL sensitivity for proteins in plasma was achieved by one-step immunoaffinity depletion of 14 abundant plasma proteins prior to analysis. Median intra- and inter-laboratory reproducibility was <20%, sufficient for most biological studies and candidate protein biomarker verification. Digestion recovery of peptides was assessed and quantitative accuracy improved using heavy isotope labeled versions of the proteins as internal standards. Using the highly multiplexed assay, participating laboratories were able to precisely and reproducibly determine the levels of a series of analytes in blinded samples used to simulate an inter-laboratory clinical study of patient samples. Our study further establishes that LC-MRM-MS using stable isotope dilution, with appropriate attention to analytical validation and appropriate quality control measures, enables sensitive, specific, reproducible and quantitative measurements of proteins and peptides in complex biological matrices such as plasma.
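Quantification by stable isotope dilution rests on a simple ratio principle: the analyte concentration scales the light-to-heavy peak-area ratio by the known spike of the heavy-labeled internal standard. A minimal illustrative sketch follows; the function name, response-factor parameter and numbers are hypothetical, not taken from this study:

```python
def isotope_dilution_conc(area_light, area_heavy, heavy_spike_conc,
                          response_factor=1.0):
    """Estimate an analyte concentration from the light/heavy peak-area
    ratio and the known concentration of the heavy-labeled internal
    standard (assumes a calibrated response factor)."""
    return response_factor * (area_light / area_heavy) * heavy_spike_conc

# E.g. peak areas 5.2e5 (light) vs 2.6e5 (heavy) with a 10 ng/mL spike
# imply roughly 20 ng/mL of analyte.
print(isotope_dilution_conc(5.2e5, 2.6e5, 10.0))
```

Because the heavy standard co-elutes and co-ionizes with the analyte, the ratio largely cancels matrix and instrument drift, which is what makes the inter-laboratory precision reported above achievable.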
Abstract:
Automatic Vehicle Identification (AVI) systems are being increasingly used as a new source of travel information. As these systems long relied on expensive technologies, only a few detectors used to be scattered across a network, making travel-time and average-speed estimation their main objectives. However, as their price dropped, the opportunity to build dense AVI networks arose, as in Brisbane where more than 250 Bluetooth detectors are now installed. As a consequence, this technology represents an effective means to acquire accurate time-dependent Origin-Destination information. In order to obtain reliable estimations, however, a number of issues need to be addressed. Some of these problems stem from the structure of a network made of isolated detectors itself, while others are inherent to Bluetooth technology (overlapping detection areas, missing detections, ...). The aim of this paper is threefold. First, after presenting the level of detail that can be reached with a network of isolated detectors, we describe how we modelled Brisbane's network, keeping only the information valuable for the retrieval of trip information. Second, we give an overview of the issues inherent to Bluetooth technology and propose a method for retrieving the itineraries of individual Bluetooth vehicles, together with a methodology for cleansing, correcting and aggregating Bluetooth data. Last, through a comparison with Brisbane Transport Strategic Model results, we highlight the opportunities and the limits of Bluetooth detector networks. We postulate that the methods introduced in this paper are the first crucial steps towards computing accurate Origin-Destination matrices in urban road networks.
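Once individual itineraries have been reconstructed from cleansed detections, building an OD matrix is conceptually a group-and-count over the first and last detector seen for each device. A minimal sketch with pandas follows; the column names and data are hypothetical, and real data would first need the cleansing and correction steps the paper describes:

```python
import pandas as pd

# Hypothetical cleansed detections: one row per (device, detector, time).
detections = pd.DataFrame({
    "device":   ["a", "a", "a", "b", "b"],
    "detector": [12, 7, 3, 5, 12],
    "time":     pd.to_datetime(["2013-01-01 08:00", "2013-01-01 08:05",
                                "2013-01-01 08:12", "2013-01-01 08:01",
                                "2013-01-01 08:09"]),
})

# Origin = first detector seen per device, destination = last.
trips = (detections.sort_values("time")
         .groupby("device")["detector"]
         .agg(origin="first", destination="last"))

# OD matrix: count of devices per (origin, destination) pair.
od = trips.groupby(["origin", "destination"]).size().unstack(fill_value=0)
print(od)
```

In practice the grouping would also be windowed by departure time to yield the time-dependent matrices mentioned above.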
Abstract:
Background: There is a need for better understanding of the dispersion of classification-related variables to develop an evidence-based classification of athletes with a disability participating in stationary throwing events. Objectives: The purposes of this study are (A) to describe tools designed to comprehend and represent the dispersion of performance between successive classes, and (B) to present this dispersion for the elite male and female stationary shot-putters who participated in the Beijing 2008 Paralympic Games. Study design: Retrospective study. Methods: This study analysed a total of 479 attempts performed by 114 male and female stationary shot-putters in three F30s (F32-F34) and six F50s (F52-F58) classes over the course of eight events at the Beijing 2008 Paralympic Games. Results: The average differences in best performance were 1.46±0.46 m for males between the F54 and F58 classes and 1.06±1.18 m for females between the F55 and F58 classes. The results demonstrated a linear relationship between best performance and classification, while revealing that the two male Gold Medallists in the F33 and F52 classes were outliers. Conclusions: This study confirms the benefits of comparative matrices, the performance continuum and dispersion plots for comprehending classification-related variables. The work presented here represents a stepping stone towards biomechanical analyses of stationary throwers, particularly on the eve of the London 2012 Paralympic Games, where new evidence could be gathered.
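One simple dispersion summary of this kind is the difference in mean best performance between successive classes. A minimal sketch follows; the class labels and distances are hypothetical, not the Beijing 2008 data:

```python
import pandas as pd

# Hypothetical best performances (m) per athlete with class labels.
results = pd.DataFrame({
    "cls":  ["F54", "F55", "F56", "F57", "F58", "F54", "F55"],
    "best": [7.10, 8.35, 9.80, 11.20, 12.60, 6.90, 8.60],
})

# Mean best performance per class, then the gap between successive
# classes as one dispersion summary along the performance continuum.
class_means = results.groupby("cls")["best"].mean().sort_index()
print(class_means.diff().dropna())
```

A roughly constant gap supports the linear performance-classification relationship reported above; outliers such as the two Gold Medallists would show up as large residuals from that line.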
Abstract:
A facile and sensitive surface-enhanced Raman scattering (SERS) substrate was prepared by controlled potentiostatic deposition of a closely packed single layer of gold nanostructures (AuNS) over a flat gold (pAu) platform. The nanometre-scale inter-particle distance resulted in a high population of ‘hot spots’, which enormously enhanced the scattered Raman photons. A refined methodology was followed to precisely quantify the SERS substrate enhancement factor (SSEF), which was estimated to be (2.2 ± 0.17) × 10⁵. The reproducibility of the SERS signal acquired with the developed substrate was tested by establishing the relative standard deviation (RSD) of 150 repeated measurements from various locations on the substrate surface. A low RSD of 4.37% confirmed the homogeneity of the developed substrate. The sensitivity of pAu/AuNS was proven by readily detecting 100 fM 2,4,6-trinitrotoluene (TNT). As a proof of concept of the potential of the new pAu/AuNS substrate for field analysis, TNT in soil and water matrices was selectively detected after forming a Meisenheimer complex with cysteamine.
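For context, the SERS substrate enhancement factor is conventionally defined by normalising the SERS and bulk Raman intensities by the number of molecules probed in each measurement; this is the standard definition, not necessarily the exact refined methodology used here:

$$\mathrm{SSEF} \;=\; \frac{I_{\mathrm{SERS}}/N_{\mathrm{surf}}}{I_{\mathrm{RS}}/N_{\mathrm{vol}}}$$

where $I_{\mathrm{SERS}}$ and $I_{\mathrm{RS}}$ are the SERS and normal Raman intensities, and $N_{\mathrm{surf}}$ and $N_{\mathrm{vol}}$ are the numbers of molecules probed on the substrate surface and in the bulk sample, respectively. The precision of the molecule counts is what dominates the uncertainty in reported enhancement factors.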
Abstract:
It is commonly accepted that regular moderate intensity physical activity reduces the risk of developing many diseases. Counterintuitively, however, evidence also exists for oxidative stress resulting from acute and strenuous exercise. Enhanced formation of reactive oxygen and nitrogen species may lead to oxidatively modified lipids, proteins and nucleic acids and possibly disease. Currently, only a few studies have investigated the influence of exercise on DNA stability and damage, with conflicting results, small study groups, and the use of different sample matrices, methods and result units. This is the first review to address the effect of exercise of various intensities and durations on DNA stability, focusing on human population studies. Furthermore, this article describes the principles and limitations of commonly used methods for the assessment of oxidatively modified DNA and DNA stability. This review is structured according to the type of exercise conducted (field or laboratory based) and the intensity performed (i.e. competitive ultra/endurance exercise or maximal tests until exhaustion). The findings presented here suggest that competitive ultra-endurance exercise (>4 h) does not induce persistent DNA damage. However, when considering the effects of endurance exercise (<4 h), no clear conclusions could be drawn. Laboratory studies have shown equivocal results (increased or no oxidative stress) after endurance or exhaustive exercise. To clarify which components of exercise participation (i.e. duration, intensity and training status of subjects) have an impact on DNA stability and damage, additional carefully designed studies combining the measurement of DNA damage, gene expression and DNA repair mechanisms before, during and after exercise of differing intensities and durations are required.
Abstract:
In this paper, we develop and validate a new Statistically Assisted Fluid Registration Algorithm (SAFIRA) for brain images. A non-statistical version of this algorithm was first implemented in [2] and re-formulated using Lagrangian mechanics in [3]. Here we extend this algorithm to 3D: given 3D brain images from a population, vector fields and their corresponding deformation matrices are computed in a first round of registrations using the non-statistical implementation. Covariance matrices for both the deformation matrices and the vector fields are then obtained and incorporated (separately or jointly) in the regularizing (i.e., the non-conservative Lagrangian) terms, creating four versions of the algorithm. We evaluated the accuracy of each algorithm variant using the manually labeled LPBA40 dataset, which provides us with ground truth anatomical segmentations. We also compared the power of the different algorithms using tensor-based morphometry (a technique to analyze local volumetric differences in brain structure) applied to 46 3D brain scans from healthy monozygotic twins.
Abstract:
We defined a new statistical fluid registration method with Lagrangian mechanics. Although several authors have suggested that empirical statistics on brain variation should be incorporated into the registration problem, few algorithms have included this information; most instead use regularizers that guarantee diffeomorphic mappings. Here we combine the advantages of a large-deformation fluid matching approach with empirical statistics on population variability in anatomy. We reformulated the Riemannian fluid algorithm developed in [4], and used a Lagrangian framework to incorporate 0th- and 1st-order statistics in the regularization process. 92 2D midline corpus callosum traces from a twin MRI database were fluidly registered using the non-statistical version of the algorithm (algorithm 0), giving initial vector fields and deformation tensors. Covariance matrices were computed for both distributions and incorporated either separately (algorithm 1 and algorithm 2) or together (algorithm 3) in the registration. We computed heritability maps and two vector- and tensor-based distances to compare the power and the robustness of the algorithms.
Abstract:
In this paper, we used a nonconservative Lagrangian mechanics approach to formulate a new statistical algorithm for fluid registration of 3-D brain images. This algorithm is named SAFIRA, an acronym for statistically-assisted fluid image registration algorithm. A nonstatistical version of this algorithm was implemented, where the deformation was regularized by penalizing deviations from a zero rate of strain. In SAFIRA, the terms regularizing the deformation include the covariance of the deformation matrices Σ and of the vector fields (q). Here, we used a Lagrangian framework to reformulate this algorithm, showing that the regularizing terms essentially allow nonconservative work to occur during the flow. Given 3-D brain images from a group of subjects, vector fields and their corresponding deformation matrices are computed in a first round of registrations using the nonstatistical implementation. Covariance matrices for both the deformation matrices and the vector fields are then obtained and incorporated (separately or jointly) in the nonconservative terms, creating four versions of SAFIRA. We evaluated and compared our algorithms' performance on 92 3-D brain scans from healthy monozygotic and dizygotic twins; 2-D validations are also shown for corpus callosum shapes delineated at midline in the same subjects. After preliminary tests to demonstrate each method, we compared their detection power using tensor-based morphometry (TBM), a technique to analyze local volumetric differences in brain structure. We compared the accuracy of each algorithm variant using various statistical metrics derived from the images and deformation fields. All these tests were also run with a traditional fluid method, which has been widely used in TBM studies. The versions incorporating vector-based empirical statistics on brain variation were consistently more accurate than their counterparts when used for automated volumetric quantification in new brain images. This suggests the advantages of this approach for large-scale neuroimaging studies.
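The statistical regularization can be read as replacing an isotropic penalty on the vector field with a Mahalanobis-type penalty weighted by the inverse empirical covariance, so that displacements along directions of high population variance are penalized less. A minimal sketch of that idea follows (illustrative only, not the exact SAFIRA functional; the field shapes and data are hypothetical):

```python
import numpy as np

def statistical_penalty(v, sigma_inv):
    """Mahalanobis-type regularization energy sum_n v_n^T Sigma^{-1} v_n
    over voxels: directions of high population variance cost less than
    under an isotropic (identity-covariance) prior.

    v         : (n_voxels, 3) vector-field samples
    sigma_inv : (3, 3) inverse empirical covariance of the field
    """
    return np.einsum("ni,ij,nj->", v, sigma_inv, v)

# Hypothetical example: covariance estimated from a first round of
# nonstatistical registrations across 10 subjects.
rng = np.random.default_rng(1)
fields = rng.normal(size=(10, 100, 3))           # subjects x voxels x 3
sigma = np.cov(fields.reshape(-1, 3), rowvar=False)
penalty = statistical_penalty(fields[0], np.linalg.inv(sigma))
print(penalty)
```

The four SAFIRA variants described above correspond to applying such covariance weighting to the vector fields, to the deformation matrices, or to both jointly.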