936 results for Filtered SVD
Abstract:
Climate and environmental reconstructions from natural archives are important for the interpretation of current climatic change. Few quantitative high-resolution reconstructions exist for South America, which is the only land mass extending from the tropics to the southern high latitudes at 56°S. We analyzed sediment cores from two adjacent lakes in Northern Chilean Patagonia, Lago Castor (45°36′S, 71°47′W) and Laguna Escondida (45°31′S, 71°49′W). Radiometric dating (210Pb, 137Cs, 14C-AMS) suggests that the cores reach back to c. 900 BC (Laguna Escondida) and c. 1900 BC (Lago Castor). Both lakes show similarities and reproducibility in sedimentation rate changes and tephra layer deposition. We found eight macroscopic tephras (0.2–5.5 cm thick) dated to 1950 BC, 1700 BC, 300 BC, 50 BC, 90 AD, 160 AD, 400 AD and 900 AD. These can be used as regional time-synchronous stratigraphic markers. The two thickest tephras represent known, well-dated explosive eruptions of Hudson volcano around 1950 and 300 BC. Biogenic silica flux revealed a climate signal in both lakes and correlated with annual temperature reanalysis data (calibration 1900–2006 AD; Lago Castor r = 0.37; Laguna Escondida r = 0.42, seven-year filtered data). We used a linear inverse regression plus scaling model for calibration and leave-one-out cross-validation (RMSEv = 0.56 °C) to reconstruct subdecadal-scale temperature variability for Laguna Escondida back to AD 400. The lower part of the core from Laguna Escondida prior to AD 400 and the core of Lago Castor are strongly influenced by primary and secondary tephras and were therefore not used for the temperature reconstruction. The temperature reconstruction from Laguna Escondida shows cold conditions in the 5th century (relative to the 20th century mean), warmer temperatures from AD 600 to AD 1150 and colder temperatures from AD 1200 to AD 1450. From AD 1450 to AD 1700 our reconstruction shows a period with stronger variability and, on average, higher values than the 20th century mean. Until AD 1900 the temperature values decrease but stay slightly above the 20th century mean. Most of the centennial-scale features are reproduced in the few other natural climate archives in the region. The early onset of cool conditions from c. AD 1200 onward seems to be confirmed for this region.
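The calibration described above (inverse regression plus scaling, validated by leave-one-out cross-validation) can be sketched in a few lines. The snippet below is illustrative only: the proxy and temperature series are synthetic stand-ins, not the Lago Castor or Laguna Escondida data, and the scaling step is applied in its simplest mean/variance-matching form.

```python
# Minimal sketch (not the authors' code) of an inverse-regression-plus-scaling
# calibration with leave-one-out cross-validation, using synthetic stand-ins
# for the biogenic silica flux proxy and the reanalysis temperature series.
import numpy as np

rng = np.random.default_rng(0)
n = 107                                   # e.g. calibration years 1900-2006
temp = 10 + 0.5 * rng.standard_normal(n)  # hypothetical annual temperatures (degC)
bsi = 2.0 + 1.5 * temp + rng.standard_normal(n)  # hypothetical proxy (BSi flux)

def inverse_regression_plus_scaling(proxy, target):
    # Inverse calibration: regress proxy on climate, invert the fit,
    # then rescale so the reconstruction matches the target's mean/variance.
    b1, b0 = np.polyfit(target, proxy, 1)
    rec = (proxy - b0) / b1
    return (rec - rec.mean()) / rec.std() * target.std() + target.mean()

rec = inverse_regression_plus_scaling(bsi, temp)
print(f"calibration-period correlation: {np.corrcoef(rec, temp)[0, 1]:.2f}")

# Leave-one-out cross-validation: refit with one year withheld each time.
errors = []
for i in range(n):
    keep = np.arange(n) != i
    b1, b0 = np.polyfit(temp[keep], bsi[keep], 1)
    errors.append((bsi[i] - b0) / b1 - temp[i])
rmsev = np.sqrt(np.mean(np.square(errors)))
print(f"verification RMSE: {rmsev:.2f} degC")
```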
Abstract:
Neurons generate spikes reliably with millisecond precision if driven by a fluctuating current. Is it then possible to predict the spike timing knowing the input? We determined parameters of an adapting threshold model using data recorded in vitro from 24 layer 5 pyramidal neurons from rat somatosensory cortex, stimulated intracellularly by a fluctuating current simulating synaptic bombardment in vivo. The model generates output spikes whenever the membrane voltage (a filtered version of the input current) reaches a dynamic threshold. We find that for input currents with large fluctuation amplitude, up to 75% of the spike times can be predicted with a precision of ±2 ms. Some of the intrinsic neuronal unreliability can be accounted for by a noisy threshold mechanism. Our results suggest that, under random current injection into the soma, (i) neuronal behavior in the subthreshold regime can be well approximated by a simple linear filter; and (ii) most of the nonlinearities are captured by a simple threshold process.
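A minimal sketch of the model class described above, a linear filter of the input current combined with an adaptive spiking threshold, is given below. All parameter values are assumed for illustration; they are not the values fitted to the 24 recorded neurons.

```python
# Minimal sketch (under assumed parameter values) of a linear-filter-plus-
# adaptive-threshold spiking model: the membrane voltage is a leaky integral
# of the injected current, and a spike is emitted whenever the voltage crosses
# a threshold that jumps after each spike and relaxes back to its resting value.
import numpy as np

dt = 0.1            # ms
T = 1000.0          # ms of simulated current injection
tau_m = 10.0        # membrane time constant (ms), assumed
tau_th = 50.0       # threshold adaptation time constant (ms), assumed
v_rest, th0 = -65.0, -50.0   # resting potential and resting threshold (mV), assumed
dth = 5.0           # threshold jump per spike (mV), assumed

rng = np.random.default_rng(1)
steps = int(T / dt)
I = 1.5 + 0.8 * rng.standard_normal(steps)   # fluctuating input current (a.u.)

v, th = v_rest, th0
spike_times = []
for k in range(steps):
    # subthreshold dynamics: a simple linear (leaky) filter of the current
    v += dt / tau_m * (-(v - v_rest) + 20.0 * I[k])
    # threshold relaxes back toward its resting value
    th += dt / tau_th * (th0 - th)
    if v >= th:                 # dynamic-threshold crossing => spike
        spike_times.append(k * dt)
        v = v_rest              # reset the membrane
        th += dth               # adaptation: raise the threshold

print(f"{len(spike_times)} spikes in {T:.0f} ms")
```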
Abstract:
We establish a fundamental equivalence between singular value decomposition (SVD) and functional principal components analysis (FPCA) models. This constructive relationship allows the numerical efficiency of SVD to be deployed to fully estimate the components of FPCA, even for extremely high-dimensional functional objects, such as brain images. As an example, a functional mixed effects model is fitted to high-resolution morphometric (RAVENS) images. The main directions of morphometric variation in brain volumes are identified and discussed.
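The computational shortcut implied by this equivalence, estimating FPCA components from an ordinary SVD of the centered data matrix, can be illustrated as follows. The curves are synthetic; the sketch is not the authors' implementation and omits smoothing and the mixed-effects structure.

```python
# Minimal sketch of the SVD-FPCA link for densely sampled functional data
# (synthetic curves, not the RAVENS images): the right singular vectors of the
# centered data matrix estimate the principal component functions, and the
# squared singular values give the corresponding eigenvalue estimates.
import numpy as np

rng = np.random.default_rng(2)
n_subj, n_grid = 50, 200
t = np.linspace(0, 1, n_grid)
# hypothetical functional observations: two smooth components plus noise
X = (rng.standard_normal((n_subj, 1)) * np.sin(2 * np.pi * t)
     + 0.5 * rng.standard_normal((n_subj, 1)) * np.cos(4 * np.pi * t)
     + 0.1 * rng.standard_normal((n_subj, n_grid)))

Xc = X - X.mean(axis=0)                 # pointwise mean removal
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

eigenfunctions = Vt                      # rows: estimated principal component functions
eigenvalues = s**2 / (n_subj - 1)        # estimated FPCA eigenvalues
scores = U * s                           # subject-specific component scores

explained = eigenvalues[:2].sum() / eigenvalues.sum()
print(f"variance explained by first two components: {explained:.2%}")
```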
Abstract:
Nanotechnology in its widest sense seeks to exploit the special biophysical and chemical properties of materials at the nanoscale. While the potential technological, diagnostic or therapeutic applications are promising, there is a growing body of evidence that the special technological features of nanoparticulate material are associated with biological effects formerly not attributed to the same materials at a larger particle scale. Therefore, studies that address the potential hazards of nanoparticles on biological systems, including human health, are required. Due to its large surface area, the lung is one of the major sites of interaction with inhaled nanoparticles. One of the great challenges of studying particle-lung interactions is the microscopic visualization of nanoparticles within tissues or single cells both in vivo and in vitro. Once a certain type of nanoparticle can be identified unambiguously using microscopic methods, it is desirable to quantify the particle distribution within a cell, an organ or the whole organism. Transmission electron microscopy provides an ideal tool to perform qualitative and quantitative analyses of particle-related structural changes of the respiratory tract, to reveal the localization of nanoparticles within tissues and cells and to investigate the 3D nature of nanoparticle-lung interactions. This article provides information on the applicability, advantages and disadvantages of electron microscopic preparation techniques and several advanced transmission electron microscopic methods, including conventional, immuno- and energy-filtered electron microscopy as well as electron tomography, for the visualization of both model nanoparticles (e.g. polystyrene) and technologically relevant nanoparticles (e.g. titanium dioxide). Furthermore, we highlight possibilities to combine light and electron microscopic techniques in a correlative approach. Finally, we demonstrate a formal quantitative, i.e. stereological, approach to analyze the distributions of nanoparticles in tissues and cells. This comprehensive article aims to provide a basis for scientists in nanoparticle research to integrate electron microscopic analyses into their study design and to select the appropriate microscopic strategy.
Abstract:
We evaluated 4 men who had benign paroxysmal positional vertigo (BPPV) that occurred several hours after intensive mountain biking but without head trauma. The positional maneuvers in the planes of the posterior and horizontal canals elicited BPPV, as well as transitory nystagmus. This was attributed to both the posterior and horizontal semicircular canals (SCCs) on the left side in 1 patient, to these 2 SCCs on the right side in another patient, and to the right posterior SCC in the other 2 patients. The symptoms disappeared after physiotherapeutic maneuvers in 2 patients and spontaneously in the other 2 patients. Cross-country or downhill mountain biking generates frequent vibratory impacts, which are only partially filtered by the suspension fork and the upper parts of the body. Biomechanically, during a moderate jump, before landing, the head is subjected to an acceleration close to −1 g, and during impact it is subjected to an upward acceleration of more than 2 g. Repeated acceleration-deceleration events during intensive off-road biking might generate displacement and/or dislocation of otoconia from the otolithic organs, inducing the typical symptoms of BPPV. This new cause of posttraumatic BPPV should be considered an injury of minor severity attributable to the practice of mountain biking.
Abstract:
Light-frame wood buildings are widely built in the United States (U.S.). Natural hazards cause huge losses to light-frame wood construction. This study proposes methodologies and a framework to evaluate the performance and risk of light-frame wood construction. Performance-based engineering (PBE) aims to ensure that a building achieves the desired performance objectives when subjected to hazard loads. In this study, the collapse risk of a typical one-story light-frame wood building is determined using the Incremental Dynamic Analysis method. The collapse risks of buildings at four sites in the Eastern, Western, and Central regions of the U.S. are evaluated. Various sources of uncertainty are considered in the collapse risk assessment so that the influence of uncertainties on the collapse risk of light-frame wood construction is evaluated. The collapse risks of the same building subjected to maximum considered earthquakes in different seismic zones are found to be non-uniform. In certain areas of the U.S., snow accumulation is significant, causes huge economic losses, and threatens life safety. Limited study has been performed to investigate the snow hazard when combined with a seismic hazard. A Filtered Poisson Process (FPP) model is developed in this study, overcoming the shortcomings of the typically used Bernoulli model. The FPP model is validated by comparing the simulation results to weather records obtained from the National Climatic Data Center. The FPP model is applied in the proposed framework to assess the risk of a light-frame wood building subjected to combined snow and earthquake loads. The snow accumulation has a significant influence on the seismic losses of the building. The Bernoulli snow model underestimates the seismic loss of buildings in areas with snow accumulation. An object-oriented framework is proposed in this study to perform risk assessment for light-frame wood construction. For home owners and stakeholders, risk expressed in terms of economic losses is much easier to understand than engineering parameters (e.g., inter-story drift). The proposed framework is used in two applications. One is to assess the loss of the building subjected to mainshock-aftershock sequences. Aftershock and downtime costs are found to be important factors in the assessment of seismic losses. The framework is also applied to a wood building in the state of Washington to assess the loss of the building subjected to combined earthquake and snow loads. The proposed framework is proven to be an appropriate tool for risk assessment of buildings subjected to multiple hazards. Limitations and future work are also identified.
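A filtered (shot-noise) Poisson process of the general kind mentioned above can be simulated in a few lines. The sketch below uses assumed storm rates, deposit sizes, and decay constants purely for illustration; it is not the calibrated FPP model of the study.

```python
# Minimal sketch of a filtered (shot-noise) Poisson process as a stand-in for
# the snow-hazard model described above: storms arrive as a Poisson process,
# each deposits a random snow load, and the accumulated load decays between
# events. All rates and magnitudes are assumed, not taken from the study.
import numpy as np

rng = np.random.default_rng(3)
season_days = 150          # length of one snow season (days), assumed
storm_rate = 0.1           # storms per day (Poisson rate), assumed
mean_deposit = 5.0         # mean snow load per storm (psf), assumed
decay = 0.03               # per-day exponential decay (melt/compaction), assumed

def simulate_season():
    # storm arrival times from a homogeneous Poisson process
    n_storms = rng.poisson(storm_rate * season_days)
    times = np.sort(rng.uniform(0, season_days, n_storms))
    deposits = rng.exponential(mean_deposit, n_storms)
    # filtered Poisson process: sum of exponentially decaying pulses
    t_grid = np.linspace(0, season_days, 1501)
    load = np.zeros_like(t_grid)
    for t0, d in zip(times, deposits):
        load += d * np.exp(-decay * (t_grid - t0)) * (t_grid >= t0)
    return load.max()

annual_max = [simulate_season() for _ in range(2000)]
print(f"simulated mean annual-maximum snow load: {np.mean(annual_max):.1f} psf")
```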
Abstract:
In recent years, growing attention has been devoted to the use of lignocellulosic biomass as a feedstock to produce renewable carbohydrates as a source of energy products, including liquid alternatives to fossil fuels. The benefits of developing woody biomass-to-ethanol technology are to increase long-term national energy security, reduce fossil energy consumption, lower greenhouse gas emissions, use renewable rather than depletable resources, and create local jobs. Currently, research is driven by the need to reduce the cost of biomass-ethanol production. One of the preferred methods is to thermochemically pretreat the biomass material and subsequently, enzymatically hydrolyze the pretreated material to fermentable sugars that can then be converted to ethanol using specialized microorganisms. The goals of pretreatment are to remove the hemicellulose fraction from other biomass components, reduce bioconversion time, enhance enzymatic conversion of the cellulose fraction, and, hopefully, obtain a higher ethanol yield. The primary goal of this research is to obtain detailed kinetic data for dilute acid hydrolysis of several timber species from the Upper Peninsula of Michigan and of switchgrass. These results will be used to identify optimum reaction conditions that maximize production of fermentable sugars and minimize production of non-fermentable byproducts. The structural carbohydrate analysis of the biomass species used in this project was performed using the procedure proposed by the National Renewable Energy Laboratory (NREL). Subsequently, dilute acid-catalyzed hydrolysis of biomass, including aspen, basswood, balsam, red maple, and switchgrass, was studied at various temperatures, acid concentrations, and particle sizes in a 1-L well-mixed batch reactor (Parr Instruments, Model 4571). 25 g of biomass and 500 mL of dilute acid solution were added to a 1-L glass liner, which was then placed in the reactor. During each experiment, 5 mL samples were taken starting at 100°C at 3 min intervals until the targeted temperature (160, 175, or 190°C) was reached, followed by 4 samples after achieving the desired temperature. The collected samples were immediately cooled in an ice bath to stop the reaction. The cooled samples were filtered using a 0.2 μm MILLIPORE membrane filter to remove suspended solids. The filtered samples were then analyzed using High Performance Liquid Chromatography (HPLC) with a Bio-Rad Aminex HPX-87P column and refractive index detection to measure monomeric and polymeric sugars plus degradation byproducts. A first-order reaction model was assumed, and kinetic parameters such as the activation energy and pre-exponential factor of the Arrhenius equation were obtained by matching the model to the experimental data. The reaction temperature increased approximately linearly over the roughly 40-minute heat-up period of each experiment. Xylose and other sugars were formed from hemicellulose hydrolysis over this heat-up period until a maximum concentration was reached near the time the targeted temperature was attained. However, negligible amounts of xylose byproducts and only small concentrations of other soluble sugars, such as mannose, arabinose, and galactose, were detected during this initial heat-up period. Very little cellulose hydrolysis yielding glucose was observed during the initial heat-up period. On the other hand, later in the reaction, during the constant temperature period, xylose was degraded to furfural.
Glucose production from cellulose increased during this constant temperature period at later times in the reaction. The kinetic coefficients governing the generation of xylose from hemicellulose and the generation of furfural from xylose showed a consistent dependence on both temperature and acid concentration. However, no effect of particle size was observed. Three types of biomass were used in this project: hardwoods (aspen, basswood, and red maple), a softwood (balsam), and a herbaceous crop (switchgrass). For the xylose formation model, the activation energies and pre-exponential factors of the timber species and switchgrass were in the ranges of 49 to 180 kJ/mol and 7.5×10⁴ to 2.6×10²⁰ min⁻¹, respectively. For xylose degradation, the activation energies and pre-exponential factors ranged from 130 to 170 kJ/mol and from 6.8×10¹³ to 3.7×10¹⁷ min⁻¹, respectively. The results compare favorably with the literature values given by Ranganathan et al., 1985. Overall, up to 92% of the xylose could be generated from the dilute acid hydrolysis in this project.
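The series first-order kinetics described above (hemicellulose to xylose, xylose to furfural) with Arrhenius rate constants can be written compactly. The parameter values in the sketch below are placeholders chosen within the quoted ranges, not the fitted values.

```python
# Minimal sketch of the series first-order kinetic model implied above
# (hemicellulose -> xylose -> furfural), with Arrhenius rate constants.
# The activation energies and pre-exponential factors are placeholders
# inside the ranges quoted in the abstract, not fitted values.
import numpy as np

R = 8.314e-3          # kJ/(mol K)

def arrhenius(A, Ea, T_K):
    return A * np.exp(-Ea / (R * T_K))

# assumed illustrative parameters
A1, Ea1 = 1.0e12, 110.0    # xylose formation, 1/min and kJ/mol
A2, Ea2 = 1.0e15, 150.0    # xylose degradation to furfural
T_K = 175.0 + 273.15       # constant-temperature period, 175 degC

k1, k2 = arrhenius(A1, Ea1, T_K), arrhenius(A2, Ea2, T_K)

# analytic solution of H -k1-> X -k2-> F with H(0)=H0, X(0)=F(0)=0
H0 = 1.0                                   # normalized hemicellulose xylan
t = np.linspace(0, 60, 121)                # minutes
H = H0 * np.exp(-k1 * t)
X = H0 * k1 / (k2 - k1) * (np.exp(-k1 * t) - np.exp(-k2 * t))
print(f"peak xylose yield: {X.max():.2f} at t = {t[X.argmax()]:.0f} min")
```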
Abstract:
These investigations discuss the operational noise caused by automotive torque converters during speed ratio operation. Two specific cases of torque converter noise are studied: cavitation, and a monotonic turbine-induced noise. Cavitation occurs at or near stall, i.e., zero turbine speed. The bubbles produced by the extreme torques at low speed ratio operation may, upon collapse, cause a broadband noise that is unwanted by vehicle occupants as other portions of the vehicle drive train improve acoustically. Turbine-induced noise, which occurs at high engine torque at around 0.5 speed ratio, is a narrow-band phenomenon that is currently audible to vehicle occupants. The solution to the turbine-induced noise is known; however, this study aims to gain a better understanding of the mechanics behind this occurrence. The automated torque converter dynamometer test cell was utilized in these experiments to determine the effect of torque converter design parameters on the offset of cavitation and to employ a microwave telemetry system to directly measure pressures and structural motion on the turbine. Nearfield acoustics were used as a detection method for all phenomena while using a standardized speed ratio sweep test. Changes in filtered sound pressure levels enabled detection of cavitation desinence. This, in turn, was utilized to determine the effects of various torque converter design parameters, including diameter, torus dimensions, and pump and stator blade designs, on cavitation. The on-turbine pressures and motion measured with the microwave telemetry were used to better understand the effects of a notched trailing edge turbine blade on the turbine-induced noise.
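The detection idea, tracking changes in filtered sound pressure levels during a speed-ratio sweep, can be sketched as follows. The microphone signal, analysis band, and drop criterion below are all assumed for illustration; they are not taken from the test cell.

```python
# Minimal sketch (with a synthetic microphone signal, not test-cell data) of
# the detection idea described above: band-pass filter the nearfield acoustic
# signal and track the filtered sound pressure level across a speed-ratio
# sweep; a sustained drop in the level marks cavitation desinence.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 50_000                         # sample rate (Hz), assumed
t = np.arange(0, 10, 1 / fs)        # 10 s sweep
rng = np.random.default_rng(4)
# synthetic broadband "cavitation" noise that dies out halfway through the sweep
signal = 0.01 * rng.standard_normal(t.size)
signal[: t.size // 2] += 0.2 * rng.standard_normal(t.size // 2)

b, a = butter(4, [5_000, 20_000], btype="band", fs=fs)   # assumed analysis band
filtered = filtfilt(b, a, signal)

# sound pressure level in dB (re 20 uPa) over short windows
win = fs // 10
n_win = t.size // win
p_ref = 20e-6
spl = [20 * np.log10(np.sqrt(np.mean(filtered[i*win:(i+1)*win]**2)) / p_ref)
       for i in range(n_win)]
# crude desinence marker: first window-to-window drop larger than 6 dB
drop = int(np.argmax(np.diff(spl) < -6))
print(f"first >6 dB level drop near window {drop} of {n_win}")
```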
Abstract:
A basic approach to studying an NVH problem is to break the system down into three basic elements: source, path and receiver. While the receiver (response) and the transfer path can be measured, it is difficult to measure the source (forces) acting on the system. It becomes necessary to predict these forces to know how they influence the responses. This requires inverting the transfer path. The Singular Value Decomposition (SVD) method is used to decompose the transfer path matrix into its principal components, which is required for the inversion. The usual approach to force prediction requires rejecting the small singular values obtained during SVD by setting a threshold, as these small values dominate the inverse matrix. This choice of threshold, however, risks rejecting important singular values and thereby severely affecting force prediction. The new approach discussed in this report looks at the column space of the transfer path matrix, which is the basis for the predicted response. The response participation is an indication of how the small singular values influence the force participation. The ability to accurately reconstruct the response vector is important to establish confidence in the force vector prediction. The goal of this report is to suggest a solution that is mathematically feasible, physically meaningful, and numerically more efficient, illustrated through examples. This understanding adds new insight into the effects of the current code and how to apply the algorithms and understanding to new codes.
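A minimal sketch of the truncated-SVD inverse that the report discusses is given below, including the response-reconstruction check used to judge whether the discarded singular values mattered. The transfer path matrix and responses are random stand-ins, and the rejection threshold is an assumed relative tolerance.

```python
# Minimal sketch of the truncated-SVD inverse described above: predict operating
# forces from measured responses through a transfer-path (FRF) matrix, discarding
# singular values below a threshold so they do not dominate the pseudo-inverse.
# The matrices here are random stand-ins for measured FRFs and responses.
import numpy as np

rng = np.random.default_rng(5)
n_resp, n_force = 12, 6
H = rng.standard_normal((n_resp, n_force))      # transfer path matrix (responses x forces)
f_true = rng.standard_normal(n_force)           # "true" operating forces
x = H @ f_true + 0.01 * rng.standard_normal(n_resp)   # measured responses + noise

def truncated_svd_inverse(H, x, rel_tol=1e-2):
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    keep = s >= rel_tol * s[0]            # reject small singular values
    s_inv = np.where(keep, 1.0 / s, 0.0)
    f_hat = Vt.T @ (s_inv * (U.T @ x))    # pseudo-inverse built from kept components
    # column-space check: how well the kept components reconstruct the response
    x_hat = H @ f_hat
    return f_hat, np.linalg.norm(x - x_hat) / np.linalg.norm(x)

f_hat, resp_err = truncated_svd_inverse(H, x)
print(f"relative response reconstruction error: {resp_err:.3f}")
```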
Boron nitride nanotubes: synthesis, characterization, functionalization, and potential applications
Abstract:
Boron nitride nanotubes (BNNTs) are structurally similar to carbon nanotubes (CNTs) but exhibit completely different physical and chemical properties. Thus, BNNTs with various interesting properties may be complementary to CNTs and provide an alternative perspective useful in different applications. However, the synthesis of high-quality BNNTs is still challenging. Hence, the major goals of this research work focus on the fundamental study of synthesis, characterization, functionalization, and exploration of potential applications. In this work, we have established a new growth vapor trapping (GVT) approach to produce BNNTs of high quality and in high quantity on a Si substrate, using a conventional tube furnace. This chemical vapor deposition (CVD) approach was conducted at a growth temperature of 1200 °C. Compared to other known approaches, our GVT technique is much simpler in experimental setup and requires relatively lower growth temperatures. The as-grown BNNTs are fully characterized by scanning electron microscopy (SEM), transmission electron microscopy (TEM), electron energy loss spectroscopy (EELS), energy-filtered mapping, Raman spectroscopy, Fourier transform infrared spectroscopy (FTIR), UV-visible (UV-vis) absorption spectroscopy, etc. Following this success, the growth of BNNTs is now as convenient as growing CNTs and ZnO nanowires. Some important parameters have been identified to produce high-quality BNNTs on Si substrates. Furthermore, we have identified a series of effective catalysts for patterned growth of BNNTs at desirable or pre-defined locations. This catalytic CVD technique is based on our finding that MgO, Ni and Fe are good catalysts for the growth of BNNTs. The success of patterned growth not only explains the role of catalysts in the formation of BNNTs but will also become technologically important for future device fabrication with BNNTs. Following our success in controlled growth of BNNTs on substrates, we discovered the superhydrophobic behavior of these partially vertically aligned BNNTs. Since BNNTs are chemically inert, resistant to oxidation up to ~1000 °C, and transparent to UV-visible light, our discovery suggests that BNNTs could be useful as self-cleaning, insulating and protective coatings under rigorous chemical and thermal conditions. We have also established various approaches to functionalize BNNTs with polymeric molecules and carbon coatings. First, we showed that BNNTs can be functionalized by mPEG-DSPE (polyethylene glycol-1,2-distearoyl-sn-glycero-3-phosphoethanolamine), a bio-compatible polymer that helps disperse and dissolve BNNTs in aqueous solution. Furthermore, well-dispersed BNNTs in water can be cut from their original length of >10 µm to (>20 hrs). This success is an essential step toward implementing BNNTs in biomedical applications. On the other hand, we have also succeeded in functionalizing BNNTs with various conjugated polymers. This success enables the dispersion of BNNTs in organic solvents instead of water. Our approaches are useful for applications of BNNTs in high-strength composites. In addition, we have also functionalized BNNTs with carbon decoration. This was performed by introducing methane (CH4) gas into the growth process of the BNNTs. Graphitic carbon coatings can be deposited on the side walls of BNNTs with thicknesses ranging from 2 to 5 nm. This success can modulate the conductivity of pure BNNTs from insulating to weakly electrically conductive.
Finally, efforts were devoted to exploring the application of the wide-bandgap BNNTs in solar-blind deep-UV (DUV) photodetectors. We found that the photoelectric current generated by the DUV light was dominated by the microelectrodes of our devices. The contribution of photocurrent from the BNNTs, if any, is not significant. Implications of these preliminary experiments and potential future work are discussed.
Abstract:
This thesis develops high-performance real-time signal processing modules for direction of arrival (DOA) estimation for localization systems. It proposes highly parallel algorithms for performing subspace decomposition and polynomial rooting, which are otherwise traditionally implemented using sequential algorithms. The proposed algorithms address the emerging need for real-time localization in a wide range of applications. As the antenna array size increases, the complexity of the signal processing algorithms increases, making it increasingly difficult to satisfy real-time constraints. This thesis addresses real-time implementation by proposing parallel algorithms that maintain considerable improvement over traditional algorithms, especially for systems with a larger number of antenna array elements. Singular value decomposition (SVD) and polynomial rooting are two computationally complex steps and act as the bottleneck to achieving real-time performance. The proposed algorithms are suitable for implementation on field programmable gate arrays (FPGAs), single instruction multiple data (SIMD) hardware or application-specific integrated circuits (ASICs), which offer large numbers of processing elements that can be exploited for parallel processing. The designs proposed in this thesis are modular, easily expandable and easy to implement. First, this thesis proposes a fast-converging SVD algorithm. The proposed method reduces the number of iterations it takes to converge to the correct singular values, thus achieving closer to real-time performance. A general algorithm and a modular system design are provided, making it easy for designers to replicate and extend the design to larger matrix sizes. Moreover, the method is highly parallel, which can be exploited on the various hardware platforms mentioned earlier. A fixed-point implementation of the proposed SVD algorithm is presented. The FPGA design is pipelined to the maximum extent to increase the maximum achievable frequency of operation. The system was developed with the objective of achieving high throughput. Various modern cores available in FPGAs were used to maximize performance, and these modules are presented in detail. Finally, a parallel polynomial rooting technique based on Newton's method, applicable exclusively to root-MUSIC polynomials, is proposed. Unique characteristics of the root-MUSIC polynomial's complex dynamics were exploited to derive this polynomial rooting method. The technique exhibits parallelism and converges to the desired roots within a fixed number of iterations, making it suitable for rooting polynomials of large degree. We believe this is the first time that the complex dynamics of the root-MUSIC polynomial have been analyzed to propose an algorithm. In all, the thesis addresses two major bottlenecks in a direction of arrival estimation system by providing simple, high-throughput, parallel algorithms.
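The parallel Newton-type rooting step can be illustrated with a toy polynomial. The sketch below runs independent Newton iterations from starting points on the unit circle, which is where root-MUSIC roots of interest lie; it is not the thesis's FPGA design, and the polynomial is not built from a real noise-subspace estimate.

```python
# Minimal sketch of Newton's-method polynomial rooting run in parallel from many
# starting points on the unit circle, in the spirit of the root-MUSIC rooting
# step described above. The polynomial is a toy example, not one built from an
# actual noise-subspace estimate.
import numpy as np

def newton_roots(coeffs, n_starts=64, iters=20):
    # coeffs: highest-degree first, as in numpy.polyval
    dcoeffs = np.polyder(coeffs)
    # starting points spread on the unit circle (root-MUSIC roots of interest
    # lie close to it); each start iterates independently, hence "parallel"
    z = np.exp(2j * np.pi * np.arange(n_starts) / n_starts)
    for _ in range(iters):
        z = z - np.polyval(coeffs, z) / np.polyval(dcoeffs, z)
    return z

# toy polynomial with known roots near the unit circle
true_roots = np.array([np.exp(1j * 0.7), np.exp(-1j * 0.7),
                       0.9 * np.exp(1j * 2.0), 0.9 * np.exp(-1j * 2.0)])
coeffs = np.poly(true_roots)

z = newton_roots(coeffs)
# keep the distinct converged values (cluster by rounding)
found = np.unique(np.round(z, 6))
print("distinct converged values:", found[:8])
```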
Abstract:
The municipality of San Juan La Laguna, Guatemala is home to approximately 5,200 people and located on the western side of the Lake Atitlán caldera. Steep slopes surround all but the eastern side of San Juan. The Lake Atitlán watershed is susceptible to many natural hazards, but the most predictable are the landslides that can occur annually with each rainy season, especially during high-intensity events. Hurricane Stan hit Guatemala in October 2005; the resulting flooding and landslides devastated the Atitlán region. Locations of landslide and non-landslide points were obtained from field observations and orthophotos taken following Hurricane Stan. This study used data from multiple attributes at every landslide and non-landslide point and applied different multivariate analyses to optimize a model for landslide prediction during high-intensity precipitation events like Hurricane Stan. The attributes considered in this study are: geology, geomorphology, distance to faults and streams, land use, slope, aspect, curvature, plan curvature, profile curvature and topographic wetness index. The attributes were pre-evaluated for their ability to predict landslides using four different attribute evaluators, all available in the open-source data mining software Weka: filtered subset, information gain, gain ratio and chi-squared. Three multivariate algorithms (decision tree J48, logistic regression and BayesNet) were optimized for landslide prediction using different attributes. The following statistical parameters were used to evaluate model accuracy: precision, recall, F-measure and area under the receiver operating characteristic (ROC) curve. The BayesNet algorithm yielded the most accurate model and was used to build a probability map of landslide initiation points. The probability map developed in this study was also compared to the results of a bivariate landslide susceptibility analysis conducted for the watershed, encompassing Lake Atitlán and San Juan. Landslides from Tropical Storm Agatha in 2010 were used to independently validate this study's multivariate model and the bivariate model. The ultimate aim of this study is to share the methodology and results with municipal contacts from the author's time as a U.S. Peace Corps volunteer, to facilitate more effective future landslide hazard planning and mitigation.
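An analogous attribute-evaluation and model-comparison workflow can be sketched in scikit-learn (the study itself used Weka's evaluators and classifiers). The attributes and labels below are synthetic, mutual information stands in for the information-gain evaluator, and GaussianNB stands in for BayesNet.

```python
# Minimal sketch of an analogous workflow in scikit-learn, not the Weka analysis
# from the study: score candidate attributes, fit competing classifiers on
# landslide / non-landslide points, and compare them with precision, recall,
# F-measure and ROC AUC. The data are synthetic stand-ins for mapped attributes.
import numpy as np
from sklearn.feature_selection import mutual_info_classif   # ~ information gain
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB                   # simple Bayesian baseline
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_recall_fscore_support, roc_auc_score

rng = np.random.default_rng(6)
n = 500
X = rng.standard_normal((n, 5))          # e.g. slope, curvature, wetness index, ...
y = (X[:, 0] + 0.5 * X[:, 1] + 0.3 * rng.standard_normal(n) > 0).astype(int)

print("attribute scores:", np.round(mutual_info_classif(X, y, random_state=0), 3))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
for name, model in [("logistic", LogisticRegression()), ("naive Bayes", GaussianNB())]:
    model.fit(X_tr, y_tr)
    prob = model.predict_proba(X_te)[:, 1]
    p, r, f, _ = precision_recall_fscore_support(y_te, prob > 0.5, average="binary")
    print(f"{name}: precision={p:.2f} recall={r:.2f} F={f:.2f} "
          f"AUC={roc_auc_score(y_te, prob):.2f}")
```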
Abstract:
The widespread availability of low-cost embedded electronics makes it easier to implement smart devices that can understand either the environment or user behaviors. The main objective of this project is to design and implement portable smart electronics for home use, including a portable monitoring device for home and office security and a portable 3D mouse for convenient use. Both devices in this project use the MPU6050, which contains a 3-axis accelerometer and a 3-axis gyroscope, to sense the inertial motion of a door or of the user's hand movements. For the portable monitoring device for home and office security, the MPU6050 is used to sense door movement (either a home front door or a cabinet door) through the gyroscope, and a Raspberry Pi is then used to process the data it receives from the MPU6050. If the data value exceeds a preset threshold, the Raspberry Pi controls a USB webcam to take a picture and then sends out an alert email with the picture to the user. The advantage of this device is that it is a small, portable, stand-alone device with its own power source; it is easy to implement, very cheap for residential use, and energy efficient with instantaneous alerts. For the 3D mouse, the MPU6050 uses both the accelerometer and gyroscope to sense the user's hand movement; the data are processed by an MSP430G2553 through a digital smoothing filter and a complementary filter, and the filtered data are then passed to a personal computer through the serial COM port. By applying the cursor movement equation in the PC driver, this device works well as a mouse with acceptable accuracy. Compared to a normal optical mouse, this mouse does not need any working surface; with the use of the smoothing and complementary filters, it has adequate accuracy for normal use, and it can easily be extended to a portable mouse as small as a finger ring.
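The complementary-filter step mentioned above can be sketched as follows. The blend weight, sample rate, and simulated sensor readings are assumptions for illustration; on the actual device the inputs would come from the MPU6050 and the filter would run on the MSP430G2553.

```python
# Minimal sketch of a complementary filter: fuse the accelerometer tilt estimate
# (noisy but drift-free) with the integrated gyroscope rate (smooth but drifting).
# Sensor readings are simulated; on the device they would come from the MPU6050.
import math
import random

ALPHA = 0.98        # gyro weight, assumed
DT = 0.01           # sample period in seconds (100 Hz), assumed

def complementary_filter(angle, gyro_rate_dps, accel_x, accel_z):
    # tilt angle implied by the gravity direction from the accelerometer
    accel_angle = math.degrees(math.atan2(accel_x, accel_z))
    # blend the integrated gyro rate with the accelerometer angle
    return ALPHA * (angle + gyro_rate_dps * DT) + (1 - ALPHA) * accel_angle

angle = 0.0
prev_true = 0.0
for k in range(1, 501):
    true_angle = 20 * math.sin(0.02 * k)                 # simulated hand tilt (deg)
    gyro = (true_angle - prev_true) / DT + 1.5           # rate (deg/s) plus bias drift
    prev_true = true_angle
    # accelerometer components consistent with the tilt, plus noise
    ax = math.sin(math.radians(true_angle)) + random.gauss(0, 0.05)
    az = math.cos(math.radians(true_angle)) + random.gauss(0, 0.05)
    angle = complementary_filter(angle, gyro, ax, az)

print(f"final estimate {angle:.1f} deg vs true {true_angle:.1f} deg")
```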
Abstract:
The general model. The aim of this chapter is to introduce a structured overview of the different possibilities available to display and analyze brain electric scalp potentials. First, a general formal model of time-varying distributed EEG potentials is introduced. Based on this model, the most common analysis strategies used in EEG research are introduced and discussed as specific cases of this general model. Both the general model and the particular methods are also expressed in mathematical terms. It is, however, not necessary to understand these terms to understand the chapter. The general model that we propose here is based on the statement made in Chapter 3 that the electric field produced by active neurons in the brain propagates in brain tissue without delay in time. Contrary to other imaging methods that are based on hemodynamic or metabolic processes, the EEG scalp potentials are thus “real-time,” neither delayed nor a priori frequency-filtered measurements. If only a single dipolar source in the brain were active, the temporal dynamics of the activity of that source would be exactly reproduced by the temporal dynamics observed in the scalp potentials produced by that source. This is illustrated in Figure 5.1, where the expected EEG signal of a single source with spindle-like dynamics in time has been computed. The dynamics of the scalp potentials exactly reproduce the dynamics of the source. The amplitude of the measured potentials depends on the relation between the location and orientation of the active source, its strength, and the electrode position.
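The zero-delay, unfiltered forward model described in this passage can be made concrete with a small numerical example: a single dipolar source projects to all channels through fixed gains, so every channel reproduces the source's time course exactly (up to scale). The lead-field gains and spindle-like waveform below are invented for illustration.

```python
# Minimal sketch of the instantaneous (zero-delay) forward model underlying the
# chapter's statement: scalp potentials are a fixed linear mixture of source
# activity at every time point, so a single dipole's temporal dynamics reappear
# unchanged in the channels. The lead field and waveform are illustrative only.
import numpy as np

n_channels, n_times = 32, 500
t = np.linspace(0, 1, n_times)                       # 1 s epoch
source = np.exp(-((t - 0.5) ** 2) / 0.01) * np.sin(2 * np.pi * 12 * t)  # spindle-like

rng = np.random.default_rng(7)
lead_field = rng.standard_normal(n_channels)         # gain of the single dipole at each electrode

# scalp potentials: outer product of gains and source time course (no delay, no filtering)
eeg = np.outer(lead_field, source)

# each channel's time course is the source waveform, scaled by its gain
corr = np.corrcoef(eeg[0], source)[0, 1]
print(f"correlation of channel 0 with the source time course: {abs(corr):.3f}")
```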
Abstract:
In this paper, we investigate how a multilinear model can be used to represent human motion data. Based on technical modes (referring to degrees of freedom and number of frames) and natural modes that typically appear in the context of a motion capture session (referring to actor, style, and repetition), the motion data are encoded in the form of a high-order tensor. This tensor is then reduced using N-mode singular value decomposition. Our experiments show that the reduced model approximates the original motion better than previously introduced PCA-based approaches. Furthermore, we discuss how the tensor representation may be used as a valuable tool for the synthesis of new motions.
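A minimal sketch of an N-mode SVD (higher-order SVD) reduction of a motion-like tensor is given below. The tensor is a random low-multilinear-rank stand-in with the technical and natural modes named above, not motion capture data, and the chosen mode ranks are arbitrary.

```python
# Minimal sketch of an N-mode SVD (higher-order SVD) reduction, applied to a
# random stand-in for a motion tensor with technical modes (DOFs x frames) and
# natural modes (actor x style x repetition).
import numpy as np

def unfold(T, mode):
    # mode-n unfolding: bring `mode` to the front and flatten the rest
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd(T, ranks):
    # factor matrices: leading left singular vectors of each mode unfolding
    U = [np.linalg.svd(unfold(T, n), full_matrices=False)[0][:, :r]
         for n, r in enumerate(ranks)]
    # core tensor: project every mode onto its factor matrix
    core = T
    for n, Un in enumerate(U):
        core = np.moveaxis(np.tensordot(Un.T, np.moveaxis(core, n, 0), axes=1), 0, n)
    return core, U

def reconstruct(core, U):
    T = core
    for n, Un in enumerate(U):
        T = np.moveaxis(np.tensordot(Un, np.moveaxis(T, n, 0), axes=1), 0, n)
    return T

rng = np.random.default_rng(8)
ranks = (10, 40, 4, 3, 5)
dims = (30, 120, 4, 3, 5)          # DOFs x frames x actor x style x repetition
# build a stand-in tensor that genuinely has low multilinear rank, plus noise
factors = [rng.standard_normal((d, r)) for d, r in zip(dims, ranks)]
motion = reconstruct(rng.standard_normal(ranks), factors)
motion += 0.01 * np.linalg.norm(motion) / np.sqrt(motion.size) * rng.standard_normal(dims)

core, U = hosvd(motion, ranks)
approx = reconstruct(core, U)
err = np.linalg.norm(motion - approx) / np.linalg.norm(motion)
print(f"relative reconstruction error after N-mode reduction: {err:.4f}")
```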