Abstract:
The design of a non-traditional cam and roller-follower mechanism is described here. In this mechanism, the roller-crank rather than the cam is used as the continuous input member; both members complete a full rotation in each cycle and remain in contact throughout. It is noted that, in order for the cam to rotate fully for every full rotation of the roller-crank, the cam cannot have a closed profile; instead, the roller traverses the open cam profile twice in each cycle. Using kinematic analysis, the angular velocity of the cam when the roller traverses the cam profile in one direction is related to the angular velocity of the cam when the roller retraces its path in the other direction. Thus, one can specify an arbitrary function relating the motion of the cam to the motion of the roller-crank in the angular-velocity space for only 180 degrees of rotation; the motion of the cam in the remaining portion is then automatically determined. In specifying this arbitrary motion, many desirable characteristics, such as multiple dwells and low acceleration and jerk, can be obtained. Useful design equations are derived for this purpose. Using the kinematic inversion technique, the cam profile is readily obtained once the motion is specified in the angular-velocity space. The only limitation on the arbitrary motion specification is ensuring that the transmission angle never becomes too low, so that force is transmitted efficiently from the roller to the cam. This is addressed by incorporating a transmission index into the motion specification during synthesis. Consequently, the method allows any arbitrary motion to be specified within a permissible zone, such that the transmission index remains above a specified minimum value. Single-dwell, double-dwell, and long-hesitation motions are used as examples to demonstrate the effectiveness of the design method. Force closure using an optimally located spring and quasi-kinetostatic analysis are also discussed. (C) 2001 Elsevier Science Ltd. All rights reserved.
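As an illustrative sketch of the kinematic-inversion step (the geometry, the motion function phi(theta), and all parameter values below are assumptions for demonstration, not taken from the paper):

```python
import numpy as np

# Hypothetical geometry: cam pivot at the origin, roller-crank pivot at O2,
# crank length r, roller radius r_roller (all values assumed).
O2 = np.array([60.0, 0.0])   # mm, fixed pivot of the roller-crank
r = 25.0                     # mm, crank length
r_roller = 5.0               # mm, roller radius

def phi(theta):
    """Assumed cam-angle function phi(theta): a smooth full rotation of the
    cam per crank revolution. A real design would specify this in the
    angular-velocity space, subject to the transmission-index constraint."""
    return theta + 0.2 * np.sin(2.0 * theta)

# Kinematic inversion: hold the cam fixed and rotate the roller centre
# about the cam pivot by -phi(theta); the locus traced is the pitch curve.
theta = np.linspace(0.0, 2.0 * np.pi, 721)
roller_centre = O2 + r * np.column_stack([np.cos(theta), np.sin(theta)])

c, s = np.cos(-phi(theta)), np.sin(-phi(theta))
pitch_curve = np.column_stack([
    c * roller_centre[:, 0] - s * roller_centre[:, 1],
    s * roller_centre[:, 0] + c * roller_centre[:, 1],
])
# The working cam profile is offset inward from the pitch curve by
# r_roller along the curve normal; that step is omitted here for brevity.
print(pitch_curve[:3])
```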
Abstract:
Early diagnosis of disease is important because therapeutic intervention is most successful before the disease spreads through the subject. Blood tests may be the best health-screening method: blood contains thousands of biomolecules arising as by-products from the diseased part of the organism, and the approach is non-invasive. The major limitation of this approach is that the analytes must be detected at very low concentrations. Raman spectroscopy has proven to be a cutting-edge technique in histology, cytology, and clinical chemistry. Its primary obstacle is low signal intensity. One promising approach to overcoming this is surface-enhanced Raman spectroscopy (SERS), which has opened novel opportunities for chemical and biomedical analytics. Albumin, produced by the liver, is one of the most abundant proteins in blood. The state of albumin in serum indicates the health of the liver and kidneys. Serum albumin helps transport many small molecules, such as fatty acids, bilirubin, calcium, and drugs, through the blood. In this study, SERS is used to quantify serum albumin and to understand its binding mechanism.
Abstract:
Different medium access control (MAC) layer protocols, for example the IEEE 802.11 series, are used in wireless local area networks. They have limitations in handling bulk data-transfer applications such as video-on-demand and videoconferencing. To address this problem, cooperative MAC protocol environments have been introduced, which enable a node's MAC protocol to use the MAC protocols of nearby nodes as and when required. On various occasions, such cooperative MAC protocols establish cooperative transmissions to deliver the specified data to the destination. In this paper we propose the cooperative MAC priority (CoopMACPri) protocol, which exploits the priority values assigned by the upper layers to select different paths to nodes running heterogeneous applications in a wireless ad hoc network environment. The CoopMACPri protocol improves system throughput and minimizes energy consumption. We developed a Markov chain model to analyse the performance of the CoopMACPri protocol and derived closed-form expressions for saturated system throughput and energy consumption. Performance evaluations validate the accuracy of the theoretical analysis and show that the performance of the CoopMACPri protocol varies with the number of nodes. The simulation results and analysis reflect the effectiveness of the proposed protocol as per the specifications.
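As a generic illustration of this style of analysis (the states and probabilities below are placeholders, not CoopMACPri's actual model), the stationary distribution of a Markov chain can be obtained from its transition matrix:

```python
import numpy as np

# Hypothetical 3-state transition matrix (idle, contend, transmit); the
# states and probabilities are illustrative placeholders only.
P = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3],
              [0.4, 0.1, 0.5]])

# The stationary distribution pi solves pi P = pi with sum(pi) = 1,
# i.e. the left eigenvector of P for eigenvalue 1.
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmax(np.real(w))])
pi /= pi.sum()
print(pi)   # long-run fraction of time spent in each state

# In the usual Bianchi-style MAC analysis, saturated throughput then
# follows from the time spent in the 'transmit' state, the payload size,
# and the expected slot duration.
```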
Abstract:
The occurrence of spurious solutions is a well-known limitation of the standard nodal finite element method when applied to electromagnetic problems. Two commonly used remedies for this problem are (i) the addition of a penalty term, with the penalty factor based on the local dielectric constant, which reduces to a Helmholtz form on homogeneous domains (the regularized formulation); and (ii) a formulation based on a vector and a scalar potential. Both strategies have shortcomings: the penalty method does not completely eliminate the spurious modes, and neither method can predict singular eigenvalues in non-convex domains. Both methods also predict some non-zero spurious eigenvalues on non-convex domains. In this work, we develop mixed finite element formulations which predict the eigenfrequencies (including their multiplicities) accurately, even for non-convex domains. The main feature of the proposed mixed finite element formulation is that no ad hoc terms are added, as in the penalty formulation; the improvement is achieved purely by an appropriate choice of finite element spaces for the different variables. We show that the formulation works even for inhomogeneous domains, where "double noding" is used to enforce the appropriate continuity requirements at an interface. For two-dimensional problems the shape of the domain can be arbitrary, while for three-dimensional problems our current formulation can model only regular domains (which can be non-convex). Since the eigenfrequencies are modeled accurately, these elements also yield accurate results for driven problems. (C) 2014 Elsevier Ltd. All rights reserved.
Abstract:
Resonant sensors and crystal oscillators for mass detection need to be excited at very high natural frequencies (MHz). Using such systems to measure the mass of biological materials compromises measurement accuracy because of the materials' viscous and/or viscoelastic properties; the limitation of such sensor systems is the difficulty of accounting for the "missing mass" of the biological specimen in question. In this work, a sensor system has been developed that operates in the stiffness-controlled region, at frequencies much lower than its fundamental natural frequency. The reduction in sensitivity that results from this non-resonant mode of operation is compensated by the sensor's high resolution. The masses of Drosophila melanogaster (fruit fly) specimens of different ages are measured, and the discrepancy in the mass measured during resonant operation is also presented. That viscosity effects do not affect the working of this non-resonant mass sensor is clearly established by direct comparison. (C) 2014 AIP Publishing LLC.
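Why viscous effects drop out at low excitation frequency can be seen from the standard forced spring-mass-damper response magnitude (a generic sketch; all parameter values are assumed):

```python
import numpy as np

k = 100.0              # N/m, assumed stiffness
m = 1e-3               # kg, assumed effective mass
wn = np.sqrt(k / m)    # fundamental natural frequency (rad/s)

def amplitude(w, c):
    """|X/F| for a forced spring-mass-damper:
    1 / sqrt((k - m w^2)^2 + (c w)^2)."""
    return 1.0 / np.sqrt((k - m * w**2) ** 2 + (c * w) ** 2)

w_low = 0.01 * wn            # stiffness-controlled region, w << wn
for c in (0.0, 0.1, 1.0):    # widely varying damping (N s/m)
    print(c, amplitude(w_low, c))

# The printed amplitudes are nearly identical (~1/k): in the
# stiffness-controlled region the response is insensitive to damping,
# which is why viscosity does not corrupt the measurement.
```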
Abstract:
A critical limitation that has hampered widespread application of electrically conducting reduced graphene oxide (r-GO) is its poor aqueous dispersibility. Here we outline a strategy to obtain water-dispersible conducting r-GO sheets, free of any stabilizing agents, by exploiting the fact that the kinetics of the photoreduction of the insulating GO is heterogeneous. We show that by controlling UV exposure times and pH, we can obtain r-GO sheets with the conducting sp2-graphitic domains restored but with the more acidic carboxylic groups, responsible for aqueous dispersibility, intact. The resultant photoreduced r-GO sheets are both conducting and water-dispersible.
Abstract:
Accumulating evidence suggests that the deposition of neurotoxic α-synuclein aggregates in the brain during the development of neurodegenerative diseases such as Parkinson's disease can be curbed by anti-aggregation strategies that either disrupt or eliminate toxic aggregates. Curcumin, a dietary polyphenol, exhibits anti-amyloid activity, but its use is limited owing to its instability. Since chemical modification of curcumin can overcome this limitation, intensive efforts are being made to discover molecules with similar but enhanced stability and superior properties. This study focuses on the inhibitory effect of two stable analogs of curcumin, viz. curcumin pyrazole and curcumin isoxazole, and their derivatives against α-synuclein aggregation, fibrillization and toxicity. Employing biochemical, biophysical and cell-based assays, we discovered that curcumin pyrazole (3) and its derivative N-(3-Nitrophenylpyrazole) curcumin (15) exhibit remarkable potency not only in arresting fibrillization and disrupting preformed fibrils but also in preventing formation of the A11 conformation of the protein, which imparts toxic effects. Compounds 3 and 15 also decreased the neurotoxicity associated with the fast-aggregating A53T mutant form of α-synuclein. These two analogues of curcumin may therefore be useful therapeutic inhibitors for the treatment of α-synuclein amyloidosis and toxicity in Parkinson's disease and other synucleinopathies.
Abstract:
X-ray photoelectron spectroscopy (XPS) plays a central role in the investigation of electronic properties as well as in the compositional analysis of almost every conceivable material. However, a very short inelastic mean free path (IMFP) and the limited photon flux available under standard laboratory conditions render this technique highly surface-sensitive. Thus, the electronic structure buried below several layers of a heterogeneous sample is not accessible with the usual photoemission techniques. An obvious way to overcome this limitation is to use a considerably higher-energy photon source, as this increases the IMFP of the photo-ejected electron, thereby making the technique more depth- and bulk-sensitive. Owing to this advantage, hard X-ray photoelectron spectroscopy (HAXPES) is rapidly becoming an extremely powerful tool for the chemical, elemental, compositional and electronic characterization of bulk systems, particularly systems characterized by buried interfaces and other types of chemical heterogeneity. The relevance of such an investigative tool becomes evident given the ever-increasing importance of heterostructures and interfaces in a wide range of device applications, spanning electronic, magnetic, optical and energy applications. Interest in this nondestructive, element-specific HAXPES technique has grown rapidly in the past few years; we critically discuss its extensive use in the study of the depth-resolved electronic properties of nanocrystals, multilayer superlattices and buried interfaces, revealing their internal structures. We specifically present a comparative discussion, with examples, of the two most commonly used methods to determine the internal structures of heterostructured systems using XPS. (C) 2015 Elsevier B.V. All rights reserved.
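The depth-sensitivity argument can be made quantitative with the standard exponential attenuation of photoelectron signal with depth (a sketch; the IMFP values below are rough illustrative assumptions, not measured values):

```python
import numpy as np

def fraction_from_depth(d, imfp, theta_deg=0.0):
    """Fraction of detected photoelectron signal originating from depths
    greater than d, for inelastic mean free path `imfp` and emission angle
    theta from the surface normal: exp(-d / (imfp * cos(theta)))."""
    return np.exp(-d / (imfp * np.cos(np.radians(theta_deg))))

# Illustrative IMFPs: roughly 1-2 nm for conventional lab XPS and several
# nm at hard-X-ray energies; exact values are material-dependent.
for label, imfp in (("XPS, ~1.5 keV", 1.5), ("HAXPES, ~8 keV", 8.0)):
    print(label, "signal from below 5 nm:",
          round(100 * fraction_from_depth(5.0, imfp), 1), "%")

# HAXPES retains a sizeable contribution from buried layers that is
# essentially invisible to conventional XPS, hence its bulk sensitivity.
```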
Abstract:
Affine transformations have proven to be very powerful for loop restructuring due to their ability to model a very wide range of transformations: a single multi-dimensional affine function can represent a long and complex sequence of simpler transformations. Existing affine transformation frameworks like the Pluto algorithm, which include a cost function for modern multicore architectures where coarse-grained parallelism and locality are crucial, consider only a subspace of transformations to avoid a combinatorial explosion when finding transformations. The ensuing practical tradeoffs lead to the exclusion of certain useful transformations, in particular transformation compositions involving loop reversals and loop skewing by negative factors. In this paper, we propose an approach that addresses this limitation by modeling a much larger space of affine transformations in conjunction with the Pluto algorithm's cost function. We experimentally evaluate both the effect on compilation time and the performance of the generated code. The evaluation shows that our new framework, Pluto+, causes no performance degradation on any of the Polybench benchmarks. For Lattice Boltzmann Method (LBM) codes with periodic boundary conditions, it provides a mean speedup of 1.33x over Pluto. We also show that Pluto+ does not increase compile times significantly: on Polybench, it increases overall polyhedral source-to-source optimization time by only 15%, and in cases where it improves execution time significantly, it increases polyhedral optimization time by only 2.04x.
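To see why skewing by a negative factor matters, consider a toy loop nest with dependence vector (1, 1): the skew j' = j - i (factor -1) turns it into (1, 0), leaving the inner loop free of carried dependences. A minimal sketch (illustrative only, not Pluto+'s actual transformation modeling):

```python
import numpy as np

N = 64
ref = np.zeros((N, N))
out = np.zeros((N, N))

# Original nest: B[i][j] = B[i-1][j-1] + 1, dependence vector (1, 1).
for i in range(1, N):
    for j in range(1, N):
        ref[i, j] = ref[i - 1, j - 1] + 1

# Transformed nest under (i, j) -> (i, j - i), a skew by factor -1.
# The dependence becomes (1, 0): iterations of the inner jp loop are
# mutually independent and could run in parallel.
for i in range(1, N):
    for jp in range(1 - i, N - i):
        j = jp + i                # recover the original column index
        if 1 <= j < N:
            out[i, j] = out[i - 1, j - 1] + 1

assert np.array_equal(ref, out)
print("transformed nest computes the same result")
```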
Abstract:
Complex-systems-inspired analysis suggests the hypothesis that financial meltdowns are abrupt critical transitions that occur when the system reaches a tipping point. Theoretical and empirical studies of climatic and ecological dynamical systems have shown that the approach to a tipping point is preceded by a generic phenomenon called critical slowing down, i.e. an increasingly slow response of the system to perturbations. It has therefore been suggested that critical slowing down may be used as an early warning signal of imminent critical transitions. Whether financial markets exhibit critical slowing down prior to meltdowns remains unclear. Here, our analysis reveals that three major US markets (Dow Jones Index, S&P 500 and NASDAQ) and two European markets (DAX and FTSE) did not exhibit critical slowing down prior to major financial crashes over the last century. However, all markets showed strong trends of rising variability, quantified by time-series variance and the spectral function at low frequencies, prior to crashes. These results suggest that financial crashes are not critical transitions that occur in the vicinity of a tipping point. Using a simple model, we argue that financial crashes are likely to be stochastic transitions, which can occur even when the system is far from the tipping point. Specifically, we show that a gradually increasing strength of stochastic perturbations may have caused abrupt transitions in the financial markets. Broadly, our results highlight the importance of stochastically driven abrupt transitions in real-world scenarios. Our study offers rising variability as a precursor of financial meltdowns, albeit with the limitation that it may signal false alarms.
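A minimal sketch of the two indicators involved, rolling variance (the precursor this study supports) and lag-1 autocorrelation (the critical-slowing-down signal it does not find), computed over a sliding window on synthetic data (the window size and data are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-in for a detrended return series whose noise level
# gradually rises, mimicking increasing perturbation strength.
x = rng.normal(0.0, 1.0 + np.linspace(0.0, 1.0, 2000), size=2000)

window = 250  # roughly one trading year; an arbitrary choice

def rolling_indicators(series, w):
    """Rolling variance and lag-1 autocorrelation over windows of size w."""
    var, ac1 = [], []
    for t in range(w, len(series)):
        seg = series[t - w:t]
        var.append(np.var(seg))
        ac1.append(np.corrcoef(seg[:-1], seg[1:])[0, 1])
    return np.array(var), np.array(ac1)

var, ac1 = rolling_indicators(x, window)
print("variance trend:", var[0], "->", var[-1])   # rises with noise level
print("lag-1 autocorr:", ac1[0], "->", ac1[-1])   # stays near zero here
```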
Abstract:
Clock synchronization is highly desirable in distributed systems, including many applications in the Internet of Things and Humans (IoTH). It improves the efficiency, modularity, and scalability of the system, and optimizes the use of event triggers. For IoTH, Bluetooth Low Energy (BLE) - a subset of the recent Bluetooth v4.0 stack - provides a low-power and loosely coupled mechanism for sensor data collection with ubiquitous units (e.g., smartphones and tablets) carried by humans. This fundamental design paradigm of BLE is enabled by a range of broadcast advertising modes. While its operational benefits are numerous, the lack of a common time reference in the broadcast mode of BLE has been a fundamental limitation. This article presents and describes CheepSync, a time synchronization service for BLE advertisers, especially tailored for applications requiring high time precision on resource-constrained BLE platforms. Designed on top of the existing Bluetooth v4.0 standard, the CheepSync framework utilizes low-level timestamping and comprehensive error compensation mechanisms to overcome uncertainties in message transmission, clock drift, and other system-specific constraints. CheepSync was implemented on custom-designed nRF24Cheep beacon platforms (as broadcasters) and commercial off-the-shelf Android smartphones (as passive listeners). We demonstrate the efficacy of CheepSync through numerous empirical evaluations in a variety of experimental setups, and show that its average (single-hop) time synchronization accuracy is in the 10 μs range.
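A generic sketch of the underlying idea, estimating a listener's clock offset and drift from timestamped broadcasts by least-squares regression (this is the standard technique, not necessarily CheepSync's exact compensation pipeline; all numbers are synthetic):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic (sender_ts, receiver_ts) pairs in seconds: the receiver clock
# runs 40 ppm fast with a 2.5 ms initial offset, plus timestamping jitter.
t_send = np.arange(0.0, 60.0, 1.0)
drift, offset = 40e-6, 2.5e-3
t_recv = (1.0 + drift) * t_send + offset + rng.normal(0, 5e-6, t_send.size)

# Least-squares fit t_recv = a * t_send + b: (a - 1) estimates the drift,
# b the offset.
a, b = np.polyfit(t_send, t_recv, 1)
print(f"estimated drift: {(a - 1) * 1e6:.1f} ppm, offset: {b * 1e3:.3f} ms")

def to_sender_time(local_ts):
    """Map a local receiver timestamp back onto the sender's timescale."""
    return (local_ts - b) / a

print(to_sender_time(t_recv[-1]), "~", t_send[-1])
```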
Abstract:
Streaming data arrives continuously as an ordered sequence, creating massive amounts of data. A big challenge in handling data streams is the limited time and space available. Prototype selection on streaming data requires the prototypes to be updated incrementally as new data arrives. We propose an incremental algorithm for prototype selection, which can also be used to handle very large datasets. Results are presented on a number of large datasets, and our method is compared with an existing algorithm for streaming data. Our algorithm saves time, and the selected prototypes give good classification accuracy.
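One simple incremental rule of this kind is leader-style condensation: an arriving point is kept as a prototype only if no existing prototype lies within a distance threshold. A sketch of the setting (not necessarily the authors' exact algorithm):

```python
import numpy as np

def select_prototypes(stream, threshold):
    """Single-pass prototype selection: each arriving point is kept as a
    prototype iff its nearest current prototype is farther than `threshold`.
    Memory is bounded by the number of prototypes, not the stream length."""
    prototypes = []
    for x in stream:
        if not prototypes or min(
                np.linalg.norm(x - p) for p in prototypes) > threshold:
            prototypes.append(x)
    return prototypes

rng = np.random.default_rng(2)
stream = rng.normal(size=(10_000, 4))   # stand-in for an unbounded stream
protos = select_prototypes(stream, threshold=1.5)
print(len(protos), "prototypes retained from", len(stream), "points")
# A nearest-prototype classifier would then label queries using only
# this condensed set.
```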
Abstract:
Up to now, high-resolution mapping of surface water extent from satellites has been available for only a few regions, over limited time periods. Extending the temporal and spatial coverage has been difficult, due to the limitations of the remote sensing techniques [e.g., the interaction of radiation with vegetation or clouds for visible observations, or the temporal sampling of synthetic aperture radar (SAR)]. The advantages and limitations of the various satellite techniques are reviewed. The need for a global and consistent estimate of water surfaces over long time periods triggered the development of a multi-satellite methodology to obtain consistent surface water estimates all over the globe, regardless of the environment. The Global Inundation Extent from Multi-satellites (GIEMS) combines the complementary strengths of satellite observations from the visible to the microwave to produce a low-resolution monthly dataset of surface water extent and dynamics. Downscaling algorithms are now developed and applied to GIEMS, using high-spatial-resolution information from visible, near-infrared, and SAR satellite images, or from digital elevation models. Preliminary products are available down to 500 m spatial resolution. This work bridges the gaps and prepares for the future NASA/CNES Surface Water Ocean Topography (SWOT) mission, to be launched in 2020. SWOT will delineate surface water extents and their water storage with unprecedented spatial resolution and accuracy, thanks to a SAR operating in interferometric mode. When available, the SWOT data will be used to downscale GIEMS, producing a long time series of water surfaces at the global scale, consistent with the SWOT observations.
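One simple elevation-based downscaling rule of the kind mentioned: distribute a coarse cell's fractional water extent to its high-resolution DEM pixels from lowest to highest elevation (a sketch of the general idea; the operational GIEMS downscaling algorithms are more elaborate):

```python
import numpy as np

def downscale_cell(dem_cell, water_fraction):
    """Flag the lowest-lying `water_fraction` of DEM pixels in a coarse
    cell as inundated, returning a boolean high-resolution water mask."""
    flat = dem_cell.ravel()
    n_wet = int(round(water_fraction * flat.size))
    wet = np.zeros(flat.size, dtype=bool)
    wet[np.argsort(flat)[:n_wet]] = True   # lowest elevations flood first
    return wet.reshape(dem_cell.shape)

rng = np.random.default_rng(3)
dem = rng.normal(100.0, 5.0, size=(50, 50))      # synthetic DEM pixels (m)
mask = downscale_cell(dem, water_fraction=0.18)  # e.g. 18% wet in the cell
print(mask.mean())                               # ~0.18 of pixels flagged
```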
Abstract:
In the exploration of future cathode materials for sodium-ion batteries, the alluaudite class Na2Fe(II)2(SO4)3 was recently unveiled as a 3.8 V positive insertion candidate (Barpanda et al., Nat. Commun. 2014, 5, 4358). It forms an Fe-based polyanionic compound delivering the highest Fe redox potential along with excellent rate kinetics and reversibility. However, like all known SO4-based insertion materials, its synthesis is cumbersome, warranting careful processing that avoids any aqueous exposure. Here, an alternative low-temperature ionothermal synthesis is described to produce the alluaudite Na2+2xFe(II)2-x(SO4)3. It marks the first demonstration of solvothermal synthesis of the alluaudite Na2+2xM(II)2-x(SO4)3 (M = 3d metals) family of cathodes. Unlike the classical solid-state route, this solvothermal route favors sustainable synthesis of homogeneous nanostructured alluaudite products at only 300 degrees C, the lowest temperature reported to date. The current work reports the synthetic aspects of pristine and modified ionothermal synthesis of Na2+2xFe(II)2-x(SO4)3 with tunable size (300 nm to 5 μm) and morphology. The material shows antiferromagnetic ordering below 12 K. A reversible capacity in excess of 80 mAh/g was obtained, with good rate kinetics and cycling stability over 50 cycles. Using a synergistic approach combining experimental and ab initio DFT analysis, the structural, magnetic, electronic, and electrochemical properties, as well as the structural limitation preventing extraction of the full capacity, are described.
Abstract:
We propose a completely automatic approach for recognizing low-resolution face images captured in uncontrolled environments. The approach uses multidimensional scaling to learn a common transformation matrix for the entire face, which simultaneously transforms the facial features of the low-resolution and high-resolution training images such that the distance between them approximates the distance that would have been obtained had both images been captured under the same controlled imaging conditions. A stereo matching cost is used to obtain the similarity of two images in the transformed space. Though this gives very good recognition performance, the time taken to compute the stereo matching cost is significant. To overcome this limitation, we propose a reference-based approach in which each face image is represented by its stereo matching costs from a few reference images. Experimental evaluation on challenging real-world databases, and comparison with state-of-the-art super-resolution, classifier-based and cross-modal synthesis techniques, show the effectiveness of the proposed algorithm.
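The reference-based idea can be sketched generically: represent each face by its vector of matching costs to a small reference set, and compare those vectors instead of running the expensive matcher against every gallery image (here a plain Euclidean distance stands in for the stereo matching cost; all data are synthetic):

```python
import numpy as np

def reference_descriptor(face, references, cost):
    """Represent `face` by its vector of matching costs to the references."""
    return np.array([cost(face, r) for r in references])

# Stand-in matching cost; the paper uses a stereo matching cost instead.
cost = lambda a, b: np.linalg.norm(a - b)

rng = np.random.default_rng(4)
references = rng.normal(size=(8, 64))    # small fixed reference set
gallery = rng.normal(size=(100, 64))     # enrolled (high-resolution) faces
probe = gallery[42] + 0.05 * rng.normal(size=64)   # noisy low-res probe

gal_desc = np.array([reference_descriptor(g, references, cost)
                     for g in gallery])
probe_desc = reference_descriptor(probe, references, cost)

# Matching now costs one cheap descriptor comparison per gallery image,
# instead of one expensive stereo-matching run per pair.
match = np.argmin(np.linalg.norm(gal_desc - probe_desc, axis=1))
print("best match:", match)   # expected: 42
```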