962 results for Maximum entropy method


Relevance:

30.00%

Publisher:

Abstract:

Hydroethanolic extracts of C. langsdorffii leaves have therapeutic potential. This work reports a validated chromatographic method for the quantification of polar compounds in the hydroethanolic extract of C. langsdorffii leaves. A reliable HPLC method was developed using two monolithic columns linked in series (100 × 4.6 mm, C-18), with nonlinear gradient elution and UV detection set at 257 nm. A procedure for the extraction of flavonols was also developed, which involved the use of 70% aqueous ethanol and the addition of benzophenone as the internal standard. The method showed a good detection response, with linearity between 10.3 and 1000 µg/mL and recovery between 84.2 and 111.1%. The detection limit ranged from 0.02 to 1.70 µg/mL and the quantitation limit from 0.07 to 5.1 µg/mL, with a maximum RSD of 5.24%. Five compounds, rutin, quercetin-3-O-alpha-L-rhamnopyranoside, kaempferol-3-O-alpha-L-rhamnopyranoside, quercetin and kaempferol, were quantified. This method could, therefore, be used for the quality control of hydroethanolic extracts of Copaifera leaves and their cosmetic and pharmaceutical products.
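As an illustration of how validation figures like these can be computed, the sketch below derives detection and quantitation limits from a calibration line using the standard ICH relations LOD = 3.3σ/S and LOQ = 10σ/S, where S is the calibration slope and σ the residual standard deviation. All data values are invented, not the study's measurements.

```python
# Hypothetical calibration example (ICH approach: LOD = 3.3*sigma/S, LOQ = 10*sigma/S).
import numpy as np

conc = np.array([10.3, 50.0, 100.0, 250.0, 500.0, 1000.0])          # µg/mL (invented)
area_ratio = np.array([0.021, 0.101, 0.205, 0.498, 1.010, 2.004])   # analyte/IS peak-area ratio (invented)

# Least-squares calibration line: response = S*conc + b
S, b = np.polyfit(conc, area_ratio, 1)
residuals = area_ratio - (S * conc + b)
sigma = residuals.std(ddof=2)          # residual std. dev. of the regression

lod = 3.3 * sigma / S                  # detection limit, µg/mL
loq = 10.0 * sigma / S                 # quantitation limit, µg/mL
r2 = 1 - (residuals**2).sum() / ((area_ratio - area_ratio.mean())**2).sum()
print(f"slope={S:.5f}  r^2={r2:.4f}  LOD={lod:.2f}  LOQ={loq:.2f} µg/mL")
```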

Relevance:

30.00%

Publisher:

Abstract:

Context. Convergent point (CP) search methods are important tools for studying the kinematic properties of open clusters and young associations whose members share the same spatial motion. Aims. We present a new CP search strategy based on proper motion data. We test the new algorithm on synthetic data and compare it with previous versions of the CP search method. As an illustration and validation of the new method we also present an application to the Hyades open cluster and a comparison with independent results. Methods. The new algorithm rests on the idea of representing the stellar proper motions by great circles over the celestial sphere and visualizing their intersections as the CP of the moving group. The new strategy combines a maximum-likelihood analysis for simultaneously determining the CP and selecting the most likely group members and a minimization procedure that returns a refined CP position and its uncertainties. The method allows one to correct for internal motions within the group and takes into account that the stars in the group lie at different distances. Results. Based on Monte Carlo simulations, we find that the new CP search method in many cases returns a more precise solution than its previous versions. The new method is able to find and eliminate more field stars in the sample and is not biased towards distant stars. The CP solution for the Hyades open cluster is in excellent agreement with previous determinations.
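A minimal sketch of the geometric core of this strategy, assuming a simplified least-squares formulation rather than the authors' full maximum-likelihood machinery: each star's proper motion defines a great circle whose pole is the cross product of its position and motion direction, and the CP is the direction most nearly orthogonal to all poles, i.e. the eigenvector of the pole scatter matrix with the smallest eigenvalue.

```python
import numpy as np

def radec_to_xyz(ra_deg, dec_deg):
    ra, dec = np.radians(ra_deg), np.radians(dec_deg)
    return np.stack([np.cos(dec) * np.cos(ra),
                     np.cos(dec) * np.sin(ra),
                     np.sin(dec)], axis=-1)

def convergent_point(r, t):
    """r: (N,3) unit position vectors; t: (N,3) proper-motion tangent vectors
    (in practice built from the catalogue mu_alpha*, mu_delta components)."""
    poles = np.cross(r, t)                  # pole of each star's great circle
    M = poles.T @ poles                     # 3x3 scatter matrix of the poles
    _, V = np.linalg.eigh(M)                # eigenvalues in ascending order
    cp = V[:, 0]                            # direction most orthogonal to all poles
    ra = np.degrees(np.arctan2(cp[1], cp[0])) % 360   # cp and -cp both solve;
    dec = np.degrees(np.arcsin(cp[2]))                # pick the hemisphere by hand
    return ra, dec

# Synthetic check: 30 stars whose motions all point at a CP near (97, +6) deg
rng = np.random.default_rng(4)
r = radec_to_xyz(rng.uniform(40, 80, 30), rng.uniform(0, 30, 30))
c = radec_to_xyz(97.0, 6.0)
t = c - (r @ c)[:, None] * r                # great-circle tangents toward the CP
print(convergent_point(r, t))               # ~ (97.0, 6.0) or its antipode
```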

Relevance:

30.00%

Publisher:

Abstract:

A chaotic encryption algorithm is proposed based on "Life-like" cellular automata (CA), which act as a pseudo-random number generator (PRNG). The paper's main focus is the application of chaos theory to cryptography, and CA were therefore examined for this "chaos" property. Accordingly, the manuscript concentrates on tests such as the Lyapunov exponent, entropy and Hamming distance to measure chaos in CA, as well as on statistical analyses such as the DIEHARD and ENT suites. Our results achieved higher randomness quality than other ciphers in the literature. These results reinforce the supposition of a strong relationship between chaos and randomness quality. Thus, the "chaos" property of CA is a good reason for their employment in cryptography, in addition to their simplicity, low implementation cost and respectable encryption power. (C) 2012 Elsevier Ltd. All rights reserved.
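A toy sketch of the general construction, assuming rule B3/S23 (Conway's Life, the best-known Life-like rule; the paper explores others) as the PRNG and a plain XOR stream cipher on top. This illustrates the idea only; it is not the authors' exact algorithm and is not secure cryptography.

```python
import numpy as np

def step(grid, birth={3}, survive={2, 3}):
    # 8-neighbour count with toroidal wrap-around (Life-like B/S rule)
    n = sum(np.roll(np.roll(grid, dy, 0), dx, 1)
            for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0))
    return np.where(grid == 1,
                    np.isin(n, list(survive)),
                    np.isin(n, list(birth))).astype(np.uint8)

def keystream(key: int, nbytes: int, size=64, warmup=64):
    rng = np.random.default_rng(key)            # key expands to the CA seed
    grid = rng.integers(0, 2, (size, size), dtype=np.uint8)
    for _ in range(warmup):                     # discard the initial transient
        grid = step(grid)
    out = bytearray()
    while len(out) < nbytes:
        grid = step(grid)
        bits = grid[size // 2, :8]              # harvest 8 cells per generation
        out.append(int("".join(map(str, bits)), 2))
    return bytes(out)

def xor_cipher(data: bytes, key: int) -> bytes:
    ks = keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

msg = b"maximum entropy"
enc = xor_cipher(msg, key=2012)
assert xor_cipher(enc, key=2012) == msg         # same keystream decrypts
```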

Relevance:

30.00%

Publisher:

Abstract:

Abstract Background The aim of the present study was to investigate the relationship between speed during the maximum exercise test (ET) and oxygen consumption (VO2) in control and STZ-diabetic rats, in order to provide a useful method for determining exercise capacity and prescription in research involving STZ-diabetic rats. Methods Male Wistar rats were divided into two groups: control (CG, n = 10) and diabetic (DG, n = 8). The animals were submitted to ET on a treadmill with simultaneous gas analysis through an open respirometry system. ET and VO2 were assessed 60 days after diabetes induction (STZ, 50 mg/kg). Results Maximum VO2 was reduced in STZ-diabetic rats (72.5 ± 1 mL/kg/min) compared to CG rats (81.1 ± 1 mL/kg/min). There were positive correlations between ET speed and VO2 (r = 0.87 for CG and r = 0.8 for DG), as well as between ET speed and VO2 reserve (r = 0.77 for CG and r = 0.7 for DG). Positive correlations were also obtained between measured VO2 and VO2 values predicted (r = 0.81 for CG and r = 0.75 for DG) by the linear regression equations for CG (VO2 = 1.54 * ET speed + 52.34) and DG (VO2 = 1.16 * ET speed + 51.99). Moreover, we observed that 60% of ET speed corresponded to 72 and 75% of VO2 reserve for CG and DG, respectively. The maximum ET speed was also correlated with maximum VO2 for both groups (CG: r = 0.7 and DG: r = 0.7). Conclusion These results suggest that: a) VO2 and VO2 reserve can be estimated using linear regression equations obtained from correlations with ET speed for each studied group; b) exercise training can be prescribed based on ET in control and STZ-diabetic rats; c) physical capacity can be determined by ET. Therefore, ET, which involves a relatively simple methodology and low cost, can be used as an indicator of cardio-respiratory capacity in future studies that investigate the physiological effect of acute or chronic exercise in control and STZ-diabetic male rats.
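The reported regression equations can be applied directly; the sketch below does so for a few illustrative speeds. The abstract does not state the speed units, so the inputs are placeholders showing only how the equations are used.

```python
# Group-specific regression equations quoted in the abstract, applied as-is
# to estimate VO2 (mL/kg/min) from maximum ET speed (units as in the paper).
def vo2_control(et_speed):
    return 1.54 * et_speed + 52.34      # control group (r = 0.81)

def vo2_diabetic(et_speed):
    return 1.16 * et_speed + 51.99      # STZ-diabetic group (r = 0.75)

for speed in (10, 15, 20):              # illustrative speeds only
    print(speed, round(vo2_control(speed), 1), round(vo2_diabetic(speed), 1))
```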

Relevance:

30.00%

Publisher:

Abstract:

This study presents an economic optimization method for the design of telescopic (multidiameter) irrigation laterals with regularly spaced outlets. The proposed analytical hydraulic solution was validated on a pipeline composed of three different diameters. The minimum acquisition cost of the telescopic pipeline was determined by an ideal arrangement of lengths and respective diameters for each of the three segments. The mathematical optimization method, based on Lagrange multipliers, provides a strategy for finding the maximum or minimum of a function subject to certain constraints. In this case, the objective function describes the acquisition cost of the pipes, and the constraints are determined from hydraulic parameters such as the length of the irrigation lateral and the total head loss permitted. The developed analytical solution provides the ideal combination of each pipe segment length and respective diameter, resulting in a decrease in acquisition cost.
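A numerical sketch of the same constrained problem, assuming invented cost coefficients and a simplified constant head-loss gradient per segment in place of the paper's full hydraulic model (the analytical Lagrange-multiplier solution solves essentially this system in closed form):

```python
# Choose segment lengths L1..L3 (diameters fixed) minimizing acquisition cost,
# subject to a fixed lateral length and a total head-loss cap. All numbers
# are hypothetical placeholders.
import numpy as np
from scipy.optimize import minimize

cost_per_m = np.array([2.5, 3.8, 5.6])     # $/m for three pipe diameters (invented)
J = np.array([0.060, 0.028, 0.010])        # head-loss gradient per segment, m/m (invented)
L_total, h_max = 120.0, 3.0                # lateral length (m) and allowed head loss (m)

res = minimize(
    lambda L: cost_per_m @ L, x0=[40.0, 40.0, 40.0],
    constraints=[{"type": "eq",   "fun": lambda L: L.sum() - L_total},
                 {"type": "ineq", "fun": lambda L: h_max - J @ L}],
    bounds=[(0, L_total)] * 3, method="SLSQP")
print(res.x.round(1), round(res.fun, 2))   # segment lengths and minimum cost
```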

Relevance:

30.00%

Publisher:

Abstract:

The clustering problem consists of finding patterns in a data set in order to divide it into clusters with high within-cluster similarity. This paper presents the study of a problem, here called the MMD problem, which aims at finding a clustering with a predefined number of clusters that minimizes the largest within-cluster distance (diameter) among all clusters. There are two main objectives in this paper: to propose heuristics for the MMD problem and to evaluate how well the best proposed heuristic's results agree with the real classification of some data sets. Regarding the first objective, the experimental results indicate good performance of the best proposed heuristic, which outperformed the Complete Linkage algorithm (the most widely used method in the literature for this problem). Nevertheless, regarding agreement with the real classification of the data sets, the proposed heuristic achieved better-quality results than the C-Means algorithm, but worse than Complete Linkage.
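For reference, the sketch below runs the Complete Linkage baseline mentioned above and scores it with the MMD objective (largest within-cluster diameter); the data are synthetic, and the paper's own heuristics are not reproduced.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 2)) + rng.choice([-4, 0, 4], size=(60, 1))  # toy data
k = 3

labels = fcluster(linkage(X, method="complete"), t=k, criterion="maxclust")
D = squareform(pdist(X))                       # full pairwise distance matrix

def max_diameter(labels, D):
    # MMD objective: the largest pairwise distance inside any cluster
    return max(D[np.ix_(idx, idx)].max()
               for c in np.unique(labels)
               for idx in [np.where(labels == c)[0]])

print("largest within-cluster diameter:", round(max_diameter(labels, D), 3))
```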

Relevance:

30.00%

Publisher:

Abstract:

To interpret the mean depth of cosmic-ray air-shower maximum and its dispersion, we parameterize those two observables as functions of the first two moments of the lnA distribution. We examine the performance of this simple method through simulations of test mass distributions. The application of the parameterization to Pierre Auger Observatory data allows one to study the energy dependence of the mean lnA and of its variance under the assumption of selected hadronic interaction models. We discuss possible implications of these dependences in terms of interaction models and astrophysical cosmic-ray sources.
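In schematic form, the parameterization is linear in the lnA moments and can be inverted directly; the sketch below does so with placeholder constants (the real Xmax_p, fE and shower-fluctuation terms are energy- and model-dependent, and the full method also allows the shower variance to depend on lnA):

```python
# Hedged sketch of the lnA-moments inversion: measured mean and variance of
# Xmax are mapped to the first two moments of lnA via a linear model.
# The numbers below are placeholders, not fitted values.
def lnA_moments(mean_xmax, var_xmax, xmax_p=770.0, fE=-25.0, sigma_sh2=40.0**2):
    # <Xmax>   = xmax_p + fE * <lnA>          (proton scale + mass term)
    # Var(Xmax) = sigma_sh2 + fE**2 * Var(lnA)
    mean_lnA = (mean_xmax - xmax_p) / fE
    var_lnA = (var_xmax - sigma_sh2) / fE**2   # can go negative within errors
    return mean_lnA, var_lnA

print(lnA_moments(mean_xmax=750.0, var_xmax=55.0**2))   # -> (0.8, 2.28)
```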

Relevance:

30.00%

Publisher:

Abstract:

Human reactions to vibration have been extensively investigated in the past. Vibration, including whole-body vibration (WBV), has commonly been considered an occupational hazard for its detrimental effects on human condition and comfort. Although long-term exposure to vibrations may produce undesirable side effects, a great part of the literature is dedicated to the positive effects of WBV when used as a method for muscular stimulation and as an exercise intervention. Whole-body vibration training (WBVT) aims to mechanically activate muscles by eliciting neuromuscular activity (muscle reflexes) via vibrations delivered to the whole body. The mechanism most often invoked to explain the neuromuscular outcomes of vibration is this elicited neuromuscular activation. Local tendon vibrations induce activity of the muscle spindle Ia fibers, mediated by monosynaptic and polysynaptic pathways: a reflex muscle contraction known as the Tonic Vibration Reflex (TVR) arises in response to such a vibratory stimulus. In WBVT, mechanical vibrations, usually in a range from 10 to 80 Hz and with peak-to-peak displacements from 1 to 10 mm, are transmitted to the patient's body by oscillating platforms. Vibrations are then transferred from the platform to a specific muscle group through the subject's body. To customize WBV treatments, surface electromyography (SEMG) signals are often used to reveal the best stimulation frequency for each subject. Use of concise SEMG parameters, such as the root mean square of the recordings, is also common practice; frequently a preliminary session takes place in order to identify the most appropriate stimulation frequency. Soft tissues act as wobbling masses that vibrate in a damped manner in response to mechanical excitation; the muscle tuning hypothesis suggests that the neuromuscular system works to damp the soft-tissue oscillation that occurs in response to vibrations: muscles alter their activity to damp the vibrations, preventing any resonance phenomenon. Muscle response to vibration is, however, a complex phenomenon, as it depends on different parameters such as muscle tension, muscle or segment stiffness, and the amplitude and frequency of the mechanical vibration. Additionally, while in TVR studies the applied vibratory stimulus and the muscle conditions are completely characterized (a known vibration source is applied directly to a stretched/shortened muscle or tendon), in WBV studies only the stimulus applied to a distal part of the body is known. Moreover, the mechanical response changes with posture: the transmissibility of the vibratory stimulus along the body segments strongly depends on the position held by the subject. The aim of this work was to investigate the effects that vibrations, in particular whole-body vibrations, may have on muscular activity. A new approach to identifying the most appropriate stimulation frequency, based on accelerometers, was also explored. Subjects with no known neurological or musculoskeletal disorders were voluntarily involved in the study and gave their informed, written consent to participate. The device used to deliver vibration to the subjects was a vibrating platform. Vibrations impressed by the platform were exclusively vertical; the platform displacement was sinusoidal with an intensity (peak-to-peak displacement) set to 1.2 mm and a frequency ranging from 10 to 80 Hz. All the subjects familiarized themselves with the device and the proper positioning.
Two different postures were explored in this study: position 1, hack squat; position 2, subject standing on toes with heels raised. SEMG signals from the Rectus Femoris (RF), Vastus Lateralis (VL) and Vastus Medialis (VM) were recorded. SEMG signals were amplified using a multi-channel, isolated biomedical signal amplifier; the gain was set to 1000 V/V and a band-pass filter (-3 dB frequencies 10-500 Hz) was applied; no notch filters were used to suppress line interference. Tiny and lightweight (less than 10 g) three-axial MEMS accelerometers (Freescale Semiconductor) were used to measure accelerations on the subjects' skin at the EMG electrode sites. Acceleration signals provided information on the oscillation of the RF, Biceps Femoris (BF) and Gastrocnemius Lateralis (GL) muscle bellies; they were pre-processed to exclude the influence of gravity. As demonstrated by our results, vibrations generate a peculiar, non-negligible motion artifact on skin electrodes. The artifact amplitude is generally unpredictable; it appeared in all the quadriceps muscles analysed, but in different amounts. The artifact harmonics extend throughout the EMG spectrum, making classic high-pass filters ineffective; however, their contribution was easy to filter out from the raw EMG signal with a series of sharp notch filters (1.5 Hz wide) centred at the vibration frequency and its higher harmonics. However, the use of these simple filters prevents detection of any variation of EMG power within the filtered bands. Moreover, our experience suggests that reducing the motion artefact by using particular electrodes and by accurately preparing the subject's skin is not easily viable; even though some small improvements were obtained, it was not possible to substantially decrease the artifact. In any case, removing those artifacts leads to some loss of true EMG signal. Nevertheless, our preliminary results suggest that the use of notch filters at the vibration frequency and its harmonics is suitable for motion-artifact filtering. In RF SEMG recordings during vibratory stimulation, only a small EMG power increment should fall within the filtered bands due to synchronous electromyographic activity of the muscle, and it is preferable to remove the artifact, which in our experience amounted to more than 40% of the total signal power. In summary, many variables have to be taken into account: in addition to the amplitude, frequency and duration of the vibration treatment, other fundamental variables were found to be subject anatomy, individual physiological condition and the subject's positioning on the platform. Studies on WBV treatments that include surface EMG analysis to assess muscular activity during vibratory stimulation should take into account the presence of motion artifacts. Appropriate filtering of artifacts, to reveal the actual effect on muscle contraction elicited by the vibration stimulus, is mandatory; as a result of our preliminary study, a simple multi-band notch filter may help to reduce the randomness of the results. The muscle tuning hypothesis appeared to be confirmed: our results suggest that the effects of WBV are linked to the actual muscle motion (displacement). The greater the muscle belly displacement, the higher the muscle activity. Maximum muscle activity was found in correspondence with the local mechanical resonance, suggesting more effective stimulation at the specific system resonance frequency.
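A minimal sketch of the artifact-removal strategy described above: a cascade of narrow IIR notch filters (about 1.5 Hz wide) at the vibration frequency and its harmonics, applied to a synthetic stand-in signal. The sampling rate, vibration frequency and signal are illustrative assumptions, not recorded data.

```python
import numpy as np
from scipy.signal import iirnotch, filtfilt

fs = 2048.0                   # Hz, assumed SEMG sampling rate
f_vib = 30.0                  # Hz, assumed platform vibration frequency
bw = 1.5                      # Hz, notch width as described in the text

t = np.arange(0, 2.0, 1 / fs)
emg = np.random.default_rng(1).normal(size=t.size)            # stand-in for SEMG
artifact = sum(np.sin(2 * np.pi * k * f_vib * t) for k in range(1, 6))
raw = emg + 2.0 * artifact                                    # contaminated signal

clean = raw.copy()
for k in range(1, 6):                        # fundamental + harmonics below fs/2
    f0 = k * f_vib
    b, a = iirnotch(f0, Q=f0 / bw, fs=fs)    # Q = f0 / bandwidth
    clean = filtfilt(b, a, clean)            # zero-phase filtering
print("power removed: %.1f%%" % (100 * (1 - clean.var() / raw.var())))
```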
Holding the hypothesis that muscle activation is proportional to muscle displacement, treatment optimization could be obtained by simply monitoring local acceleration (resonance). However, our study revealed only short-term effects of the vibratory stimulus; prolonged studies are needed to assess the long-term effectiveness of these findings. Since the local stimulus depends on the kinematic chain involved, WBV muscle stimulation has to take into account the transmissibility of the stimulus along the body segments in order to ensure that the vibratory stimulation effectively reaches the target muscle. The combination of local resonance and muscle response should also be further investigated to prevent hazards to individuals undergoing WBV treatments.

Relevance:

30.00%

Publisher:

Abstract:

The "sustainability" concept relates to the prolonging of human economic systems with as little detrimental impact on ecological systems as possible. Construction that exhibits good environmental stewardship and practices that conserve resources in a manner that allow growth and development to be sustained for the long-term without degrading the environment are indispensable in a developed society. Past, current and future advancements in asphalt as an environmentally sustainable paving material are especially important because the quantities of asphalt used annually in Europe as well as in the U.S. are large. The asphalt industry is still developing technological improvements that will reduce the environmental impact without affecting the final mechanical performance. Warm mix asphalt (WMA) is a type of asphalt mix requiring lower production temperatures compared to hot mix asphalt (HMA), while aiming to maintain the desired post construction properties of traditional HMA. Lowering the production temperature reduce the fuel usage and the production of emissions therefore and that improve conditions for workers and supports the sustainable development. Even the crumb-rubber modifier (CRM), with shredded automobile tires and used in the United States since the mid 1980s, has proven to be an environmentally friendly alternative to conventional asphalt pavement. Furthermore, the use of waste tires is not only relevant in an environmental aspect but also for the engineering properties of asphalt [Pennisi E., 1992]. This research project is aimed to demonstrate the dual value of these Asphalt Mixes in regards to the environmental and mechanical performance and to suggest a low environmental impact design procedure. In fact, the use of eco-friendly materials is the first phase towards an eco-compatible design but it cannot be the only step. The eco-compatible approach should be extended also to the design method and material characterization because only with these phases is it possible to exploit the maximum potential properties of the used materials. Appropriate asphalt concrete characterization is essential and vital for realistic performance prediction of asphalt concrete pavements. Volumetric (Mix design) and mechanical (Permanent deformation and Fatigue performance) properties are important factors to consider. Moreover, an advanced and efficient design method is necessary in order to correctly use the material. A design method such as a Mechanistic-Empirical approach, consisting of a structural model capable of predicting the state of stresses and strains within the pavement structure under the different traffic and environmental conditions, was the application of choice. In particular this study focus on the CalME and its Incremental-Recursive (I-R) procedure, based on damage models for fatigue and permanent shear strain related to the surface cracking and to the rutting respectively. It works in increments of time and, using the output from one increment, recursively, as input to the next increment, predicts the pavement conditions in terms of layer moduli, fatigue cracking, rutting and roughness. This software procedure was adopted in order to verify the mechanical properties of the study mixes and the reciprocal relationship between surface layer and pavement structure in terms of fatigue and permanent deformation with defined traffic and environmental conditions. The asphalt mixes studied were used in a pavement structure as surface layer of 60 mm thickness. 
The performance of the pavement was compared to the performance of the same pavement structure with different kinds of asphalt concrete as the surface layer. In comparison to a conventional asphalt concrete, three eco-friendly materials, two warm mix asphalts and a rubberized asphalt concrete, were analyzed. The first two chapters summarize the steps needed to satisfy the sustainable pavement design procedure. Chapter I introduces the problem of eco-compatible asphalt pavement design; the low-environmental-impact materials, warm mix asphalt and rubberized asphalt concrete, are described in detail, and the value of a rational asphalt pavement design method is discussed. Chapter II underlines the importance of a thorough laboratory characterization based on appropriate materials selection and performance evaluation. In Chapter III, CalME is introduced through a specific explanation of the different design approaches it provides, and of the I-R procedure in particular. In Chapter IV, the experimental program is presented together with an explanation of the laboratory test devices adopted. The fatigue and rutting performances of the study mixes are shown in Chapters V and VI, respectively. From these laboratory test data, the CalME I-R model parameters for the master curve, fatigue damage and permanent shear strain were evaluated. Lastly, Chapter VII reports the results of simulations of asphalt pavement structures with different surface layers. For each pavement structure, the total surface cracking, the total rutting, the fatigue damage and the rutting depth in each bound layer were analyzed.
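To make the I-R idea concrete, here is a schematic of its control flow under invented placeholder models: each time increment takes the damaged layer modulus from the previous increment, evaluates a stand-in response model, accumulates fatigue damage, and feeds the degraded modulus forward. None of the coefficients below are CalME's calibrated values.

```python
# Schematic of an Incremental-Recursive loop; all models are invented stand-ins.
def incremental_recursive(E0, months, esals_per_month, alpha=3e-4, beta=2.0):
    E, damage, history = E0, 0.0, []
    for m in range(months):
        strain = 1.0 / E**0.5               # stand-in for the structural response model
        damage += alpha * esals_per_month * strain**beta   # stand-in fatigue law
        damage = min(damage, 0.99)
        E = E0 * (1.0 - damage)             # recursive input to the next increment
        history.append((m + 1, round(E), round(damage, 4)))
    return history

# Print every 5th year of a 20-year simulation (hypothetical inputs)
for row in incremental_recursive(E0=3000.0, months=240, esals_per_month=20000)[::60]:
    print(row)
```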

Relevance:

30.00%

Publisher:

Abstract:

Geometric nonlinearities of flexure hinges introduced by large deflections often complicate the analysis of compliant mechanisms containing such members. Pseudo-Rigid-Body Models (PRBMs) were therefore proposed and developed by Howell [1994] to analyze the characteristics of slender beams under large deflection. These models, however, fail to approximate the characteristics of deep (short) beams or of other flexure hinges. Lobontiu's work [2001] contributed analyses of diverse flexure hinges, but it builds on small-deflection assumptions, which limits the application range of these flexure hinges and cannot capture their stiffness and stress characteristics under large deflection. The objective of this thesis is therefore to analyze flexure hinges considering both the effects of large deflection and of shear force, to guide the design of flexure-based compliant mechanisms. The main work conducted in the thesis is outlined as follows. 1. Three popular types of flexure hinges (circular, elliptical and corner-filleted) are first chosen for analysis. 2. A Finite Element Analysis (FEA) method based on commercial software (Comsol) is then used to correct the errors produced by the equations proposed by Lobontiu when the chosen flexure hinges undergo large deformation. 3. Three sets of generic design equations for the three types of flexure hinges are further proposed on the basis of the stiffness and stress characteristics from the FEA results. 4. A flexure-based four-bar compliant mechanism is finally studied and modeled using the proposed generic design equations. The load-displacement relationships are verified by a numerical example. Compared with the FEA results, the maximum error in the moment-rotation relationship is less than 3.4% for a flexure hinge and below 5% for the four-bar compliant mechanism.
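For context, the classical PRBM that the thesis takes as its starting point can be sketched in a few lines: a slender cantilever under a vertical tip force is replaced by two rigid links joined by a torsional spring, using Howell's standard characteristic values γ ≈ 0.85 and K_Θ ≈ 2.65. The inputs below are hypothetical, and this is exactly the model reported to break down for short, deep hinges.

```python
import numpy as np

def prbm_tip(F, E, I, L, gamma=0.85, K_theta=2.65):
    K = gamma * K_theta * E * I / L          # equivalent torsional spring stiffness
    # Moment balance about the characteristic pivot: F*gamma*L*cos(theta) = K*theta
    theta = 0.0
    for _ in range(100):                     # simple fixed-point iteration
        theta = F * gamma * L * np.cos(theta) / K
    a = L * (1 - gamma * (1 - np.cos(theta)))   # tip x-coordinate
    b = gamma * L * np.sin(theta)               # tip deflection
    return theta, a, b

E = 2.1e11                                   # Pa, steel (hypothetical hinge material)
I = (5e-3 * (0.5e-3) ** 3) / 12              # m^4, 5 mm x 0.5 mm cross-section
print(prbm_tip(F=2.0, E=E, I=I, L=0.03))     # pseudo-rigid-body tip pose
```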

Relevance:

30.00%

Publisher:

Abstract:

Satellite image classification involves designing and developing efficient image classifiers. With satellite image data and image analysis methods multiplying rapidly, selecting the right mix of data sources and data analysis approaches has become critical to the generation of quality land-use maps. In this study, a new postprocessing information fusion algorithm for the extraction and representation of land-use information based on high-resolution satellite imagery is presented. This approach can produce land-use maps with sharp interregional boundaries and homogeneous regions. The proposed approach is conducted in five steps. First, a GIS layer (ATKIS data) was used to generate two coarse homogeneous regions, i.e. urban and rural areas. Second, a thematic (class) map was generated using a hybrid spectral classifier combining the Gaussian Maximum Likelihood (GML) algorithm and the ISODATA classifier. Third, a probabilistic relaxation algorithm was applied to the thematic map, resulting in a smoothed thematic map. Fourth, edge detection and edge thinning techniques were used to generate a contour map with pixel-width interclass boundaries. Fifth, the contour map was superimposed on the thematic map by means of a region-growing algorithm with the contour map and the smoothed thematic map as two constraints. For the operation of the proposed method, a software package was developed in the C programming language. This package comprises the GML algorithm, a probabilistic relaxation algorithm, the TBL edge detector, an edge-thresholding algorithm, a fast parallel thinning algorithm, and a region-growing information fusion algorithm. The county of Landau in the state of Rheinland-Pfalz, Germany, was selected as a test site. High-resolution IRS-1C imagery was used as the principal input data.
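As a sketch of the GML step alone (the second stage above), the snippet below trains per-class Gaussian statistics and assigns each pixel's spectral vector to the class with the highest Gaussian log-likelihood. The training data are synthetic stand-ins, not IRS-1C samples, and the study's actual package is written in C rather than Python.

```python
import numpy as np

def gml_train(samples_by_class):
    stats = []
    for X in samples_by_class:               # X: (n_pixels, n_bands)
        mu = X.mean(axis=0)
        cov = np.cov(X, rowvar=False)
        stats.append((mu, np.linalg.inv(cov), np.log(np.linalg.det(cov))))
    return stats

def gml_classify(pixels, stats):
    scores = []
    for mu, cov_inv, logdet in stats:
        d = pixels - mu
        maha = np.einsum("ij,jk,ik->i", d, cov_inv, d)   # squared Mahalanobis distance
        scores.append(-0.5 * (logdet + maha))            # Gaussian log-likelihood
    return np.argmax(scores, axis=0)

rng = np.random.default_rng(3)
train = [rng.normal(m, 1.0, size=(200, 4)) for m in (10.0, 14.0, 18.0)]  # 3 classes, 4 bands
stats = gml_train(train)
test = rng.normal(14.0, 1.0, size=(5, 4))
print(gml_classify(test, stats))          # expect mostly class 1
```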

Relevance:

30.00%

Publisher:

Abstract:

The maximum principle is an important property of solutions to PDEs, and it is therefore of great interest to design high-order numerical schemes for PDEs that maintain this property. In this thesis, our particular interest is solving convection-dominated diffusion equations. We first review a nonconventional maximum-principle-preserving (MPP) high-order finite volume (FV) WENO scheme, and then propose a new parametrized MPP high-order finite difference (FD) WENO framework, generalized from the one for solving hyperbolic conservation laws. A formal analysis is presented to show that a third-order finite difference scheme with these parametrized MPP flux limiters maintains third-order accuracy without an extra CFL constraint when the low-order monotone flux is chosen appropriately. Numerical tests in both one and two dimensions are performed on the simulation of the incompressible Navier-Stokes equations in vorticity-streamfunction formulation and on several other problems to show the effectiveness of the proposed method.
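The mechanism behind such limiters can be illustrated with a much simpler relative: a Zalesak/FCT-style flux limiter for 1D linear advection, which blends a high-order flux toward a low-order monotone (upwind) flux through a per-interface θ ∈ [0, 1] chosen so the update stays within global bounds. This is a sketch of the limiting idea only, not the thesis's parametrized WENO limiter.

```python
import numpy as np

def mpp_advection_step(u, lam, m, M):
    """One step of u_t + a u_x = 0 (a > 0, periodic BC) with an FCT-style
    MPP flux limiter. Interface i+1/2 is stored at index i."""
    h = u.copy()                                      # low-order upwind flux
    H = u + 0.5 * (1 - lam) * (np.roll(u, -1) - u)    # high-order Lax-Wendroff flux
    A = H - h                                         # antidiffusive flux
    uL = u - lam * (h - np.roll(h, 1))                # monotone update, stays in [m, M]

    # Worst-case antidiffusive in/outflow per cell, and the room left in [m, M]
    P_in = lam * (np.maximum(np.roll(A, 1), 0) + np.maximum(-A, 0))
    P_out = lam * (np.maximum(-np.roll(A, 1), 0) + np.maximum(A, 0))
    R_in = np.minimum(1.0, (M - uL) / (P_in + 1e-15))
    R_out = np.minimum(1.0, (uL - m) / (P_out + 1e-15))

    # Per-interface theta: constrained by the donating and receiving cells
    theta = np.where(A >= 0,
                     np.minimum(np.roll(R_in, -1), R_out),
                     np.minimum(R_in, np.roll(R_out, -1)))
    F = theta * A
    return uL - lam * (F - np.roll(F, 1))

x = np.linspace(0.0, 1.0, 200, endpoint=False)
u = np.where((x > 0.3) & (x < 0.6), 1.0, 0.0)         # step profile
m, M = u.min(), u.max()
for _ in range(400):                                  # one full period at CFL 0.5
    u = mpp_advection_step(u, lam=0.5, m=m, M=M)
print("within bounds:", u.min() >= m - 1e-12, u.max() <= M + 1e-12)
```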

Relevance:

30.00%

Publisher:

Abstract:

Accurately gauging the maximum willingness to pay (WTP) for a product is a critical success factor that determines not only market performance but also financial results. A number of approaches have therefore been developed to estimate consumers' willingness to pay accurately. Here, four commonly used measurement approaches are compared using real purchase data as a benchmark. The relative strengths of each method are analyzed on the basis of statistical criteria and, more importantly, on their potential to predict managerially relevant criteria such as optimal price, quantity and profit. The results show a slight advantage for incentive-aligned approaches, although the market setting needs to be considered to choose the best-fitting procedure.
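To illustrate the managerial criteria involved, the sketch below derives demand, the profit-maximizing price and the corresponding profit from a sample of individual WTP values. The WTP distribution and unit cost are invented, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(7)
wtp = rng.lognormal(mean=3.0, sigma=0.5, size=5000)   # hypothetical WTP sample, $
unit_cost = 8.0                                       # hypothetical unit cost, $

prices = np.linspace(1, 80, 400)
demand = (wtp[None, :] >= prices[:, None]).mean(axis=1)   # share buying at each price
profit = (prices - unit_cost) * demand                    # profit per potential consumer

best = profit.argmax()
print(f"optimal price ~ ${prices[best]:.2f}, demand {demand[best]:.1%}, "
      f"profit/consumer ${profit[best]:.2f}")
```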

Relevance:

30.00%

Publisher:

Abstract:

The motion of lung tumors during respiration makes the accurate delivery of radiation therapy to the thorax difficult because it increases the uncertainty of the target position. The adoption of four-dimensional computed tomography (4D-CT) has allowed us to determine how a tumor moves with respiration for each individual patient. Using information acquired during a 4D-CT scan, we can define the target, visualize motion, and calculate dose during the planning phase of the radiotherapy process. One image data set that can be created from the 4D-CT acquisition is the maximum-intensity projection (MIP). The MIP can be used as a starting point to define the volume that encompasses the motion envelope of the moving gross target volume (GTV). Because of the close relationship that exists between the MIP and the final target volume, we investigated four MIP data sets created with different methodologies (three using various 4D-CT sorting implementations, and one using all available cine CT images) to compare target delineation. It has been observed that changing the 4D-CT sorting method leads to the selection of a different collection of images; however, the clinical implications of changing the constituent images of the resultant MIP data set are not clear. There has not been a comprehensive study that compares target delineation based on different 4D-CT sorting methodologies in a patient population. We selected a collection of patients who had previously undergone thoracic 4D-CT scans at our institution and whose lung tumors moved at least 1 cm. We then generated the four MIP data sets and automatically contoured the target volumes. In doing so, we identified cases in which the MIP generated from a 4D-CT sorting process under-represented the motion envelope of the target volume by more than 10% relative to the MIP generated from all of the cine CT images. The 4D-CT sorting methods suffered from duplicate image selection and might not choose the images of maximum extent. Based on our results, we suggest using a MIP generated from the full cine CT data set to ensure a representative, inclusive tumor extent and to avoid geometric misses.
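Computationally, a MIP is simply a voxelwise maximum over the image stack; a sketch assuming the cine CT images are already resampled onto a common grid and stored as a 4D array:

```python
import numpy as np

def maximum_intensity_projection(stack: np.ndarray) -> np.ndarray:
    # For each voxel, keep the maximum value across all phases/images.
    # Using every cine CT image (rather than a 4D-CT-sorted subset) gives the
    # more inclusive motion envelope recommended above.
    return stack.max(axis=0)

cine = np.random.default_rng(5).normal(size=(20, 8, 16, 16))  # toy stand-in volumes
mip = maximum_intensity_projection(cine)
print(mip.shape)          # (8, 16, 16): one volume spanning all phases
```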

Relevance:

30.00%

Publisher:

Abstract:

The reconstruction of the stable carbon isotope evolution in atmospheric CO2 (δ13Catm), as archived in Antarctic ice cores, bears the potential to disentangle the contributions of the different carbon cycle fluxes causing past CO2 variations. Here we present a new record of δ13Catm before, during and after Marine Isotope Stage 5.5 (155 000 to 105 000 yr BP). The dataset is archived in the data repository PANGAEA (www.pangaea.de) under doi:10.1594/PANGAEA.817041. The record was derived with a well-established sublimation method using ice from the EPICA Dome C (EDC) and Talos Dome ice cores in East Antarctica. We find a 0.4‰ shift to heavier values between the mean δ13Catm level in the Penultimate (~140 000 yr BP) and Last Glacial Maximum (~22 000 yr BP), which can be explained by (i) changes in the isotopic composition of the carbon input fluxes to the combined ocean/atmosphere carbon reservoir, (ii) changes in their intensity, or (iii) long-term peat buildup. Our isotopic data suggest that the carbon cycle evolution along Termination II and the subsequent interglacial was controlled by essentially the same processes as during the last 24 000 yr, but with different phasing and magnitudes. Furthermore, a 5000 yr lag in the CO2 decline relative to EDC temperatures is confirmed during the glacial inception at the end of MIS 5.5 (120 000 yr BP). Based on our isotopic data, this lag can be explained by terrestrial carbon release and carbonate compensation.