875 results for Evaluation methods
Abstract:
A low-order harmonic pulsating torque is a major concern in high-power drives, high-speed drives, and motor drives operating in the overmodulation region. This paper attempts to minimize the low-order harmonic torques in induction motor drives, operated at a low pulse number (i.e., a low ratio of switching frequency to fundamental frequency), through a frequency domain (FD) approach as well as a synchronous reference frame (SRF) based approach. This paper first investigates FD-based approximate elimination of harmonic torque as suggested by classical works. This is then extended into a procedure for minimization of low-order pulsating torque components in the FD, which is independent of machine parameters and mechanical load. Furthermore, an SRF-based optimal pulse width modulation (PWM) method is proposed to minimize the low-order harmonic torques, considering the motor parameters and load torque. The two optimal methods are evaluated and compared with sine-triangle (ST) PWM and selective harmonic elimination (SHE) PWM through simulations and experimental studies on a 3.7-kW induction motor drive. The SRF-based optimal PWM results in marginally better performance than the FD-based one. However, the selection of the optimal switching angles for any modulation index (M) takes much longer with the SRF-based approach than with the FD-based one. The FD-based optimal solutions can be used as good starting solutions and/or to reasonably restrict the search space for optimal solutions in the SRF-based approach. Both the FD-based and the SRF-based optimal PWM methods reduce the low-order pulsating torque significantly compared to ST PWM and SHE PWM, as shown by the simulation and experimental results.
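As a minimal numerical sketch (not from the paper; the pulse number and modulation index below are assumed example values), the following computes the spectrum of a synchronized sine-triangle PWM waveform by FFT and illustrates why a low pulse number is problematic: carrier sidebands land on low-order harmonics (here the 7th), which is the source of the pulsating torques the optimal PWM methods target.

```python
import numpy as np

# Assumed values for illustration: pulse number P = 9, modulation index M = 0.8
P = 9          # ratio of carrier frequency to fundamental frequency
M = 0.8        # modulation index
N = 9000       # samples over one fundamental period

t = np.arange(N) / N                           # one fundamental period (f1 = 1)
ref = M * np.sin(2 * np.pi * t)                # sinusoidal reference
carrier = 2 * np.abs(2 * (P * t % 1.0) - 1) - 1  # triangular carrier in [-1, 1]
pwm = np.where(ref >= carrier, 1.0, -1.0)      # two-level pole voltage (per unit)

spectrum = 2 * np.abs(np.fft.rfft(pwm)) / N    # harmonic amplitudes
fundamental = spectrum[1]                      # ~= M for natural sampling
fifth, seventh = spectrum[5], spectrum[7]      # low-order harmonics
```

With P = 9, the first carrier sideband pair falls at harmonics 9 ± 2, so a substantial 7th harmonic appears even though the baseband 5th is negligible; this is the kind of low-order content that produces the 6th-harmonic torque pulsation discussed above.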
Abstract:
DNA microarrays provide such a huge amount of data that unsupervised methods are required to reduce the dimension of the data set and to extract meaningful biological information. This work shows that Independent Component Analysis (ICA) is a promising approach for the analysis of genome-wide transcriptomic data. The paper first presents an overview of the most popular algorithms for performing ICA. These algorithms are then applied to a microarray breast-cancer data set. Some issues concerning the application of ICA and the evaluation of the biological relevance of the results are discussed. This study indicates that ICA significantly outperforms Principal Component Analysis (PCA).
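The principle behind ICA on expression matrices can be shown with a minimal sketch (synthetic signals, not microarray data): linearly mixed independent sources are recovered by FastICA, one of the popular algorithms the paper surveys.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Two independent synthetic sources, linearly mixed, then unmixed by FastICA.
rng = np.random.default_rng(0)
t = np.linspace(0, 8, 2000)
s1 = np.sin(2 * np.pi * t)                     # smooth source
s2 = np.sign(np.sin(3 * np.pi * t))            # square-wave source
S = np.c_[s1, s2] + 0.02 * rng.standard_normal((2000, 2))

A = np.array([[1.0, 0.5], [0.4, 1.0]])         # mixing matrix
X = S @ A.T                                    # observed mixtures

ica = FastICA(n_components=2, random_state=0)
S_est = ica.fit_transform(X)                   # estimated sources

# match estimated components to true sources by absolute correlation
corr = np.abs(np.corrcoef(S.T, S_est.T)[:2, 2:])
```

For microarray data, the rows/columns of the expression matrix play the role of the mixtures, and the recovered components are candidate biological "modes"; note that, unlike PCA, ICA components are unordered and sign-ambiguous, hence the absolute-correlation matching above.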
Abstract:
Background -- N-(4-hydroxyphenyl)retinamide (4-HPR, fenretinide) is a synthetic retinoid with potent pro-apoptotic activity against several types of cancer, but little is known regarding mechanisms leading to chemoresistance. Ceramide and, more recently, other sphingolipid species (e.g., dihydroceramide and dihydrosphingosine) have been implicated in 4-HPR-mediated tumor cell death. Because sphingolipid metabolism has been reported to be altered in drug-resistant tumor cells, we studied the implication of sphingolipids in acquired resistance to 4-HPR based on an acute lymphoblastic leukemia model. Methods -- CCRF-CEM cell lines resistant to 4-HPR were obtained by gradual selection. Endogenous sphingolipid profiles and in situ enzymatic activities were determined by LC/MS, and resistance to 4-HPR or to alternative treatments was measured using the XTT viability assay and annexin V-FITC/propidium iodide labeling. Results -- No major cross-resistance was observed against other antitumoral compounds (i.e., paclitaxel, cisplatin, doxorubicin hydrochloride) or agents (i.e., ultraviolet C, hydrogen peroxide) also described as sphingolipid modulators. CCRF-CEM cell lines resistant to 4-HPR exhibited a distinctive endogenous sphingolipid profile that correlated with inhibition of dihydroceramide desaturase. Cells maintained acquired resistance to 4-HPR after the removal of 4-HPR, even though the sphingolipid profile returned to control levels. On the other hand, combined treatments with sphingosine kinase inhibitors (unnatural (dihydro)sphingosines, (dh)Sph) and a glucosylceramide synthase inhibitor (PPMP), in the presence or absence of 4-HPR, increased cellular (dh)Sph (but not ceramide) levels and were highly toxic for both parental and resistant cells. Conclusions -- In the leukemia model, acquired resistance to 4-HPR is selective and persists in the absence of sphingolipid profile alteration.
Therapeutically, the data demonstrate that alternative sphingolipid-modulating antitumoral strategies are suitable for both 4-HPR-resistant and sensitive leukemia cells. Thus, whereas sphingolipids may not be critical for maintaining resistance to 4-HPR, manipulation of cytotoxic sphingolipids should be considered a viable approach for overcoming resistance.
Abstract:
In this paper, common criteria for residual strength evaluation used at home and abroad are reviewed, and seven methods are identified, namely the ASME-B31G, DM, WES-2805-97, CVDA-84, Burdekin, Irwin, and J-integral methods. A back-propagation (BP) neural network is combined with a genetic algorithm (GA), referred to as the modified BP-GA method, to successfully predict the residual strength and critical pressure of corroded water-injection pipelines. Worked examples show that the results of the different methods differ considerably: the calculated values of the WES-2805-97, ASME-B31G, and CVDA-84 criteria and the Irwin fracture mechanics model are conservative and higher than those of the J-integral method, whereas the calculated values of the Burdekin and DM fracture mechanics models are unsafe and lower than those of the J-integral method, and the calculated values of the modified BP-GA method are close to those of the J-integral method. Therefore, the modified BP-GA and J-integral methods are considered the better methods for calculating the residual strength and critical pressure of corroded water-injection pipelines.
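To make one of the named criteria concrete, here is a hedged sketch of the modified B31G (0.85 dL) residual-strength calculation, one common variant of the ASME-B31G family; the formula is the standard published one, but the pipe dimensions, defect sizes, and steel grade below are invented example values, not data from the paper.

```python
import math

def modified_b31g_pressure(D, t, d, L, smys):
    """Failure pressure (MPa) of a pipe with a corrosion defect.

    D: outside diameter (mm), t: wall thickness (mm),
    d: defect depth (mm), L: defect axial length (mm),
    smys: specified minimum yield strength (MPa).
    """
    z = L**2 / (D * t)
    if z <= 50:
        M = math.sqrt(1 + 0.6275 * z - 0.003375 * z**2)  # Folias bulging factor
    else:
        M = 0.032 * z + 3.3
    s_flow = smys + 69.0                                  # flow stress = SMYS + 69 MPa
    s_fail = s_flow * (1 - 0.85 * d / t) / (1 - 0.85 * d / (t * M))
    return 2 * s_fail * t / D                             # Barlow's formula

# hypothetical example: 508 mm OD, 8 mm wall, 50%-deep, 100 mm long defect, X52-like steel
p_corroded = modified_b31g_pressure(508, 8, 4, 100, 358)
p_intact = 2 * (358 + 69.0) * 8 / 508                     # same formula, no defect
```

The ratio p_corroded / p_intact is the kind of residual-strength figure the seven criteria disagree about, and the quantity the BP-GA model is trained to predict.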
Abstract:
227 pages.
Abstract:
8 pages.
Abstract:
A two-stage sampling design is used to estimate the variances of the numbers of yellowfin in different age groups caught in the eastern Pacific Ocean. For purse seiners, the primary sampling unit (n) is a brine well containing fish from a month-area stratum; the fish lengths (m) measured from each well are the secondary units. The fish cannot be selected at random from the wells because of practical limitations. The effects of different sampling methods and other factors on the reliability and precision of statistics derived from the length-frequency data were therefore examined, and modifications are recommended where necessary. Lengths of fish measured during the unloading of six test wells revealed two forms of inherent size stratification: 1) short-term disruptions of the existing pattern of sizes, and 2) transition zones between long-term trends in sizes. To some degree, all wells exhibited cyclic changes in mean size and variance during unloading. In half of the wells, size selection by the unloaders was observed to induce a change in mean size. As a result of stratification, the sequence of sizes removed from all wells was non-random, regardless of whether a well contained fish from a single set or from more than one set. The number of modal sizes in a well was not related to the number of sets. In an additional well composed of fish from several sets, an experiment on vertical mixing indicated that a representative sample of the contents may be restricted to the bottom half of the well. The contents of the test wells were used to generate 25 simulated wells and to compare the results of three sampling methods applied to them. The methods were: (1) random sampling (also used as a standard), (2) protracted sampling, in which the selection process was extended over a large portion of a well, and (3) measuring fish consecutively during removal from the well.
Repeated sampling by each method with different combinations of n and m indicated that, because the principal source of size variation occurred among primary units, increasing n was the most effective way to reduce the variance estimates of both the age-group sizes and the total number of fish in the landings. Protracted sampling largely circumvented the effects of size stratification, and its performance was essentially comparable to that of random sampling; sampling by this method is recommended. Consecutive-fish sampling produced more biased estimates with greater variances. Analysis of the 1988 length-frequency samples indicated that, for age groups that appear most frequently in the catch, a minimum sampling frequency of one primary unit in six for each month-area stratum would reduce the coefficients of variation (CV) of their size estimates to approximately 10 percent or less. Additional stratification of samples by set type, rather than by month-area alone, further reduced the CVs of scarce age groups, such as the recruits, and potentially improved their accuracy. The CVs of recruitment estimates for completely-fished cohorts during the 1981-84 period were in the vicinity of 3 to 8 percent. Recruitment estimates and their variances were also relatively insensitive to changes in the individual quarterly catches and variances, respectively, of which they were composed.
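The central finding, that increasing the number of primary units (n) reduces variance more effectively than increasing fish per well (m) when most size variation lies between wells, can be checked with a small Monte Carlo sketch; the variance components and mean below are assumed values for illustration, not the report's estimates.

```python
import numpy as np

rng = np.random.default_rng(1)
sigma_b, sigma_w = 15.0, 5.0      # between-well and within-well SD (assumed)
mu = 100.0                        # true mean fish length (assumed)

def estimate_mean(n, m):
    """Sample n wells and m fish per well; return the estimated mean length."""
    well_means = rng.normal(mu, sigma_b, size=n)
    fish = rng.normal(well_means[:, None], sigma_w, size=(n, m))
    return fish.mean()

reps = 2000
# both designs measure 200 fish in total
few_wells = np.std([estimate_mean(5, 40) for _ in range(reps)])
many_wells = np.std([estimate_mean(20, 10) for _ in range(reps)])
# theoretical SE ~ sqrt(sigma_b**2 / n + sigma_w**2 / (n * m)),
# so spreading the same effort over more wells roughly halves the SE here
```

With these assumed components, the 20-well design cuts the standard error roughly in half relative to the 5-well design at identical total measuring effort, mirroring the recommendation to increase n.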
Abstract:
Artisanal fisheries resources and their exploitation trends in Cross River State were evaluated using questionnaire and Participatory Rapid Appraisal (PRA) methods over a period of 5 years (1991-1995). There was open-access (unrestricted) fishing in Cross River State within the period under review. It was also found that there were no proper records from the Department of Fisheries. There was a decline in the relative abundance of the stocks that constituted the important marine and freshwater fisheries in Cross River State. Artisanal (small-scale) fisheries yielded 146,076 metric tons within the period. It was therefore suggested that the government should enforce policies that restrict open access in marine artisanal fisheries in Cross River State.
Abstract:
The levels and distribution of some heavy metals, viz. cadmium, lead, copper, zinc, and cobalt, in five commercially important fishes, water, and sediments at three different locations in Kainji Lake were determined using standard methods. The results show that the ranges of heavy metals (µg/g) in fishes at the Dam site location are: Cd (0.05±0.01-20±01), Pb (ND-1.12±1), Cu (0.81±25-2.93±06), Zn (20.89±.15-36.78±2.97), Co (0.08±01-0.27±02); at Cover Dam the ranges are Cd (0.04±02-0.16±0.2), Pb (ND-02±01), Cu (0.75±05-2.61±13), Zn (15.70±1.55-32.23±2.70), Co (0.04±02-0.25±0.01); and at Yuna they are Cd (0.05±01-0.14±02), Pb (ND-0.32±01), Cu (0.23±07-2.70±05), Zn (15.50±.35-25.62±2.47), Co (0.07±02-23±0.01). The metal concentrations (mg/l) in the water samples from the Dam site, Cover Dam, and Yuna, respectively, are Cd (0.007±001, .004±001 and 0.005±001), Pb (013±001, ND and ND), Cu (.055±008, .030±007, 05±.010), Zn (0.13±01, 0.060±.0055) and Co (.026±.022±.004, .024±.004), while the metal concentrations (µg/g) in the sediment samples from the Dam site, Cover Dam, and Yuna, respectively, are Cd (.05±.01, .02±.01), Pb (16.00±1.00, ND and 9.33±1.01), Cu (24.00±1.34, 4.26±.91 and 11.08±1.32), Zn (42.00±1.00, 35±10 and 38.00±.45), Co (15.00±1.17, 8.69±1.21 and 10.91±44). The concentrations of the tested heavy metals are within the acceptable standards of WHO (1987a).
Abstract:
Jet noise reduction is an important goal within both commercial and military aviation. Although large-scale numerical simulations are now able to simultaneously compute turbulent jets and their radiated sound, low-cost, physically motivated models are needed to guide noise-reduction efforts. A particularly promising modeling approach centers around certain large-scale coherent structures, called wavepackets, that are observed in jets and their radiated sound. The typical approach to modeling wavepackets is to approximate them as linear modal solutions of the Euler or Navier-Stokes equations linearized about the long-time mean of the turbulent flow field. The near-field wavepackets obtained from these models show compelling agreement with those educed from experimental and simulation data for both subsonic and supersonic jets, but the acoustic radiation is severely under-predicted in the subsonic case. This thesis contributes to two aspects of these models. First, two new solution methods are developed that can be used to efficiently compute wavepackets and their acoustic radiation, reducing the computational cost of the model by more than an order of magnitude. The new techniques are spatial integration methods and constitute a well-posed, convergent alternative to the frequently used parabolized stability equations. Using concepts related to well-posed boundary conditions, the methods are formulated for general hyperbolic equations and thus have potential applications in many fields of physics and engineering. Second, the nonlinear and stochastic forcing of wavepackets is investigated with the goal of identifying and characterizing the missing dynamics responsible for the under-prediction of acoustic radiation by linear wavepacket models for subsonic jets.
Specifically, we use ensembles of large-eddy-simulation flow and force data along with two data decomposition techniques to educe the actual nonlinear forcing experienced by wavepackets in a Mach 0.9 turbulent jet. Modes with high energy are extracted using proper orthogonal decomposition, while high gain modes are identified using a novel technique called empirical resolvent-mode decomposition. In contrast to the flow and acoustic fields, the forcing field is characterized by a lack of energetic coherent structures. Furthermore, the structures that do exist are largely uncorrelated with the acoustic field. Instead, the forces that most efficiently excite an acoustic response appear to take the form of random turbulent fluctuations, implying that direct feedback from nonlinear interactions amongst wavepackets is not an essential noise source mechanism. This suggests that the essential ingredients of sound generation in high Reynolds number jets are contained within the linearized Navier-Stokes operator rather than in the nonlinear forcing terms, a conclusion that has important implications for jet noise modeling.
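The first of the two decompositions used above, proper orthogonal decomposition, reduces to an SVD of the mean-subtracted snapshot matrix; the following is a minimal sketch on synthetic "flow" data (two planted coherent structures plus noise), not the thesis's LES ensembles.

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(0, 2 * np.pi, 64)          # spatial grid
t = np.linspace(0, 10, 200)                # snapshot times

# two coherent spatial structures with time-varying amplitudes, plus noise
q = (np.outer(np.sin(x), np.cos(2 * t))
     + 0.5 * np.outer(np.sin(2 * x), np.sin(3 * t))
     + 0.05 * rng.standard_normal((64, 200)))

# snapshot POD: SVD of the mean-subtracted data matrix
q_mean = q.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(q - q_mean, full_matrices=False)

energy = s**2 / np.sum(s**2)               # modal energy fractions
# POD modes are the columns of U; time coefficients are s[:, None] * Vt
```

The leading two modes capture nearly all the energy here, which is the sense in which POD extracts "modes with high energy"; empirical resolvent-mode decomposition instead ranks structures by the gain of the response they excite, which is why the two methods can disagree about the forcing field.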
Abstract:
This research had as its primary objective to model different types of problems using linear programming and to apply different methods so as to find an adequate solution to them. To achieve this objective, a linear programming problem and its dual were studied and compared. For that, linear programming techniques were presented and an introduction to duality theory was given, analyzing the dual problem and the duality theorems. Then, a general economic interpretation was given, and optimal dual variables such as shadow prices were studied through the following practical case: an aesthetic surgery hospital wanted to organize its monthly waiting list of four types of surgeries to maximize its daily income. To solve this practical case, we modelled the linear programming problem following the relationships between the primal problem and its dual. Additionally, we solved the dual problem graphically, and then found the optimal solution of the practical case through its dual, following the theorems of duality theory. Moreover, how complementary slackness can help to solve linear programming problems was studied. To facilitate the solution, the Excel Solver add-in and the WinQSB program were used.
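The primal-dual relationship and shadow prices can be reproduced with a small hedged sketch in SciPy (not the thesis's Excel Solver/WinQSB workflow); the toy income-maximization numbers below are invented, not the hospital case's data.

```python
from scipy.optimize import linprog

# maximize 3*x1 + 5*x2  subject to  x1 <= 4,  2*x2 <= 12,  3*x1 + 2*x2 <= 18
c = [-3.0, -5.0]                         # linprog minimizes, so negate the objective
A_ub = [[1, 0], [0, 2], [3, 2]]
b_ub = [4, 12, 18]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2, method="highs")

optimal_value = -res.fun                 # optimal income of the max problem
# HiGHS reports marginals for the minimization; flip signs for the max problem
shadow_prices = [-m for m in res.ineqlin.marginals]  # dual variables y1, y2, y3
```

Here the first constraint is slack at the optimum, so its shadow price is zero, while the binding constraints have positive shadow prices: exactly the complementary slackness relations discussed above.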
Abstract:
Background: Fentanyl is widely used off-label in the NICU. Our aim was to investigate its cerebral, cardiovascular and pulmonary effects, as well as its pharmacokinetics, in an experimental model for neonates. Methods: Fentanyl (5 µg/kg bolus immediately followed by a 90-minute infusion of 3 µg/kg/h) was administered to six mechanically ventilated newborn piglets. Cardiovascular, ventilation, pulmonary and oxygenation indexes, as well as brain activity, were monitored from T = 0 up to the end of the experiments (T = 225-300 min). Plasma samples were also drawn for quantification of fentanyl. Results: A "reliable degree of sedation" was observed up to T = 210-240 min, consistent with the selected dosing regimen and the observed fentanyl plasma levels. Unlike the cardiovascular parameters, which were unmodified except for an increasing trend in heart rate, some of the ventilation and oxygenation indexes, as well as brain activity, were significantly altered. The pulmonary and brain effects of fentanyl had mostly recovered between T = 210 min and the end of the experiment. Conclusion: The newborn piglet was shown to be a suitable experimental model for studying fentanyl disposition as well as its respiratory and cardiovascular effects in human neonates. Therefore, it could be extremely useful for further investigating the drug's behaviour under pathophysiological conditions.
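The bolus-plus-infusion regimen above can be sketched with a textbook one-compartment pharmacokinetic model; the dosing follows the abstract, but the body weight, volume of distribution, and half-life below are assumed illustrative values, not the study's fitted parameters.

```python
import numpy as np

wt = 2.0                 # piglet body weight, kg (assumed)
V = 4.0 * wt             # volume of distribution, L (assumed 4 L/kg)
t_half = 120.0           # elimination half-life, min (assumed)
k = np.log(2) / t_half   # first-order elimination rate constant, 1/min
CL = k * V               # clearance, L/min

bolus = 5.0 * wt         # 5 ug/kg bolus, in ug
rate = 3.0 * wt / 60.0   # 3 ug/kg/h infusion, in ug/min
T = 90.0                 # infusion duration, min

def conc(t):
    """Plasma concentration (ug/L) at time t (min): bolus decay + infusion."""
    c_bolus = (bolus / V) * np.exp(-k * t)
    t_inf = np.minimum(t, T)                       # infusion time elapsed
    c_inf = (rate / CL) * (1 - np.exp(-k * t_inf)) * np.exp(-k * (t - t_inf))
    return c_bolus + c_inf

t = np.linspace(0, 300, 301)
c = conc(t)              # rises during the infusion, then decays
```

Under these assumed parameters the concentration peaks at the end of the 90-minute infusion and then declines, consistent with the sedation window and subsequent recovery described in the results.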
Abstract:
In the problem of one-class classification (OCC), one of the classes, the target class, has to be distinguished from all other possible objects, considered as nontargets. This situation arises in many biomedical problems, for example in diagnosis, image-based tumor recognition, or analysis of electrocardiogram data. In this paper, an approach to OCC based on a typicality test is experimentally compared with reference state-of-the-art OCC techniques (Gaussian, mixture of Gaussians, naive Parzen, Parzen, and support vector data description) using biomedical data sets. We evaluate the ability of the procedures using twelve experimental data sets with not necessarily continuous data. As there are few benchmark data sets for one-class classification, all data sets considered in the evaluation have multiple classes. Each class in turn is considered as the target class, and the units in the other classes are considered as new units to be classified. The results of the comparison show the good performance of the typicality approach, which is applicable to high-dimensional data; it is worth mentioning that it can be used for any kind of data (continuous, discrete, or nominal), whereas applying the state-of-the-art approaches is not straightforward when nominal variables are present.
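The each-class-in-turn evaluation protocol described above can be sketched as follows, using a public multiclass data set (iris) and one reference technique (a one-class SVM, closely related to support vector data description) rather than the paper's typicality test or its twelve biomedical data sets.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.metrics import roc_auc_score
from sklearn.preprocessing import StandardScaler
from sklearn.svm import OneClassSVM

X, y = load_iris(return_X_y=True)
aucs = []
for target in np.unique(y):
    X_target = X[y == target]                  # fit using only the target class
    scaler = StandardScaler().fit(X_target)
    occ = OneClassSVM(gamma="scale", nu=0.1).fit(scaler.transform(X_target))
    # score every unit; units from the other classes act as the "new" nontargets
    scores = occ.decision_function(scaler.transform(X))
    aucs.append(roc_auc_score((y == target).astype(int), scores))
mean_auc = float(np.mean(aucs))
```

Note the key constraint of OCC that the sketch respects: scaling and fitting use target-class units only, since nontarget examples are assumed unavailable at training time.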
Abstract:
World Conference on Psychology and Sociology 2012
Abstract:
The potential of the 18S rRNA V9 metabarcoding approach for diet assessment was explored using MiSeq paired-end (PE; 2 × 150 bp) technology. To critically evaluate the method's performance with degraded/digested DNA, the diets of two zooplanktivorous fish species from the Bay of Biscay, European sardine (Sardina pilchardus) and European sprat (Sprattus sprattus), were analysed. The taxonomic resolution and quantitative potential of 18S V9 metabarcoding were first assessed both in silico and with mock and field plankton samples. Our method was capable of discriminating species within the reference database in a reliable way provided there was at least one variable position in the 18S V9 region. Furthermore, it successfully discriminated diet between the two fish species, including habitat and diel differences among sardines, overcoming some of the limitations of traditional visual-based diet analysis methods. The high sensitivity and semi-quantitative nature of the 18S V9 metabarcoding approach were supported by both visual microscopy and qPCR-based results. This molecular approach provides an alternative cost- and time-effective tool for food-web analysis.
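The discriminability criterion above, that two taxa are resolvable only if their V9 marker differs at one or more aligned positions, can be stated as a trivial sketch; the short sequences below are invented placeholders, not real 18S V9 data.

```python
# Invented reference "V9" sequences; prey_A and prey_B are deliberately identical.
references = {
    "prey_A": "ACGTACGTTA",
    "prey_B": "ACGTACGTTA",   # identical to prey_A -> not resolvable by this marker
    "prey_C": "ACGTTCGTTA",   # one variable position -> resolvable
}

def distinguishable(seq1, seq2):
    """True if the aligned marker sequences differ at >= 1 position."""
    return any(a != b for a, b in zip(seq1, seq2))

# pairwise discriminability table over the reference database
pairs = {
    (s1, s2): distinguishable(references[s1], references[s2])
    for s1 in references for s2 in references if s1 < s2
}
```

An in silico assessment of a real reference database amounts to running this check over all pairs: any pair sharing an identical V9 region collapses into a single unresolvable taxonomic unit in the diet analysis.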