892 results for Matrix Power Function


Relevance:

80.00%

Publisher:

Abstract:

de Araujo CC, Silva JD, Samary CS, Guimaraes IH, Marques PS, Oliveira GP, do Carmo LGRR, Goldenberg RC, Bakker-Abreu I, Diaz BL, Rocha NN, Capelozzi VL, Pelosi P, Rocco PRM. Regular and moderate exercise before experimental sepsis reduces the risk of lung and distal organ injury. J Appl Physiol 112: 1206-1214, 2012. First published January 19, 2012; doi:10.1152/japplphysiol.01061.2011. Physical activity modulates inflammation and immune response in both normal and pathologic conditions. We investigated whether regular and moderate exercise before the induction of experimental sepsis reduces the risk of lung and distal organ injury and improves survival. One hundred twenty-four BALB/c mice were randomly assigned to two groups: sedentary (S) and trained (T). Animals in the T group ran on a motorized treadmill at moderate intensity, 5% grade, 30 min/day, 3 times a week for 8 wk. Cardiac adaptation to exercise was evaluated using echocardiography. Systolic volume and left ventricular mass were increased in the T compared with the S group. Both T and S groups were further randomized to either sepsis induced by cecal ligation and puncture surgery (CLP) or sham operation (control). After 24 h, lung mechanics and histology, the degree of cell apoptosis in lung, heart, kidney, liver, and small intestine villi, and interleukin (IL)-6, KC (the murine functional homolog of IL-8), IL-1 beta, IL-10, and the number of cells in bronchoalveolar lavage (BALF) and peritoneal lavage (PLF) fluids as well as plasma were measured. In CLP animals, the T group compared with the S group showed: 1) improved survival; 2) reduced lung static elastance, alveolar collapse, collagen and elastic fiber content, number of neutrophils in BALF, PLF, and plasma, as well as lung and distal organ cell apoptosis; and 3) increased IL-10 in BALF and plasma, with reduced IL-6, KC, and IL-1 beta in PLF. In conclusion, regular and moderate exercise before the induction of sepsis reduced the risk of lung and distal organ damage, thus increasing survival.


The age and growth of the sand sole Pegusa lascaris from the Canarian Archipelago were studied from 2107 fish collected between January 2005 and December 2007. To find an appropriate method for age determination, sagittal otoliths were observed by surface reading and by frontal section, and the results were compared. The two methods did not differ significantly in estimated age, but the surface-reading method is superior in terms of cost and time efficiency. The sand sole has a moderate life span, with ages up to 10 years recorded. Individuals grow quickly in their first two years, attaining approximately 48% of their maximum standard length; after the second year, their growth rate drops rapidly as energy is diverted to reproduction. Males and females show dimorphism in growth, with females reaching a slightly greater length and age than males. Von Bertalanffy, seasonalized von Bertalanffy, Gompertz, and Schnute growth models were fitted to length-at-age data. Akaike weights for the seasonalized von Bertalanffy growth model indicated that the probability of choosing the correct model from the group of models used was >0.999 for males and females. The seasonalized von Bertalanffy growth parameters estimated were: L∞ = 309 mm standard length, k = 0.166 yr⁻¹, t0 = −1.88 yr, C = 0.347, and ts = 0.578 for males; and L∞ = 318 mm standard length, k = 0.164 yr⁻¹, t0 = −1.653 yr, C = 0.820, and ts = 0.691 for females. Fish standard length and otolith radius are closely correlated (R² = 0.902). The relation between standard length and otolith radius is described by a power function (a = 85.11, v = 0.906).
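The seasonalized growth model above is usually written in the Somers (1988) form of the von Bertalanffy equation. A minimal sketch evaluating it with the male parameters quoted in the abstract (the function name is my own):

```python
import math

def seasonal_vbgf(t, linf, k, t0, c, ts):
    """Somers (1988) seasonalized von Bertalanffy growth function.

    t    : age (yr)
    linf : asymptotic standard length (mm)
    k    : growth coefficient (1/yr)
    t0   : theoretical age at zero length (yr)
    c    : amplitude of the seasonal growth oscillation
    ts   : phase of the seasonal oscillation
    """
    s  = (c * k / (2 * math.pi)) * math.sin(2 * math.pi * (t - ts))
    s0 = (c * k / (2 * math.pi)) * math.sin(2 * math.pi * (t0 - ts))
    return linf * (1.0 - math.exp(-k * (t - t0) - s + s0))

# Male parameters reported in the abstract
males = dict(linf=309.0, k=0.166, t0=-1.88, c=0.347, ts=0.578)
sl_age2 = seasonal_vbgf(2.0, **males)
print(round(sl_age2, 1))  # predicted standard length (mm) at age 2
```

Consistent with the abstract, the predicted length at age 2 is close to half of L∞, and the curve approaches L∞ = 309 mm asymptotically.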


Work carried out by: Packard, T. T., Osma, N., Fernández Urruzola, I., Gómez, M.


Verification assesses the quality of quantitative precipitation forecasts (QPF) against observations and provides evidence of systematic model errors. Using the feature-based technique SAL, simulated precipitation distributions are analysed with respect to (S)tructure, (A)mplitude and (L)ocation. For some years, numerical weather prediction models have been operated with grid spacings that allow deep convection to be simulated without parameterization, and the question now is whether these models deliver better forecasts. The high-resolution hourly observational dataset used in this work is a combination of radar and station measurements. With it, first, the example of the German COSMO models shows that the newest-generation models simulate the mean diurnal cycle better, albeit with too weak a maximum that occurs somewhat too late; in contrast, the old-generation models produce too strong a maximum considerably too early. Second, the new model achieves a better simulation of the spatial distribution of precipitation by markedly reducing the windward/lee problem. To quantify these subjective assessments, daily QPFs from four models for Germany over an eight-year period were examined with SAL as well as with classical scores. The higher-resolution models simulate more realistic precipitation distributions (better in S), but the other components show hardly any difference. A further aspect is that the model with the coarsest resolution (ECMWF) is rated clearly best by the RMSE, which illustrates the 'double penalty' problem. Combining the three SAL components shows that, especially in summer, the most finely resolved model (COSMO-DE) performs best, mainly thanks to a more realistic structure; SAL thus provides helpful information and confirms the subjective assessment.

In 2007 the COPS and MAP D-PHASE projects took place, offering the opportunity to compare 19 models from three model categories with respect to their forecast performance over southwestern Germany for accumulation periods of 6 and 12 hours. Results worth highlighting are that (i) the smaller the grid spacing of a model, the more realistic its simulated precipitation distributions; (ii) the high-resolution models simulate less precipitation, usually too little; and (iii) the location component is simulated worst by all models. Analysing the forecast performance of these model types for convective situations reveals clear differences. Under high-pressure conditions the models without convection parameterization fail to simulate the convection, whereas the models with convection parameterization produce the right amount but structures that are too widespread. For convective events associated with fronts, both model types can simulate the precipitation distribution, with the high-resolution models delivering the more realistic fields. This weather-regime-based investigation is made more systematic by using the convective timescale. A climatology compiled for Germany for the first time shows that the frequency of this timescale falls off towards larger values following a power function. The SAL results differ dramatically between the two regimes: for small values of the convective timescale they are good, whereas for large values both structure and amplitude are clearly overestimated.

For precipitation forecasts with very high temporal resolution, the influence of timing errors becomes increasingly important. These errors can be determined by optimizing (minimizing) the L component of SAL within a time window (±3 h) centred on the observation time. It is shown that, at the optimal time shift, the structure and amplitude of the QPFs for COSMO-DE improve, which better demonstrates the model's fundamental ability to simulate the precipitation distribution realistically.
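SAL's structure and location components require identifying precipitation objects in the field, but its amplitude component is simply the difference of domain-mean precipitation normalized by their average (Wernli et al., 2008). A minimal sketch on invented synthetic fields:

```python
import numpy as np

def sal_amplitude(forecast, observed):
    """Amplitude component of SAL (Wernli et al., 2008): the difference of
    domain-mean precipitation, normalized by its mean; bounded in [-2, 2],
    with 0 a perfect amplitude and +1 a threefold overestimation."""
    d_mod = float(np.mean(forecast))
    d_obs = float(np.mean(observed))
    return (d_mod - d_obs) / (0.5 * (d_mod + d_obs))

rng = np.random.default_rng(0)
obs = rng.gamma(shape=0.5, scale=2.0, size=(50, 50))  # synthetic rain field (mm/h)
fcst = 1.5 * obs                                      # uniform 50% overestimation
print(round(sal_amplitude(fcst, obs), 3))
```

Because the component is scale-invariant, a uniform 50% overestimation yields A = 0.4 regardless of the underlying field.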


Despite widespread use of species-area relationships (SARs), dispute remains over the most representative SAR model. Using data from small-scale SARs of Estonian dry grassland communities, we address three questions: (1) Which model describes these SARs best when known artifacts are excluded? (2) How do deviating sampling procedures (marginal instead of central position of the smaller plots in relation to the largest plot; single values instead of average values; randomly located subplots instead of nested subplots) influence the properties of the SARs? (3) Are those effects likely to bias the selection of the best model? Our general dataset consisted of 16 series of nested plots (1 cm² to 100 m², any-part system), each of which comprised five series of subplots located in the four corners and the centre of the 100-m² plot. Data for the three pairs of compared sampling designs were generated from this dataset by subsampling. Five function types (power, quadratic power, logarithmic, Michaelis-Menten, Lomolino) were fitted with non-linear regression. In some of the communities, we found extremely high species densities (including bryophytes and lichens), namely up to eight species in 1 cm² and up to 140 species in 100 m², which appear to be the highest documented values on these scales. For SARs constructed from nested-plot average-value data, the regular power function generally was the best model, closely followed by the quadratic power function, while the logarithmic and Michaelis-Menten functions performed poorly throughout. The relative fit of the latter two models increased significantly relative to the respective best model when the single-value or random-sampling method was applied, although the power function normally remained far superior. These results confirm the hypothesis that both single-value and random-sampling approaches cause artifacts by increasing stochasticity in the data, which can lead to the selection of inappropriate models.
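The regular power function that won the model comparison, S = c·A^z, is linear in log-log space, which gives a quick way to estimate its parameters. The authors fitted all five models with non-linear regression on the raw data; the log-log least-squares shortcut and the richness values below are illustrative only:

```python
import numpy as np

# Hypothetical nested-plot data: plot area (m^2) vs. mean species richness
areas = np.array([1e-4, 1e-3, 1e-2, 1e-1, 1.0, 10.0, 100.0])
richness = np.array([4.0, 8.5, 17.0, 33.0, 62.0, 110.0, 140.0])

# The power SAR, S = c * A**z, is linear in log-log space:
# log10 S = log10 c + z * log10 A
z, log_c = np.polyfit(np.log10(areas), np.log10(richness), deg=1)
c = 10.0 ** log_c
print(f"c = {c:.1f}, z = {z:.3f}")
```

Fitting in log space weights the small plots more heavily than non-linear regression on raw richness does, which is one reason the two approaches can rank models differently.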


The jumbo flying squid, Dosidicus gigas, supports an important squid fishery off the Exclusive Economic Zone (EEZ) of Chilean waters. However, only limited information about its biology is available. In this study, the age, growth and population structure of D. gigas were studied using statoliths from 533 specimens (386 females and 147 males) randomly sampled in the Chinese squid jigging surveys from 2007 to 2008 off the EEZ of Chile. Mantle lengths (ML) of the sample ranged from 206 to 702 mm, and ages were estimated at 150 to 307 days for females and 127 to 302 days for males. At least two spawning groups were identified: the main spawning peak tended to occur between August and November (austral spring group), and a secondary peak appeared from March to June (austral autumn group). The ML-age relationship was best modelled by a linear function for the austral spring group and a power function for the austral autumn group, and the body weight (BW)-age relationship was best described by an exponential function for both groups. Instantaneous relative growth rates and absolute growth rates for ML and BW did not differ significantly between the two groups. The growth rate of D. gigas tended to be high at young stages and then decreased after the sub-adult stage (>180 days old). This study suggests large spatial and temporal variability in key life-history parameters of D. gigas, calling for the collection of more data at fine spatial and temporal scales to further improve our understanding of the fishery biology of this species.
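The two growth statistics compared between the spawning groups are standard quantities. A minimal sketch using the ML and age extremes quoted in the abstract (pairing the extremes is purely illustrative, since they do not belong to a single cohort):

```python
import math

def absolute_growth_rate(l1, l2, t1, t2):
    """Absolute growth rate: size gained per unit time (here mm/day)."""
    return (l2 - l1) / (t2 - t1)

def instantaneous_relative_growth_rate(l1, l2, t1, t2):
    """Instantaneous relative growth rate in % per day:
    100 * (ln l2 - ln l1) / (t2 - t1)."""
    return 100.0 * (math.log(l2) - math.log(l1)) / (t2 - t1)

# ML and age extremes quoted in the abstract (illustrative pairing only)
agr = absolute_growth_rate(206.0, 702.0, 127.0, 307.0)
irgr = instantaneous_relative_growth_rate(206.0, 702.0, 127.0, 307.0)
print(round(agr, 2), round(irgr, 2))   # mm/day and %/day
```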


This paper proposes asymptotically optimal tests for an unstable parameter process under the realistic circumstance that the researcher has little information about the unstable parameter process and the error distribution, and gives conditions under which knowledge of those processes provides no asymptotic power gain. I first derive a test under a known error distribution, which is asymptotically equivalent to LR tests for correctly identified unstable parameter processes under suitable conditions. The conditions are weak enough to cover a wide range of unstable processes, such as various types of structural breaks and time-varying parameter processes. The test is then extended to semiparametric models in which the underlying distribution is unknown and treated as an infinite-dimensional nuisance parameter. The semiparametric test is adaptive in the sense that its asymptotic power function is equivalent to the power envelope under a known error distribution.


Bio-optical characteristics of phytoplankton were observed during two years of monitoring in the western Black Sea. High variability in the light absorption coefficient of phytoplankton was due to changes in pigment concentration and in the chlorophyll a specific absorption coefficient. Relationships between light absorption coefficients and chlorophyll a concentration were found for the blue maximum (a_ph(440) = 0.0413x**0.628; R**2 = 0.63) and for the red maximum (a_ph(678) = 0.0190x**0.843; R**2 = 0.83). Chlorophyll a specific absorption coefficients decreased as pigment concentration in the sea increased. The observed variability in the chlorophyll a specific absorption coefficient at chlorophyll a concentrations <1.0 mg/m**3 had seasonal features and was related to seasonal change in intracellular pigment concentration. The ratio between the blue and red maxima decreased with increasing chlorophyll a concentration (ratio = 2.14x**-0.20; R**2 = 0.41). Variability of the spectrally averaged absorption coefficient of phytoplankton (a'_ph) depended by 95% on the absorption coefficient at the blue maximum (y = 0.421x; R**2 = 0.95). The relation of a'_ph to chlorophyll a concentration was described by a power function (y = 0.0173x**0.0709; R**2 = 0.65). Changes in spectral shape were mainly driven by the seasonal dynamics of intracellular pigment concentration, and partly by the taxonomic and cell-size structure of phytoplankton.
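The two power laws quoted above make the reported trends easy to verify numerically: because both exponents are below 1, the chlorophyll-specific coefficient falls as pigment concentration rises, and because the blue exponent is smaller than the red one, the blue/red ratio falls too. A minimal sketch (function names are my own):

```python
def a_ph_blue(chl):
    """Absorption at the blue maximum, a_ph(440) = 0.0413 * Chl**0.628 (m^-1)."""
    return 0.0413 * chl ** 0.628

def a_ph_red(chl):
    """Absorption at the red maximum, a_ph(678) = 0.0190 * Chl**0.843 (m^-1)."""
    return 0.0190 * chl ** 0.843

def specific_a_ph_blue(chl):
    """Chlorophyll-specific absorption a*_ph(440) = a_ph(440) / Chl; it falls
    with Chl because the exponent 0.628 < 1 (pigment packaging)."""
    return a_ph_blue(chl) / chl

for chl in (0.1, 1.0, 10.0):   # chlorophyll a, mg/m**3
    print(chl, round(a_ph_blue(chl), 4), round(specific_a_ph_blue(chl), 4))
```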


The high integration density of current nanometer technologies allows the implementation of complex floating-point applications in a single FPGA. In this work the intrinsic complexity of floating-point operators is addressed, targeting configurable devices and making design decisions that provide the most suitable performance/standard-compliance trade-offs. A set of floating-point libraries composed of adder/subtracter, multiplier, divider, square root, exponential, logarithm and power function is presented. Each library has been designed taking into account the special characteristics of current FPGAs, and for this purpose we have adapted the (software-oriented) IEEE floating-point standard to a custom FPGA-oriented format. Extended experimental results validate the design decisions made and prove the usefulness of reducing the format complexity.


The main objective of this thesis is the development of numerical tools based on full-wave techniques for the computer-aided design (CAD) of microwave devices. In this context, a numerical tool based on the finite element method is developed for the design and analysis of printed antennas by means of optimization algorithms. The technique divides the analysis of an antenna into two parts. A 3D analysis is carried out only once per frequency point of the operating band, with the surface containing the patch metallization replaced by artificial ports. In a second part, the surface supporting a metallization is inserted between the artificial ports of the 3D structure, and a 2D analysis characterizes the behaviour of the antenna. The proposed technique can be embedded in an optimization algorithm that defines the antenna profile needed to meet the design objectives. It is validated experimentally by designing wideband printed antennas for different applications through optimization of the patch profiles. This thesis also develops a procedure based on the domain decomposition method and the finite element method for the design of passive microwave devices, applied in particular to the design and tuning of microwave filters. In the first stage of its application, the structure to be analysed is divided into subdomains by domain decomposition; each segment can then be analysed separately with the most suitable method. Since some subdomains admit analytical methods, the analysis time is reduced; numerical methods are used for the subdomains that cannot be analysed analytically, and in this thesis the finite element method is used to carry out that analysis. In addition to domain decomposition, a frequency-sweep process is applied to reduce analysis times, using the reduced-basis technique as the reduced-order model. The procedure has been used to design and tune several example filters in order to verify its validity. The results demonstrate its usefulness and confirm its rigour, accuracy and efficiency in the design of microwave filters.

ABSTRACT — The main objective of this thesis is the development of numerical tools based on full-wave techniques for the computer-aided design (CAD) of microwave devices. In this context, a numerical technique based on the finite element method (FEM) for the design and analysis of printed antennas using optimization algorithms has been developed. The proposed technique consists of dividing the analysis of the antenna into two stages. In the first stage, the regions of the antenna that do not need to be modified during the CAD process are characterized only once through their matrix transfer function (Generalized Admittance Matrix, GAM). The regions that will be modified — precisely those that will contain the conducting surfaces of the printed antenna — are defined as artificial ports. In the second stage, the contour shape of the conducting surfaces is iteratively modified to achieve the desired electromagnetic performance of the antenna; a new GAM of the radiating device that accounts for each printed antenna shape is computed after each iteration. The proposed technique can be implemented with a genetic algorithm to achieve the design objectives. It is validated experimentally and applied to the design of wideband printed antennas for different applications by optimizing the shape of the radiating device. In addition, a procedure based on the domain decomposition method and the finite element method has been developed for the design of passive microwave devices, in particular for the design and tuning of microwave filters. In the first stage of its implementation, the structure to be analysed is divided into subdomains using the domain decomposition method; each subdomain can then be analysed separately with a suitable method. Since some subdomains admit analytical methods, the analysis time is reduced; the FEM is used for those that do not. In addition to domain decomposition, a frequency-sweep process is applied to reduce analysis times, with the reduced-basis technique used as the reduced-order model. The procedure has been applied to the design and tuning of several example filters in order to check its validity. The obtained results confirm its usefulness, rigour, accuracy and efficiency for the design of microwave filters.


The high performance and capacity of current FPGAs make them suitable as acceleration co-processors. This article studies the implementation, for such accelerators, of the floating-point power function x^y as defined by the C99 and IEEE 754-2008 standards, generalized here to arbitrary exponent and mantissa sizes. Last-bit accuracy at the smallest possible cost is obtained thanks to a careful study of the various subcomponents: a floating-point logarithm, a modified floating-point exponential, and a truncated floating-point multiplier. A parameterized architecture generator in the open-source FloPoCo project is presented in detail and evaluated.
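The decomposition the article describes — a logarithm, a multiplier and an exponential — is the standard data path for x^y. A software sketch of that path, for positive x only:

```python
import math

def pow_via_log_exp(x, y):
    """x**y computed as exp(y * log(x)) -- the data path described above,
    sketched in software for positive x only; the IEEE special cases
    (x <= 0, infinities, NaN) take separate paths in a real operator."""
    if x <= 0.0:
        raise ValueError("only the x > 0 data path is sketched here")
    return math.exp(y * math.log(x))

print(pow_via_log_exp(2.0, 10.0))   # ~1024, up to double-precision rounding
```

Last-bit accuracy, the article's goal, is exactly what this naive composition does not guarantee: the relative error of the logarithm is amplified by the multiplication with y before it reaches the exponential, which is why the hardware operator widens its intermediate formats.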


This paper illustrates how to design a visual experiment to measure color differences in gonioapparent materials and how to assess the merits of different advanced color-difference formulas in predicting the results of such an experiment. Successful color-difference formulas are necessary for industrial quality control and artificial color-vision applications. A color-difference formula must be accurate under a wide variety of experimental conditions, including the use of challenging materials such as gonioapparent samples. Improving the experimental design of a previous paper [Melgosa et al., Optics Express 22, 3458-3467 (2014)], we have tested 11 advanced color-difference formulas on visual assessments performed by a panel of 11 observers with normal color vision using a set of 56 nearly achromatic color pairs of automotive gonioapparent samples. The best predictions of our experimental results were found for the AUDI2000 color-difference formula, followed by color-difference formulas based on the color appearance model CIECAM02. Optimizing the parameters of the original weighting function for lightness in the AUDI2000 formula yielded small improvements. However, applying a power function to the results of the AUDI2000 formula improved them considerably, producing values close to the inter-observer variability in our visual experiment. Additional research is required to obtain a modified AUDI2000 color-difference formula significantly better than the current one.


Influential models of edge detection have generally supposed that an edge is detected at peaks in the 1st derivative of the luminance profile, or at zero-crossings in the 2nd derivative. However, when presented with blurred triangle-wave images, observers consistently marked edges not at these locations but at peaks in the 3rd derivative. This new phenomenon, termed 'Mach edges', persisted when a luminance ramp was added to the blurred triangle-wave. Modelling these Mach edge detection data required the addition of a physiologically plausible filter prior to the 3rd-derivative computation. A viable alternative model was examined on the basis of data obtained with short-duration, high spatial-frequency stimuli. Detection and feature-marking methods were used to examine the perception of Mach bands in an image set that spanned a range of Mach band detectabilities. A scale-space model that computed edge and bar features in parallel provided a better fit to the data than 4 competing models that combined information across scales in a different manner, or computed edge or bar features at a single scale. The perception of luminance bars was examined in 2 experiments. Data for one image set suggested a simple rule for the perception of a small Gaussian bar on a larger inverted Gaussian bar background. In previous research, discriminability (d') has typically been reported to be a power function of contrast, with an exponent (p) of 2 to 3. However, using bar, grating, and Gaussian edge stimuli with several methodologies, values of p were obtained that ranged from 1 to 1.7 across 6 experiments. This novel finding was explained by appealing to low stimulus uncertainty, or a near-linear transducer.
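A transducer of the form d' = (c/c0)^p makes the role of the exponent concrete: p is exactly the slope of d' versus contrast on log-log axes, which is how such exponents are read off psychophysical data. The values below are illustrative, not fitted to the experiments above:

```python
import math

def dprime(c, c0, p):
    """Transducer d' = (c / c0)**p: c0 is the contrast at which d' = 1,
    and p is the slope of d' vs. contrast on log-log axes."""
    return (c / c0) ** p

# Illustrative parameter values only
c0, p = 0.01, 1.5
c1, c2 = 0.02, 0.08
slope = (math.log(dprime(c2, c0, p)) - math.log(dprime(c1, c0, p))) / \
        (math.log(c2) - math.log(c1))
print(round(slope, 6))   # recovers p
```

A p near 1 corresponds to a near-linear transducer, which is one of the two explanations the abstract offers for the low exponents observed.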


In human (D. H. Baker, T. S. Meese, & R. J. Summers, 2007b) and in cat (B. Li, M. R. Peterson, J. K. Thompson, T. Duong, & R. D. Freeman, 2005; F. Sengpiel & V. Vorobyov, 2005) there are at least two routes to cross-orientation suppression (XOS): a broadband, non-adaptable, monocular (within-eye) pathway and a more narrowband, adaptable interocular (between the eyes) pathway. We further characterized these two routes psychophysically by measuring the weight of suppression across spatio-temporal frequency for cross-oriented pairs of superimposed flickering Gabor patches. Masking functions were normalized to unmasked detection thresholds and fitted by a two-stage model of contrast gain control (T. S. Meese, M. A. Georgeson, & D. H. Baker, 2006) that was developed to accommodate XOS. The weight of monocular suppression was a power function of the scalar quantity ‘speed’ (temporal-frequency/spatial-frequency). This weight can be expressed as the ratio of non-oriented magno- and parvo-like mechanisms, permitting a fast-acting, early locus, as befits the urgency for action associated with high retinal speeds. In contrast, dichoptic-masking functions superimposed. Overall, this (i) provides further evidence for dissociation between the two forms of XOS in humans, and (ii) indicates that the monocular and interocular varieties of XOS are space/time scale-dependent and scale-invariant, respectively. This suggests an image-processing role for interocular XOS that is tailored to natural image statistics—very different from that of the scale-dependent (speed-dependent) monocular variety.


A method of basic matrices is proposed for analysing the Leontief model (LM) when some of its components are fuzzily specified. The LM can be construed as a forecasting task for product expenses and output on the basis of known statistical information, with several elements of the technological matrix, the restriction vector and the variable limits given fuzzily. Elements of the technological matrix and the right-hand sides of the LM restriction vector may be functions of some arguments; in that case a dynamic analogue of the task arises. An essential complication of the LM lies in including restrictions on the variables and a criterion function in it.
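In its crisp form the Leontief model determines gross output x from the technological matrix A and final demand d via x = Ax + d, i.e. (I − A)x = d; the fuzzy-element extension discussed above builds on this base case. A minimal sketch with an invented two-sector economy:

```python
import numpy as np

# Crisp Leontief quantity model: gross output x satisfies x = A x + d
A = np.array([[0.2, 0.3],
              [0.1, 0.4]])      # invented 2-sector technological matrix
d = np.array([50.0, 30.0])      # final demand per sector

# For a productive economy (spectral radius of A < 1), (I - A) is invertible
x = np.linalg.solve(np.eye(2) - A, d)
print(np.round(x, 2))           # gross output per sector
```

Each unit of sector-1 output here consumes 0.2 units of sector 1 and 0.1 units of sector 2 as intermediate inputs, so gross output exceeds final demand in both sectors.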