968 results for Time-frequency distribution
Abstract:
We present new methodologies to generate rational function approximations of broadband electromagnetic responses of linear and passive networks of high-speed interconnects, and to construct SPICE-compatible, equivalent circuit representations of the generated rational functions. These new methodologies are driven by the desire to improve the computational efficiency of the rational function fitting process and to ensure enhanced accuracy of the generated rational function interpolation and its equivalent circuit representation. Toward this goal, we propose two new methodologies for rational function approximation of high-speed interconnect network responses. The first relies on the use of both time-domain and frequency-domain data, obtained either through measurement or numerical simulation, to generate a rational function representation that extrapolates the input, early-time transient response data to late-time response while at the same time providing a means to both interpolate and extrapolate the frequency-domain data used. This hybrid methodology can be considered a generalization of frequency-domain rational function fitting, which utilizes frequency-domain response data only, and of time-domain rational function fitting, which utilizes transient response data only. In this context, a guideline is proposed for estimating the order of the rational function approximation from transient data. The availability of such an estimate expedites the time-domain rational function fitting process. The second approach relies on the extraction of the delay associated with causal electromagnetic responses of interconnect systems to provide a more stable fitting process that utilizes a lower-order rational function interpolation. A distinctive feature of the proposed methodology is its utilization of scattering parameters. For both methodologies, the approach of fitting the electromagnetic network matrix one element at a time is applied. It is shown that, with regard to the computational cost of the rational function fitting process, such element-by-element rational function fitting is more advantageous than full-matrix fitting for systems with a large number of ports. Despite the disadvantage that different sets of poles are used in the rational functions of different elements of the network matrix, such an approach provides improved accuracy in the fitting of network matrices of systems characterized by both strongly coupled and weakly coupled ports. Finally, in order to provide a means for enforcing passivity in the adopted element-by-element rational function fitting approach, the methodology for passivity enforcement via quadratic programming is modified appropriately for this purpose and demonstrated in the context of element-by-element rational function fitting of the admittance matrix of an electromagnetic multiport.
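As a hedged illustration of one ingredient of such fitting (not the dissertation's hybrid time/frequency-domain algorithm), the sketch below performs only the residue-identification step of a pole-residue approximation H(s) ≈ d + Σ_k r_k/(s − p_k) by linear least squares for a fixed, assumed pole set; the poles, frequency grid, and "measured" response are illustrative.

```python
# Minimal residue-fitting sketch for one network-matrix element, assuming the
# poles are already known; all numerical values below are illustrative.
import numpy as np

def fit_residues(freqs_hz, H_samples, poles):
    """Least-squares fit of residues and a constant term d for fixed poles."""
    s = 2j * np.pi * np.asarray(freqs_hz, dtype=float)
    A = np.column_stack([1.0 / (s[:, None] - np.asarray(poles)[None, :]),
                         np.ones_like(s)])
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(H_samples), rcond=None)
    return coeffs[:-1], coeffs[-1]          # residues, constant term d

def eval_model(freqs_hz, poles, residues, d):
    s = 2j * np.pi * np.asarray(freqs_hz, dtype=float)
    return d + (residues[None, :] / (s[:, None] - np.asarray(poles)[None, :])).sum(axis=1)

if __name__ == "__main__":
    # Illustrative "measured" response: a known two-pole network element.
    true_poles = np.array([-1e9 + 5e9j, -1e9 - 5e9j])
    true_res = np.array([2e9 + 1e9j, 2e9 - 1e9j])
    freqs = np.linspace(1e8, 3e9, 200)
    s = 2j * np.pi * freqs
    H = 0.1 + (true_res / (s[:, None] - true_poles)).sum(axis=1)

    res, d = fit_residues(freqs, H, true_poles)   # poles assumed known here
    err = np.max(np.abs(eval_model(freqs, true_poles, res, d) - H))
    print(f"max fitting error: {err:.2e}")
```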
Abstract:
Understanding how aquatic species grow is fundamental in fisheries because stock assessment often relies on growth-dependent statistical models. Length-frequency-based methods become important when more suitable data for growth model estimation are either unavailable or very expensive to obtain. In this article, we develop a new framework for growth estimation from length-frequency data using a generalized von Bertalanffy growth model (VBGM) that allows time-dependent covariates to be incorporated. A finite mixture of normal distributions is used to model the length-frequency cohorts of each month, with the means constrained to follow a VBGM. The variances of the finite mixture components are constrained to be a function of mean length, reducing the number of parameters and allowing the variance to be estimated at any length. To optimize the likelihood, we use a minorization–maximization (MM) algorithm with a Nelder–Mead sub-step. This work was motivated by the decline in catches of the blue swimmer crab (BSC) (Portunus armatus) off the east coast of Queensland, Australia. We test the method with a simulation study and then apply it to the BSC fishery data.
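The following is a simplified sketch of the central modelling idea (a direct Nelder–Mead fit rather than the article's MM algorithm with a Nelder–Mead sub-step): a two-cohort normal mixture for one month's length-frequency sample, with component means constrained to a von Bertalanffy curve and standard deviations tied linearly to mean length. The parameter values and synthetic data are illustrative.

```python
# Simplified VBGM-constrained normal-mixture sketch (not the article's code).
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def vbgm(age, L_inf, K, t0):
    """von Bertalanffy mean length at a given age (years)."""
    return L_inf * (1.0 - np.exp(-K * (age - t0)))

def neg_log_lik(theta, lengths, ages):
    L_inf, K, t0, a, b, logit_w = theta
    w = 1.0 / (1.0 + np.exp(-logit_w))          # mixing weight of cohort 1
    mu = vbgm(ages, L_inf, K, t0)               # one mean per cohort
    sd = a + b * mu                             # spread tied to mean length
    if np.any(sd <= 0) or K <= 0 or L_inf <= 0:
        return np.inf
    dens = np.column_stack([norm.pdf(lengths, mu[0], sd[0]),
                            norm.pdf(lengths, mu[1], sd[1])])
    mix = dens @ np.array([w, 1.0 - w])
    return -np.sum(np.log(mix + 1e-300))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    ages = np.array([1.0, 2.0])                 # two cohorts in one month
    true = dict(L_inf=180.0, K=0.8, t0=-0.1, a=2.0, b=0.05)
    mu = vbgm(ages, true["L_inf"], true["K"], true["t0"])
    sd = true["a"] + true["b"] * mu
    lengths = np.concatenate([rng.normal(mu[0], sd[0], 300),
                              rng.normal(mu[1], sd[1], 200)])
    x0 = np.array([170.0, 0.6, 0.0, 3.0, 0.03, 0.0])
    fit = minimize(neg_log_lik, x0, args=(lengths, ages), method="Nelder-Mead",
                   options={"maxiter": 20000, "maxfev": 20000})
    print("true cohort means:  ", np.round(mu, 1))
    print("fitted cohort means:", np.round(vbgm(ages, *fit.x[:3]), 1))
```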
Abstract:
Nitrous oxide (N2O) emissions from soil are often measured using the manual static chamber method. Manual gas sampling is labour intensive, so a minimal sampling frequency that maintains the accuracy of measurements is desirable. However, the high temporal (diurnal, daily and seasonal) variability of N2O emissions can compromise the accuracy of measurements if not addressed adequately when formulating a sampling schedule. Assessments of sampling strategies to date have focussed on relatively low-emission systems with high episodicity, where a small number of the highest emission peaks can be critically important in the measurement of whole-season cumulative emissions. Using year-long, automated sub-daily N2O measurements from three fertilised sugarcane fields, we evaluated the optimum gas sampling strategies in high-emission systems with relatively long emission episodes. The results indicated that sampling in the morning between 09:00 and 12:00, when soil temperature was generally close to the daily average, best approximated the daily mean N2O emission, within 4–7% of the 'actual' daily emissions measured by automated sampling. Weekly sampling, increased to biweekly sampling for one week after >20 mm of rainfall, was the recommended sampling regime: it resulted in no extreme (>20%) deviations from the 'actuals', had a high probability of estimating the annual cumulative emissions within 10% precision, and required a practicable number of samples in comparison to other sampling regimes. This provides robust and useful guidance for manual gas sampling in sugarcane cropping systems, although further adjustments by operators in terms of expected measurement accuracy and resource availability are encouraged. By implementing these sampling strategies together, labour inputs and errors in measured cumulative N2O emissions can be minimised. Further research is needed to quantify the spatial variability of N2O emissions within sugarcane cropping and to develop techniques for effectively addressing both spatial and temporal variabilities simultaneously.
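A small illustration of how such a sampling-regime evaluation can be framed (synthetic data and a simplified regime, not the study's measurements): the annual cumulative emission estimated from weekly 09:00–12:00 samples is compared against the 'actual' cumulative emission integrated from hourly automated data.

```python
# Sketch of a sampling-regime comparison on synthetic hourly N2O fluxes.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
idx = pd.date_range("2020-01-01", "2020-12-31 23:00", freq="h")

# Synthetic hourly N2O flux (g N ha-1 h-1): diurnal cycle plus long episodes.
hours = idx.hour.to_numpy()
diurnal = 1.0 + 0.3 * np.sin(2 * np.pi * (hours - 14) / 24)
episodes = np.zeros(len(idx))
for start in rng.choice(len(idx) - 24 * 14, size=6, replace=False):
    episodes[start:start + 24 * 14] += rng.uniform(2, 6)   # ~2-week episodes
flux = pd.Series(diurnal * (0.5 + episodes), index=idx)

actual_cumulative = flux.sum()                      # g N ha-1 over the year

# Manual regime proxy: the weekly mean of the 09:00-12:00 window, scaled up
# as if it represented the mean flux of that whole week.
morning = flux.between_time("09:00", "12:00").resample("W").mean()
estimate = (morning * 24 * 7).sum()

print(f"deviation from 'actual': "
      f"{100 * (estimate - actual_cumulative) / actual_cumulative:+.1f}%")
```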
Abstract:
For derived flood frequency analysis based on hydrological modelling, long continuous precipitation time series with high temporal resolution are needed. Often, the observation network of recording rainfall gauges is poor, especially regarding the limited length of the available rainfall time series. Stochastic precipitation synthesis is a good alternative either to extend or to regionalise rainfall series to provide adequate input for long-term rainfall-runoff modelling with subsequent estimation of design floods. Here, a new two-step procedure for stochastic synthesis of continuous hourly space-time rainfall is proposed and tested for the extension of short observed precipitation time series. First, a single-site alternating renewal model is presented to simulate independent hourly precipitation time series for several locations. The alternating renewal model describes wet spell durations, dry spell durations and wet spell intensities using univariate frequency distributions, separately for two seasons. The dependence between wet spell intensity and duration is accounted for by 2-copulas. For disaggregation of the wet spells into hourly intensities a predefined profile is used. In the second step, a multi-site resampling procedure is applied to the synthetic point rainfall event series to reproduce the spatial dependence structure of rainfall. Resampling is carried out successively on all synthetic event series using simulated annealing with an objective function considering three bivariate spatial rainfall characteristics. In a case study, synthetic precipitation is generated for locations with short observation records in two mesoscale catchments of the Bode river basin located in northern Germany. The synthetic rainfall data are then applied for derived flood frequency analysis using the hydrological model HEC-HMS. The results show good performance in reproducing average and extreme rainfall characteristics as well as observed flood frequencies. The presented model has the potential to be used for ungauged locations through regionalisation of the model parameters.
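A minimal single-site sketch of the alternating renewal idea, assuming illustrative exponential, Weibull and gamma distributions, a Gaussian copula in place of the fitted 2-copulas, and a uniform within-spell profile instead of the predefined profile:

```python
# Single-site alternating renewal rainfall generator (illustrative only).
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def simulate_hourly_rainfall(n_hours, rho=0.6):
    series = []
    while len(series) < n_hours:
        dry = int(np.ceil(stats.expon(scale=30).rvs(random_state=rng)))    # h
        series.extend([0.0] * dry)
        # Dependent (duration, mean intensity) pair via a Gaussian copula.
        z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]])
        u = stats.norm.cdf(z)
        wet_dur = int(np.ceil(stats.weibull_min(c=0.9, scale=6).ppf(u[0])))  # h
        mean_int = stats.gamma(a=1.2, scale=1.5).ppf(u[1])                   # mm/h
        series.extend([mean_int] * wet_dur)      # uniform within-spell profile
    return np.array(series[:n_hours])

rain = simulate_hourly_rainfall(24 * 365)
print(f"wet-hour fraction: {np.mean(rain > 0):.2f}, "
      f"annual total: {rain.sum():.0f} mm")
```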
Abstract:
Despite recent advances in ocean observing arrays and satellite sensors, there remains great uncertainty in the large-scale spatial variations of upper ocean salinity on interannual to decadal timescales. Consonant with both broad-scale surface warming and the amplification of the global hydrological cycle, studies of observed global multidecadal salinity change have typically focussed on the linear response to anthropogenic forcing, but not on salinity variations due to changes in static stability and/or variability due to intrinsic ocean or internal climate processes. Here, we examine the static stability and spatiotemporal variability of upper ocean salinity across a hierarchy of models and reanalyses. In particular, we partition the variance into time bands via application of singular spectrum analysis, considering sea surface salinity (SSS), the Brunt–Väisälä frequency (N2), and the ocean salinity stratification in terms of the stabilizing effect due to the haline part of N2 over the upper 500 m. We identify regions of significant coherent SSS variability, either intrinsic to the ocean or in response to the interannually varying atmosphere. Based on consistency across models (CMIP5 and forced experiments) and reanalyses, we identify the stabilizing role of salinity in the tropics, typically associated with heavy precipitation and barrier layer formation, and the destabilizing role of salinity in upper ocean stratification in the subtropical regions where large-scale density compensation typically occurs.
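For illustration, the sketch below applies a basic singular spectrum analysis to a synthetic monthly SSS anomaly series and partitions its variance into a trend band and an interannual band; the window length and component grouping are illustrative choices, not those used in the study.

```python
# Basic SSA decomposition of a synthetic SSS anomaly series (illustrative).
import numpy as np

def ssa_reconstruct(x, window, groups):
    """Decompose x with SSA and return one reconstructed series per group."""
    n = len(x)
    k = n - window + 1
    X = np.column_stack([x[i:i + window] for i in range(k)])  # trajectory matrix
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    recons = []
    for grp in groups:
        Xg = sum(s[i] * np.outer(U[:, i], Vt[i]) for i in grp)
        # Diagonal averaging (Hankelization) back to a 1-D series.
        rec = np.array([np.mean(np.fliplr(Xg).diagonal(k - 1 - t))
                        for t in range(n)])
        recons.append(rec)
    return recons

rng = np.random.default_rng(0)
t = np.arange(480)                                  # 40 years of months
sss = 0.002 * t + 0.3 * np.sin(2 * np.pi * t / 60) + 0.2 * rng.standard_normal(480)
trend, interannual = ssa_reconstruct(sss, window=120, groups=[[0], [1, 2]])
print("variance fractions (trend, interannual):",
      round(trend.var() / sss.var(), 2), round(interannual.var() / sss.var(), 2))
```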
Abstract:
The service of a critical infrastructure, such as a municipal wastewater treatment plant (MWWTP), is taken for granted until a flood or another low-frequency, high-consequence crisis brings its fragility to attention. The unique aspects of the MWWTP call for a method to quantify the flood stage-duration-frequency relationship. By developing a bivariate joint distribution model of flood stage and duration, this study adds a second dimension, time, to flood risk studies. A new parameter, inter-event time, is introduced to further illustrate the effect of event separation on the frequency assessment. The method is tested on riverine, estuarine and tidal sites in the Mid-Atlantic region. Equipment damage functions are characterized by linear and step damage models. The Expected Annual Damage (EAD) of the underground equipment is then estimated by the parametric joint distribution model, which is a function of both flood stage and duration, demonstrating the application of the bivariate model in risk assessment. Flood likelihood may change due to climate change. A sensitivity analysis method is developed to assess future flood risk by estimating flood frequency under conditions of higher sea level and stream flow response to increased precipitation intensity. Scenarios based on steady and unsteady flow analysis are generated for the current climate, future climate within this century, and future climate beyond this century, consistent with the MWWTP planning horizons. The spatial extent of flood risk is visualized by inundation mapping and a GIS-Assisted Risk Register (GARR). This research will help stakeholders of this critical infrastructure become aware of the flood risk, vulnerability, and the inherent uncertainty.
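A hedged sketch of the kind of bivariate stage-duration risk calculation described above (the marginal distributions, copula, damage function and event rate are assumptions, not the study's fitted model): expected annual damage is approximated by Monte Carlo sampling from a copula-coupled joint distribution of flood stage and duration.

```python
# Illustrative bivariate stage-duration EAD calculation (all values assumed).
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
rho, events_per_year, n_sim = 0.5, 1.8, 200_000

# Joint (stage, duration) samples via a Gaussian copula.
z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=n_sim)
u = stats.norm.cdf(z)
stage_m = stats.gumbel_r(loc=1.2, scale=0.5).ppf(u[:, 0])     # flood stage (m)
duration_h = stats.lognorm(s=0.7, scale=12).ppf(u[:, 1])      # duration (h)

def damage_usd(stage, duration):
    """Step + linear damage model for underground equipment (hypothetical)."""
    flooded = stage > 1.5                       # equipment threshold elevation
    base = 50_000.0 * flooded                   # step loss once inundated
    linear = 2_000.0 * np.clip(duration - 6.0, 0, None) * flooded
    return base + linear

ead = events_per_year * damage_usd(stage_m, duration_h).mean()
print(f"expected annual damage: ${ead:,.0f}")
```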
Abstract:
Two trends are emerging in modern electric power systems: the growth of renewable (e.g., solar and wind) generation, and the integration of information technologies and advanced power electronics. The former introduces large, rapid, and random fluctuations in power supply, demand, frequency, and voltage, which become a major challenge for real-time operation of power systems. The latter creates a tremendous number of controllable intelligent endpoints, such as smart buildings and appliances, electric vehicles, energy storage devices, and power electronic devices, that can sense, compute, communicate, and actuate. Most of these endpoints are distributed on the load side of power systems, in contrast to traditional control resources such as centralized bulk generators. This thesis focuses on controlling power systems in real time using these load-side resources. Specifically, it studies two problems.
(1) Distributed load-side frequency control: We establish a mathematical framework to design distributed frequency control algorithms for flexible electric loads. In this framework, we formulate a category of optimization problems, called optimal load control (OLC), to incorporate the goals of frequency control, such as balancing power supply and demand, restoring frequency to its nominal value, restoring inter-area power flows, etc., in a way that minimizes total disutility for the loads to participate in frequency control by deviating from their nominal power usage. By exploiting distributed algorithms to solve OLC and analyzing convergence of these algorithms, we design distributed load-side controllers and prove stability of closed-loop power systems governed by these controllers. This general framework is adapted and applied to different types of power systems described by different models, or to achieve different levels of control goals under different operation scenarios. We first consider a dynamically coherent power system which can be equivalently modeled with a single synchronous machine. We then extend our framework to a multi-machine power network, where we consider primary and secondary frequency controls, linear and nonlinear power flow models, and the interactions between generator dynamics and load control.
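As a toy illustration of the load-side idea for a single coherent area (with an assumed quadratic disutility c_i(d) = ½ a_i d² and made-up parameters, not the thesis's general OLC formulation), the local frequency deviation plays the role of the Lagrange multiplier of the power-balance constraint, so each load sets its adjustment from its own frequency measurement:

```python
# Toy single-area load-side frequency control (illustrative parameters).
import numpy as np

M, D = 10.0, 1.0                 # inertia and damping of the aggregate machine
a = np.array([2.0, 4.0, 5.0])    # disutility curvatures of three flexible loads
d_max = np.array([0.3, 0.2, 0.2])
dt, steps = 0.01, 4000
delta_p = -0.5                   # sudden generation loss (p.u.) at t = 0

omega, d = 0.0, np.zeros(3)
for _ in range(steps):
    d = np.clip(omega / a, -d_max, d_max)            # local OLC-style response
    omega += dt / M * (delta_p - np.sum(d) - D * omega)

print(f"steady-state frequency deviation: {omega:.4f} p.u.")
print("load adjustments:", np.round(d, 3))
```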
(2) Two-timescale voltage control: The voltage of a power distribution system must be maintained closely around its nominal value in real time, even in the presence of highly volatile power supply or demand. For this purpose, we jointly control two types of reactive power sources: a capacitor operating at a slow timescale, and a power electronic device, such as a smart inverter or a D-STATCOM, operating at a fast timescale. Their control actions are solved from optimal power flow problems at two timescales. Specifically, the slow-timescale problem is a chance-constrained optimization, which minimizes power loss and regulates the voltage at the current time instant while limiting the probability of future voltage violations due to stochastic changes in power supply or demand. This control framework forms the basis of an optimal sizing problem, which determines the installation capacities of the control devices by minimizing the sum of power loss and capital cost. We develop computationally efficient heuristics to solve the optimal sizing problem and implement real-time control. Numerical experiments show that the proposed sizing and control schemes significantly improve the reliability of voltage control with a moderate increase in cost.
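The slow-timescale decision can be illustrated with a toy, one-bus linearized model (made-up sensitivities and scenario distribution, not the thesis's chance-constrained OPF): the capacitor tap is chosen to minimize an expected-loss proxy while keeping the empirical probability of a voltage violation below a tolerance, estimated from sampled net-demand scenarios.

```python
# Toy scenario-based chance-constrained capacitor selection (illustrative).
import numpy as np

rng = np.random.default_rng(3)
n_scen, eps = 5000, 0.05
v_min, v_max = 0.95, 1.05

# Linearized one-bus voltage model: v = v0 + kq * (q_cap - q_load).
v0, kq = 1.00, 0.04
cap_taps = np.array([0.0, 0.5, 1.0, 1.5])            # available reactive steps (p.u.)
q_load = rng.normal(0.8, 0.25, size=n_scen)          # stochastic reactive demand

best = None
for q_cap in cap_taps:
    v = v0 + kq * (q_cap - q_load)
    violation_prob = np.mean((v < v_min) | (v > v_max))
    expected_loss = np.mean((q_load - q_cap) ** 2)    # proxy for network loss
    if violation_prob <= eps and (best is None or expected_loss < best[1]):
        best = (q_cap, expected_loss, violation_prob)

print("chosen capacitor setting (tap, loss proxy, violation prob.):", best)
```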
Abstract:
The Smart Grid needs a large amount of information to be operated, and day by day new information is required to improve operation performance. It is also fundamental that the available information be reliable and accurate. Therefore, the role of metrology is crucial, especially when applied to distribution grid monitoring and electrical asset diagnostics. This dissertation aims to better understand the sensors and instrumentation employed by power system operators in the above-mentioned applications and to study new solutions. Concerning the research on measurements applied to electrical asset diagnostics, an innovative drone-based measurement system is proposed for monitoring medium voltage surge arresters. This system is described, and its metrological characterization is presented. The research regarding measurements applied to grid monitoring consists of three parts. The first part concerns the metrological characterization of electronic energy meters' operation under off-nominal power conditions. Original test procedures have been designed for both frequency and harmonic distortion as influence quantities, aiming at defining realistic scenarios. The second part deals with medium voltage inductive current transformers. An in-depth investigation of their accuracy behavior in the presence of harmonic distortion is carried out by applying realistic current waveforms. The accuracy has been evaluated by means of the composite error index and its approximated version. Based on the same test setup, a closed-form expression for estimating the uncertainty of the measured current total harmonic distortion has been experimentally validated. The metrological characterization of a virtual phasor measurement unit is the subject of the third and last part: first, a calibrator has been designed and the uncertainty associated with its steady-state reference phasor has been evaluated; then this calibrator was used as a reference to characterize a phasor measurement unit implemented within a real-time simulator.
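Two of the quantities mentioned above can be sketched on synthetic sampled waveforms (the signal parameters are illustrative): total harmonic distortion computed via an FFT, and an IEC-style composite error of a current transformer obtained from the primary current and the ratio-scaled secondary current.

```python
# THD and composite-error sketch on synthetic current waveforms (illustrative).
import numpy as np

fs, f0, cycles = 10_000, 50, 10
t = np.arange(int(fs * cycles / f0)) / fs

# Primary current with 5th and 7th harmonics; "secondary" with ratio error and
# extra distortion, referred back to the primary through the rated ratio.
i_p = (100 * np.sin(2 * np.pi * f0 * t) + 8 * np.sin(2 * np.pi * 5 * f0 * t)
       + 5 * np.sin(2 * np.pi * 7 * f0 * t))
ratio = 100 / 5                                     # rated transformation ratio
i_s = (i_p / ratio) * 1.003 + 0.02 * np.sin(2 * np.pi * 3 * f0 * t + 0.4)

def thd(x, fs, f0):
    spec = np.abs(np.fft.rfft(x)) / len(x) * 2
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    fund = spec[np.argmin(np.abs(freqs - f0))]
    harm = [spec[np.argmin(np.abs(freqs - k * f0))] for k in range(2, 40)]
    return np.sqrt(np.sum(np.square(harm))) / fund

def composite_error(i_p, i_s_referred):
    i_p_rms = np.sqrt(np.mean(i_p ** 2))
    return 100 * np.sqrt(np.mean((i_s_referred - i_p) ** 2)) / i_p_rms

print(f"primary THD: {100 * thd(i_p, fs, f0):.2f} %")
print(f"composite error: {composite_error(i_p, ratio * i_s):.2f} %")
```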
Abstract:
In acquired immunodeficiency syndrome (AIDS) studies it is quite common to observe viral load measurements collected irregularly over time. Moreover, these measurements can be subject to upper and/or lower detection limits, depending on the quantification assays. A complication arises when these continuous repeated measures exhibit heavy-tailed behavior. For such data structures, we propose a robust censored linear model based on the multivariate Student's t-distribution. To account for the autocorrelation among irregularly observed measures, a damped exponential correlation structure is employed. An efficient expectation-maximization (EM)-type algorithm is developed for computing the maximum likelihood estimates, obtaining as a by-product the standard errors of the fixed effects and the log-likelihood function. The proposed algorithm uses closed-form expressions at the E-step that rely on formulas for the mean and variance of a truncated multivariate Student's t-distribution. The methodology is illustrated through an application to a Human Immunodeficiency Virus-AIDS (HIV-AIDS) study and several simulation studies.
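Two building blocks of such a model are sketched below (not the authors' EM implementation): the damped exponential correlation matrix for irregular measurement times, and the multivariate Student's t log-likelihood of one subject's uncensored response vector; the visit times, design and parameters are illustrative.

```python
# DEC correlation matrix and multivariate-t log-likelihood sketch (illustrative).
import numpy as np
from scipy.stats import multivariate_t

def dec_matrix(times, rho, theta):
    """Damped exponential correlation: corr = rho ** (|t_i - t_j| ** theta)."""
    lag = np.abs(np.subtract.outer(times, times))
    return rho ** (lag ** theta)

times = np.array([0.0, 0.5, 1.5, 4.0, 7.0])        # irregular visit times (months)
X = np.column_stack([np.ones_like(times), times])  # intercept + slope design
beta, sigma2, rho, theta, nu = np.array([4.5, -0.12]), 0.3, 0.7, 0.6, 5.0

R = dec_matrix(times, rho, theta)
scale = sigma2 * R                                 # scale matrix of the t model

rng = np.random.default_rng(0)
y = multivariate_t.rvs(loc=X @ beta, shape=scale, df=nu, random_state=rng)
loglik = multivariate_t.logpdf(y, loc=X @ beta, shape=scale, df=nu)
print("DEC matrix:\n", np.round(R, 3))
print("log-likelihood of simulated subject:", round(float(loglik), 3))
```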
Abstract:
Size distributions in woody plant populations have been used to assess their regeneration status, assuming that size structures with reverse-J shapes represent stable populations. We present an empirical approach to this issue using five woody species from the Cerrado. Considering count data for all plants of these five species over a 12-year period, we analyzed size distribution by (a) plotting frequency distributions and assessing their fit to a negative exponential curve and (b) calculating the Gini coefficient. To look for a relationship between size structure and future trends, we considered the size structures from the first census year. We analyzed changes in numbers over time and performed a simple population viability analysis, which gives the mean population growth rate, its variance and the probability of extinction in a given time period. Frequency distributions and the Gini coefficient were not able to predict future trends in population numbers. We recommend that managers not use measures of size structure as a basis for management decisions without applying more appropriate demographic studies.
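For illustration, the two descriptors can be computed on a synthetic set of stem diameters (the data and size-class width are assumptions): the Gini coefficient of size inequality and a negative exponential fit to the size-class frequency distribution.

```python
# Gini coefficient and negative exponential size-structure fit (illustrative).
import numpy as np

def gini(x):
    """Gini coefficient of a sample of sizes (0 = perfectly even)."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    return (2 * np.sum(np.arange(1, n + 1) * x) / (n * x.sum())) - (n + 1) / n

rng = np.random.default_rng(5)
diam = rng.exponential(scale=4.0, size=500) + 1.0      # stem diameters (cm)

# Negative exponential fit N(d) = N0 * exp(-k d) via log-linear regression
# on size-class counts.
width = 2.0
edges = np.arange(1.0, diam.max() + width, width)
counts, _ = np.histogram(diam, bins=edges)
mids = edges[:-1] + width / 2
mask = counts > 0
slope, log_n0 = np.polyfit(mids[mask], np.log(counts[mask]), 1)
print(f"Gini coefficient: {gini(diam):.3f}")
print(f"negative exponential: N(d) ~ {np.exp(log_n0):.1f} * exp({slope:.3f} d)")
```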
Abstract:
Universidade Estadual de Campinas. Faculdade de Educação Física
Abstract:
Colon cancer is a disease of high prevalence and mortality whose treatment is based on surgical resection. The chance of cure increases with early diagnosis, hence the importance of population-based colorectal cancer screening programs. The present study retrospectively analyzed 66 patients who underwent colon resection for neoplasia over a 58-month period at the Hospital Universitário da Universidade de São Paulo. The patients were divided into two groups: group 1, who underwent elective surgery (28 patients), and group 2, who underwent emergency surgery (38 patients). The groups were compared with respect to sex, age, clinical presentation, aspects of the surgical technique, anatomical site of the lesion, pathological stage, complication rates, postoperative hospital stay, and in-hospital deaths. Age was similar between the groups. There was a predominance of males among the patients who underwent emergency surgery. In the elective surgery group, the main symptom was hematochezia, whereas the main complaint of those operated on urgently was abdominal pain. At the time of surgery, the great majority of patients had been symptomatic for months. Patients operated on urgently had more pT4 tumors, and those operated on electively had more stage I neoplasms. In both groups, the oncological character of the procedures was preserved, and the rate of primary anastomoses was high (81.8%). Postoperative complication rates, length of postoperative hospital stay, and mortality were similar.
Abstract:
Two known sesquiterpenes, (1R*,2S*,3R*,5S*,8S*,9R*)-2,3,5,9-tetramethyltricyclo[6.3.0.0(1,5)]undecan-2-ol and (1S*,2S*,3S*,5S*,8S*,9S*)-2,3,5,9-tetramethyltricyclo[6.3.0.0(1,5)]undecan-2-ol, were isolated for the first time from the essential oil of the red seaweed Laurencia dendroidea collected on the Brazilian coast. These compounds were not active against eight bacterial strains and the yeast Candida albicans, but showed some antioxidant activity. Both compounds were also found in other seaweed species, showing that they are not exclusive taxonomic markers of the genus Laurencia.
Abstract:
Connexin 32 (Cx32) is a protein that forms the channels that promote gap junction intercellular communication (GJIC) in the liver, allowing the diffusion of small molecules through the cytosol from cell to cell. Hepatic fibrosis is characterized by a disruption of normal tissue architecture by cellular lesions and may alter GJIC. This work aimed to study the expression and distribution of Cx32 in liver fibrosis induced by the oral administration of dimethylnitrosamine in female Wistar rats. Necropsy of the rats was carried out after five weeks of drug administration, at which point they presented a state of hepatic fibrosis. Sections from fibrotic livers and from control livers were submitted to immunohistochemical, real-time PCR and Western blot analyses for Cx32. In fibrotic livers, Cx32 was diffusely scattered in the cytoplasm, contrasting with the control livers, where Cx32 formed junction plaques at the cell membrane. A decrease in the gene expression of Cx32 was also found, without a reduction in protein quantity, when compared with controls. These results suggest that the mechanism of intercellular communication between hepatocytes was reduced by the fibrotic process, which may predispose to the occurrence of a neoplastic process, taking into account that connexins are considered tumor suppressor genes.