988 results for Dynamic line rating


Relevance: 30.00%

Abstract:

Inverters play key roles in connecting sustainable energy (SE) sources to local loads and the ac grid. Although the use of renewable sources has expanded rapidly in recent years, fundamental research on the design of inverters specialized for these systems is still needed. Recent advances in power electronics have led to new topologies and switching patterns for single-stage power conversion that are appropriate for SE sources and energy storage devices. The current source inverter (CSI) topology, along with a newly proposed switching pattern, is capable of converting a low dc voltage to line ac in only one stage. Simple implementation and high reliability, together with the potential advantages of higher efficiency and lower cost, turn the so-called single-stage boost inverter (SSBI) into a viable competitor to existing SE-based power conversion technologies. The dynamic model is one of the most essential requirements for performance analysis and control design of any engineering system. Thus, in order to achieve satisfactory operation, it is necessary to derive a dynamic model for the SSBI system. However, because of the switching behavior and nonlinear elements involved, analysis of the SSBI is a complicated task. This research applies the state-space averaging technique to develop the state-space-averaged model of the SSBI under stand-alone and grid-connected modes of operation. A small-signal model is then derived by means of the perturbation and linearization method. An experimental hardware set-up, including a laboratory-scale prototype SSBI, is built, and the validity of the obtained models is verified through simulation and experiments. Finally, an eigenvalue sensitivity analysis is performed to investigate the stability and dynamic behavior of the SSBI system over a typical range of operation.
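
For context, a generic sketch of the state-space averaging and small-signal procedure the abstract names, in its textbook form (the dissertation's specific SSBI matrices are not reproduced here): a converter toggling between two linear configurations with duty cycle d is averaged over a switching period and then perturbed around the operating point.

```latex
% State-space averaging over one switching period (duty cycle d):
\dot{\bar{x}} = \left[ d A_1 + (1-d) A_2 \right] \bar{x} + \left[ d B_1 + (1-d) B_2 \right] u
% Perturb around the operating point: x = X + \tilde{x}, d = D + \tilde{d};
% dropping second-order terms yields the linear small-signal model
\dot{\tilde{x}} = A \tilde{x} + B \tilde{u}
  + \left[ (A_1 - A_2) X + (B_1 - B_2) U \right] \tilde{d},
\qquad A = D A_1 + (1-D) A_2
```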

Relevance: 30.00%

Abstract:

Human use of the oceans is increasingly in conflict with conservation of endangered species. Methods for managing the spatial and temporal placement of industries such as the military, fishing, transportation and offshore energy have historically been post hoc; i.e. the time and place of human activity is often already determined before environmental impacts are assessed. In this dissertation, I build robust species distribution models in two case study areas, the US Atlantic (Best et al. 2012) and British Columbia (Best et al. 2015), predicting presence and abundance respectively, from scientific surveys. These models are then applied within novel decision frameworks that preemptively suggest optimal placement of human activities in space and time to minimize ecological impacts: siting offshore wind energy development, and routing ships to minimize the risk of striking whales. Both decision frameworks present the tradeoff between conservation risk and industry profit through synchronized variable and map views, delivered as online spatial decision support systems.

For siting offshore wind energy development (OWED) in the U.S. Atlantic (chapter 4), bird density maps are combined across species with weights for OWED sensitivity to collision and displacement, and 10 km² sites are compared against OWED profitability based on average annual wind speed at 90 m hub height and distance to the transmission grid. A spatial decision support system enables toggling between the map and tradeoff plot views by site. A selected site can be inspected for sensitivity to cetaceans throughout the year, so as to identify months that minimize episodic impacts of pre-operational activities such as seismic airgun surveying and pile driving.
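
A minimal sketch of this kind of site screening on synthetic data; the species weights, the wind-cubed profit proxy and the Pareto sweep are illustrative assumptions, not the dissertation's actual scoring.

```python
import numpy as np

# Hypothetical inputs: density[s, i] = relative density of species s at site i;
# w_collision[s] and w_displacement[s] = OWED sensitivity weights per species.
rng = np.random.default_rng(0)
n_species, n_sites = 5, 100
density = rng.gamma(2.0, 1.0, size=(n_species, n_sites))
w_collision = rng.uniform(0, 1, n_species)
w_displacement = rng.uniform(0, 1, n_species)

# Conservation sensitivity per site: densities combined across species, weighted.
sensitivity = ((w_collision + w_displacement)[:, None] * density).sum(axis=0)

# Crude profitability proxy: wind power scales with speed cubed, minus a
# transmission-distance penalty (both entirely illustrative).
wind_speed = rng.uniform(6.0, 10.0, n_sites)    # mean annual speed at hub height (m/s)
dist_to_grid = rng.uniform(5.0, 80.0, n_sites)  # km to transmission grid
profit = wind_speed**3 - 0.5 * dist_to_grid

# Sweep sites in order of increasing sensitivity, keeping those that improve
# profit: the survivors form the Pareto frontier of the siting tradeoff.
pareto, best_profit = [], -np.inf
for i in np.argsort(sensitivity):
    if profit[i] > best_profit:
        pareto.append(int(i))
        best_profit = profit[i]
print("non-dominated candidate sites:", pareto)
```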

Routing ships to avoid whale strikes (chapter 5) can similarly be viewed as a tradeoff, but is a different problem spatially. A cumulative cost surface is generated from density surface maps and the conservation status of cetaceans, and then applied as a resistance surface to calculate least-cost routes between start and end locations, i.e. ports and entrance points to the study areas. Varying a multiplier on the cost surface enables calculation of multiple routes trading off cost to cetacean conservation against cost to the transportation industry, measured as distance. As in the siting chapter, a spatial decision support system enables toggling between the map and tradeoff plot views of proposed routes. The user can also input arbitrary start and end locations to calculate the tradeoff on the fly.
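
A minimal sketch of the routing tradeoff, assuming a synthetic cost surface and a grid graph built with networkx; the edge-weight formula and multiplier values are illustrative, not the chapter's implementation.

```python
import networkx as nx
import numpy as np

# Hypothetical cumulative cost surface (cetacean density x conservation status).
rng = np.random.default_rng(1)
cost = rng.gamma(2.0, 1.0, size=(40, 60))

def least_cost_route(cost, start, end, multiplier):
    """Route on a grid; edge weight = unit distance + multiplier * mean cell cost."""
    m, n = cost.shape
    G = nx.grid_2d_graph(m, n)
    for u, v in G.edges():
        G.edges[u, v]["weight"] = 1.0 + multiplier * 0.5 * (cost[u] + cost[v])
    return nx.shortest_path(G, start, end, weight="weight")

# Sweeping the multiplier traces the conservation-vs-distance tradeoff curve.
for mult in (0.0, 0.5, 2.0):
    route = least_cost_route(cost, (0, 0), (39, 59), mult)
    conservation = sum(cost[p] for p in route)
    print(f"multiplier={mult}: length={len(route)} cells, "
          f"conservation cost={conservation:.1f}")
```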

Essential inputs to these decision frameworks are the distributions of the species. The two preceding chapters comprise species distribution models for the two case study areas, the U.S. Atlantic (chapter 2) and British Columbia (chapter 3), predicting presence and density, respectively. Although density is preferred for estimating potential biological removal (PBR), per Marine Mammal Protection Act requirements in the U.S., the parameters needed to estimate it, especially distance and angle of observation, are less readily available across publicly mined datasets.
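
For reference, the standard potential biological removal formula used under the MMPA (Wade 1998), which is why density estimates matter here:

```latex
% Potential biological removal: N_{min} = minimum population estimate,
% R_{max} = maximum net productivity rate, F_R = recovery factor (0.1-1.0).
PBR = N_{min} \cdot \tfrac{1}{2} R_{max} \cdot F_R
```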

In the case of predicting cetacean presence in the U.S. Atlantic (chapter 2), I extracted datasets from the online OBIS-SEAMAP geo-database and integrated scientific surveys conducted by ship (n=36) and aircraft (n=16), weighting a Generalized Additive Model by minutes surveyed within space-time grid cells to harmonize effort between the two survey platforms. For each of 16 cetacean species guilds, I predicted the probability of occurrence from static environmental variables (water depth, distance to shore, distance to continental shelf break) and time-varying conditions (monthly sea-surface temperature). To generate maps of presence vs. absence, Receiver Operating Characteristic (ROC) curves were used to define the optimal threshold that minimizes false positive and false negative error rates. I integrated model outputs, including tables (species in guilds, input surveys) and plots (fit of environmental variables, ROC curve), into an online spatial decision support system, allowing easy navigation of models by taxon, region, season, and data provider.
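
A minimal sketch of the threshold-selection step with scikit-learn on synthetic data; maximizing Youden's J (TPR minus FPR) is one common reading of "minimizes false positive and false negative error rates", assumed here.

```python
import numpy as np
from sklearn.metrics import roc_curve

# Hypothetical predicted occurrence probabilities and observed presence/absence.
rng = np.random.default_rng(2)
y_true = rng.integers(0, 2, 500)
y_prob = np.clip(y_true * 0.3 + rng.uniform(0, 0.7, 500), 0, 1)

# Choose the threshold that maximizes Youden's J = TPR - FPR, i.e. jointly
# controls false positive and false negative rates.
fpr, tpr, thresholds = roc_curve(y_true, y_prob)
best = np.argmax(tpr - fpr)
print(f"optimal threshold = {thresholds[best]:.3f}")

presence_map = y_prob >= thresholds[best]  # binary presence/absence layer
```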

For predicting cetacean density within the inner waters of British Columbia (chapter 3), I calculated density from systematic line-transect marine mammal surveys conducted by Raincoast Conservation Foundation over multiple years and seasons (summer 2004, 2005, 2008, and spring/autumn 2007). Abundance estimates were calculated using two different methods: Conventional Distance Sampling (CDS) and Density Surface Modelling (DSM). CDS generates a single density estimate for each stratum, whereas DSM explicitly models spatial variation and offers potential for greater precision by incorporating environmental predictors. Although DSM yields a more relevant product for the purposes of marine spatial planning, CDS has proven useful where fewer observations are available for seasonal and inter-annual comparison, particularly for the scarcely observed elephant seal. Abundance estimates are provided on a stratum-specific basis, with Steller sea lions and harbour seals further differentiated by 'hauled out' and 'in water'. This analysis updates previous estimates (Williams & Thomas 2007) by including additional years of effort, providing greater spatial precision with DSM over CDS, reporting novel spring and autumn estimates (rather than summer alone), and providing new abundance estimates for the Steller sea lion and northern elephant seal. In addition to providing a baseline of marine mammal abundance and distribution against which future changes can be compared, this information offers the opportunity to assess the risks posed to marine mammals by existing and emerging threats, such as fisheries bycatch, ship strikes, and the oil spill and ocean noise risks associated with increasing container ship and oil tanker traffic in British Columbia's continental shelf waters.
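
For reference, the conventional distance sampling estimator behind the CDS figures, in its standard line-transect form (e.g. Buckland et al.):

```latex
% n detected clusters of mean size \hat{E}[s], total transect length L,
% truncation half-width w, estimated detection probability \hat{P}_a within
% the strip; abundance scales density by the stratum area A.
\hat{D} = \frac{n \, \hat{E}[s]}{2 w L \hat{P}_a}, \qquad \hat{N} = \hat{D} \, A
```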

Starting with marine animal observations at specific coordinates and times, I combine these data with environmental data, often satellite-derived, to produce seascape predictions that generalize in space and time. These habitat-based models enable prediction of encounter rates and, in the case of density surface models, abundance, which can then be applied to management scenarios. Specific human activities, OWED and shipping, are then compared within a tradeoff decision support framework with interchangeable map and tradeoff plot views. These products make complex processes transparent, allowing conservation, industry and other stakeholders to game scenarios towards optimal marine spatial management, fundamental to the tenets of marine spatial planning, ecosystem-based management and dynamic ocean management.

Relevance: 30.00%

Abstract:

Urban problems have several features that make them inherently dynamic. Large transaction costs all but guarantee that homeowners will do their best to consider how a neighborhood might change before buying a house. Similarly, stores face large sunk costs when opening, and want to be sure that their investment will pay off in the long run. In line with those concerns, different areas of Economics have made recent advances in modeling those questions within a dynamic framework. This dissertation contributes to those efforts.

Chapter 2 discusses how to model an agent’s location decision when the agent must learn about an exogenous amenity that may be changing over time. The model is applied to estimating the marginal willingness to pay to avoid crime, in which agents are learning about the crime rate in a neighborhood, and the crime rate can change in predictable (Markovian) ways.
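
A generic sketch of this kind of dynamic location problem (notation assumed, not taken from the chapter): the agent carries a belief about the latent amenity and trades off moving costs against expected amenity value.

```latex
% An agent in location j holds belief b_t about the latent crime rate, which
% evolves as a Markov process; moving to j' incurs cost c. The value function:
V(b_t, j) = u(j, b_t)
  + \beta \, \mathbb{E}\!\left[ \max_{j'} \left\{ V(b_{t+1}, j')
  - c \cdot \mathbb{1}[j' \neq j] \right\} \,\middle|\, b_t \right]
% The marginal willingness to pay to avoid crime is then read off the utility
% parameters that rationalize observed location choices.
```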

Chapters 3 and 4 concentrate on location decision problems when there are externalities between decision makers. Chapter 3 focuses on the decision of business owners to open a store when demand is a function of other nearby stores, either through competition or through spillovers on foot traffic. It uses a dynamic model in continuous time to model agents' decisions. A particular challenge is isolating the contribution of spillovers from the contribution of other unobserved neighborhood attributes that could also lead to agglomeration. A key contribution of this chapter is showing how information on storefront ownership can help separately identify spillovers.

Finally, chapter 4 focuses on a class of models in which families prefer to live close to similar neighbors. This chapter provides the first simulation of such a model in which agents are forward looking, and shows that this leads to more segregation than would be observed with myopic agents, the standard in this literature. The chapter also discusses several extensions of the model that can be used to investigate relevant questions such as the arrival of a large contingent of high-skilled tech workers in San Francisco, the immigration of Hispanic families to several southern American cities, large changes in local amenities, such as the construction of magnet schools or metro stations, and the flight of wealthy residents from cities in the Rust Belt, such as Detroit.
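
A minimal sketch of the myopic baseline that the chapter improves on (synthetic grid and an illustrative similarity threshold; the forward-looking dynamics that are the chapter's contribution are not reproduced here).

```python
import numpy as np

# Myopic Schelling-style model: two groups on a grid; an agent relocates to a
# random empty cell when too few of its neighbors belong to its own group.
rng = np.random.default_rng(3)
N, threshold = 50, 0.5
grid = rng.choice([0, 1, 2], size=(N, N), p=[0.10, 0.45, 0.45])  # 0 = empty

def similar_share(grid, i, j):
    """Fraction of occupied 8-neighborhood cells holding the same group."""
    group = grid[i, j]
    neigh = grid[max(0, i - 1):i + 2, max(0, j - 1):j + 2].ravel()
    occupied = (neigh != 0).sum() - 1          # exclude the agent itself
    same = (neigh == group).sum() - 1
    return same / occupied if occupied > 0 else 1.0

for _ in range(20):                            # a few rounds of myopic moves
    unhappy = [(i, j) for i in range(N) for j in range(N)
               if grid[i, j] != 0 and similar_share(grid, i, j) < threshold]
    empties = list(zip(*np.nonzero(grid == 0)))
    rng.shuffle(unhappy)
    for (i, j) in unhappy:
        if not empties:
            break
        k = rng.integers(len(empties))         # pick a random vacant cell
        ei, ej = empties.pop(k)
        grid[ei, ej], grid[i, j] = grid[i, j], 0
        empties.append((i, j))
```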

Relevance: 30.00%

Abstract:

People go through life making all kinds of decisions, and some of these decisions affect their demand for transportation: for example, their choices of where to live and where to work, how and when to travel, and which route to take. Transport-related choices are typically time dependent and characterized by a large number of alternatives that can be spatially correlated. This thesis deals with models that can be used to analyze and predict discrete choices in large-scale networks. The proposed models and methods are highly relevant for, but not limited to, transport applications. We model decisions as sequences of choices within the dynamic discrete choice framework, also known as parametric Markov decision processes. Such models are known to be difficult to estimate and to apply for prediction, because dynamic programming problems must be solved in order to compute choice probabilities. In this thesis we show that it is possible to exploit the network structure and the flexibility of dynamic programming so that the dynamic discrete choice modeling approach is not only useful for modeling time-dependent choices, but also makes it easier to model large-scale static choices. The thesis consists of seven articles containing a number of models and methods for estimating, applying and testing large-scale discrete choice models. We group the contributions under three themes: route choice modeling, large-scale multivariate extreme value (MEV) model estimation, and nonlinear optimization algorithms. Five articles relate to route choice modeling. We propose different dynamic discrete choice models that allow paths to be correlated, based on the MEV and mixed logit models. The resulting route choice models become expensive to estimate, and we address this challenge with methods that reduce the estimation cost. For example, we propose a decomposition method that not only opens up the possibility of mixing, but also speeds up estimation for simple logit models, with implications for traffic simulation as well. Moreover, we compare the utility maximization and regret minimization decision rules, and we propose a misspecification test for logit-based route choice models. The second theme concerns the estimation of static discrete choice models with large choice sets. We establish that a class of MEV models can be reformulated as dynamic discrete choice models on networks of correlation structures. These dynamic models can then be estimated quickly using dynamic programming techniques and an efficient nonlinear optimization algorithm. Finally, the third theme focuses on structured quasi-Newton techniques for estimating discrete choice models by maximum likelihood. We examine and adapt switching methods that can easily be integrated into standard optimization algorithms (line search and trust region) to accelerate estimation. The proposed dynamic discrete choice models and estimation methods can be used in various discrete choice applications. In the area of big data analytics, models that can deal with large choice sets and sequential choices are important. Our research can therefore be of interest for various demand analysis applications (predictive analytics) or can be integrated with optimization models (prescriptive analytics). Furthermore, our studies indicate the potential of dynamic programming techniques in this context, even for static models, which opens up a variety of future research directions.
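
A sketch of the machinery the thesis builds on, assuming i.i.d. extreme value errors with scale μ and, for simplicity here, deterministic transitions s' = T(s, a): the value function has a closed logsum form, choice probabilities are logit, and this is why computing probabilities requires solving a dynamic programming problem.

```latex
% Expected value function (logsum recursion):
V(s) = \mu \ln \sum_{a \in A(s)} \exp\!\left( \frac{u(s,a) + \beta V(T(s,a))}{\mu} \right)
% Choice probabilities take the logit form:
P(a \mid s) = \frac{\exp\!\left( [\, u(s,a) + \beta V(T(s,a)) \,]/\mu \right)}
                   {\sum_{a' \in A(s)} \exp\!\left( [\, u(s,a') + \beta V(T(s,a')) \,]/\mu \right)}
```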

Relevance: 30.00%

Abstract:

Classical regression analysis can be used to model time series. However, the assumption that model parameters are constant over time is not always appropriate for the data. In phytoplankton ecology, the relevance of time-varying parameter values has been shown using a dynamic linear regression model (DLRM). DLRMs, belonging to the class of Bayesian dynamic models, assume the existence of a non-observable time series of model parameters, which are estimated on-line, i.e. after each observation. The aim of this paper is to show how DLRM results can be used to explain variation in a time series of phytoplankton abundance. We applied DLRM to daily concentrations of Dinophysis cf. acuminata, determined in Antifer harbour (French coast of the English Channel), along with physical and chemical covariates (e.g. wind velocity, nutrient concentrations). A single model was built using 1989 and 1990 data, and then applied separately to each year. Equivalent static regression models were investigated for comparison. Results showed that most of the variability in Dinophysis cf. acuminata concentration was explained by the configuration of the sampling site, the wind regime and residual tidal flow. Moreover, the relationships of these factors with the concentration of the microalga varied with time, a fact that could not be detected with static regression. Application of dynamic models to phytoplankton time series, especially in a monitoring context, is discussed.
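
For reference, the generic form of a dynamic linear regression model of this class (in West and Harrison's notation): the coefficient vector θ_t is itself an unobserved time series, updated after each observation by the Kalman filter recursions.

```latex
% Observation equation: time-varying regression on covariates F_t
y_t = F_t^{\top} \theta_t + \nu_t, \qquad \nu_t \sim N(0, V_t)
% Evolution equation: the coefficients drift as a (Markovian) state process
\theta_t = G_t \, \theta_{t-1} + \omega_t, \qquad \omega_t \sim N(0, W_t)
```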

Relevance: 30.00%

Abstract:

Despite the wide swath of applications where multiphase fluid contact lines exist, there is still no consensus on an accurate and general simulation methodology. Most prior numerical work has imposed one of the many dynamic contact-angle theories at solid walls. Such approaches are inherently limited by the accuracy of the underlying theory. In fact, when inertial effects are important, the contact angle may be history dependent and, thus, any single mathematical function is inappropriate. Given these limitations, the present work has two primary goals: 1) create a numerical framework that allows the contact angle to evolve naturally with appropriate contact-line physics, and 2) develop equations and numerical methods such that contact-line simulations may be performed on coarse computational meshes.

Fluid flows affected by contact lines are dominated by capillary stresses and require accurate curvature calculations. The level set method was chosen to track the fluid interfaces because it makes accurate interface curvature calculations straightforward. Unfortunately, level set reinitialization suffers from an ill-posed mathematical problem at contact lines: a "blind spot" exists. Standard techniques to handle this deficiency are shown to introduce parasitic velocity currents that artificially deform freely floating (non-prescribed) contact angles. As an alternative, a new relaxation-equation reinitialization is proposed to remove these spurious velocity currents, and the concept is further explored with level-set extension velocities.
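
For context, the standard reinitialization equation whose contact-line behaviour is at issue (the dissertation's relaxation-equation alternative is not reproduced here):

```latex
% Classical reinitialization restores the signed-distance property |∇φ| = 1
% by evolving φ in pseudo-time τ from the current field φ_0:
\frac{\partial \phi}{\partial \tau} = \mathrm{sign}(\phi_0)\left( 1 - |\nabla \phi| \right)
```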

To capture contact-line physics, two classical boundary conditions, the Navier-slip velocity boundary condition and a fixed contact angle, are implemented in direct numerical simulations (DNS). DNS are found to converge only if the slip length is well resolved by the computational mesh. Unfortunately, since the slip length is often very small compared to the fluid structures of interest, such simulations are not computationally feasible for large systems. To address the second goal, a new methodology is proposed that relies on the volumetric-filtered Navier-Stokes equations. Two unclosed terms, an average curvature and a viscous shear term, are proposed to represent the missing microscale physics on a coarse mesh.
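
For reference, the Navier-slip condition named above, which ties the DNS resolution requirement to the slip length λ:

```latex
% Tangential wall velocity proportional to the local shear rate through the
% slip length \lambda, which the mesh must resolve for DNS to converge:
u_{\parallel}\big|_{wall} = \lambda \, \frac{\partial u_{\parallel}}{\partial n}\bigg|_{wall}
```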

All of these components are then combined into a single framework and tested for a water droplet impacting a partially wetting substrate. Very good agreement is found between the experimental measurements and the numerical simulation for the evolution of the contact diameter in time. Such a comparison would not be possible with prior methods, since the Reynolds number Re and capillary number Ca are large. Furthermore, the experimentally approximated slip length ratio is well outside the range currently achievable by DNS. This framework is a promising first step towards simulating complex physics in capillary-dominated flows at reasonable computational expense.

Relevance: 30.00%

Abstract:

Presentation

Research on the Practicum and externships has a long history and involves important aspects for analysis. For example, recent changes in university degrees allot more credits to the Practicum course across all degrees, and company-university collaboration has exposed the need to study new learning environments. The rise of ICT practices like ePortfolios, which require technological solutions and methods supported by experimentation, study and research, demands particular examination given the dynamic momentum of technological innovation. Tutoring the Practicum and externships requires remote monitoring and communication using ePortfolios, and competence-based assessment and students' obligation to provide evidence of learning call for the best tutoring methods available with ePortfolios. Among the elements of ePortfolios, eRubrics emerge as a tool for design, communication and competence assessment. This project aims to consolidate a research line on eRubrics, already undertaken by another project -I+D+i [EDU2010-15432]-, and to expand the network of researchers and Centres of Excellence in Spain and other countries: Harvard University in the USA, University of Cologne in Germany, University of Colima in Mexico, Federal University of Paraná and University of Santa Catarina in Brazil, and Stockholm University in Sweden(1). This new project [EDU2013-41974-P](2) examines the impact of eRubrics on tutoring and on assessing the Practicum course and externships. Through technology, distance tutoring grants an extra dimension to human communication. New forms of teaching with technological mediation are on the rise and are highly valuable, not only in formal education but especially in public and private sectors of non-formal education, such as occupational training, education for the unemployed and public servant training.

Objectives

Obj. 1. To analyse models of technology used in assessing learning in the Practicum across all degrees at Spanish Faculties of Education. Obj. 2. To study models of learning assessment mediated by eRubrics in the Practicum. Obj. 3. To analyse communication through eRubrics between students and their tutors at university and practice centres, focusing on students' understanding of the competences and evidences to be assessed in the Practicum. Obj. 4. To design assessment services and products, in order to federate companies and practice centres with training institutions.

Among many other features, the CoRubric platform(3) offers the following functions: 1. The possibility to assess people, products or services by using rubrics. 2. Ipsative assessment. 3. Fully flexible rubric design. 4. Drafting reports and exporting results from eRubrics in a project. 5. Dialogue between students and teachers about the evaluation and the application of the criteria.

Methodology, Methods, Research Instruments or Sources Used

The project uses techniques to collect and analyse data from two methodological approaches. 1. To meet the first objective, we propose an initial exploratory descriptive study (Buendía Eisman, Colás Bravo & Hernández Pina, 1998), which involves interviewing Practicum coordinators from all educational degrees across Spain and analysing the contents of the teaching guides used in those degrees. 55 academic managers from 10 faculties of education at public universities in Spain (20%) were interviewed, and 376 course guides from 36 public institutions in Spain (72%) were analysed. 2. To satisfy the second objective, 7 universities were selected to implement the project's two instruments, aimed at tutors at practice centres and faculty tutors. All data collection instruments were validated by experts using the Delphi method. Experts were selected on three criteria: years of professional experience; number and quality of publications in the field (Practicum, educational technology and teacher training); and self-rating of their knowledge. The resulting data were summarized using the Coefficient of Competence (Kcomp) (Martínez, Zúñiga, Sala & Meléndez, 2012); results in all cases showed an average above 0.09 points. The two instruments for the first objective were validated during the first half of the 2014-15 year, with data collected during the second half; the instruments for the second objective were validated during the first half of the 2015-16 year, with data collection in the second half. The set of four instruments (two for each of objectives 1 and 2) share the same dimensions across all sources (coordinators, course guides, tutors at practice centres and faculty tutors): a. institution-organization; b. nature of internships; c. relationship between agents; d. Practicum management; e. assessment; f. technological support; g. training; and h. assessment ethics.

Conclusions, Expected Outcomes or Findings

The first results respond to Objective 1, with different conclusions for each of the six dimensions. Regarding the internal regulations governing the organization and structure of the Practicum, most traditional degrees (Elementary and Primary) share common internal rules, particularly on development methodology and criteria, in contrast to other degrees (Pedagogy and Social Education). The practice centres in the latter cases are very different from each other and can be a public institution, a school, a company, a museum, etc. The final report (56.34%) and daily activity logs (43.67%) are the demands most frequently placed on students across all degrees, followed by lesson plans (28.18%), portfolios (19.72%), didactic units (26.7%) and others (32.4%). The technological supports most used were the university platform (47.89%) and email (57.75%), followed by other services and tools (9.86%) and rubric platforms (1.41%). Assessment criteria are divided among formal aspects (12.38%), written expression (12.38%), treatment of the subject (14.45%), methodological rigour of the work (10.32%), and level of argument and clarity and relevance of conclusions (10.32%). In general terms, there is a trend and an ongoing debate between formative assessment and assessment for accreditation. There has not yet been sufficient time to study and compare the other dimensions and sources of information; we expect to present further analysis and conclusions at the conference.

Relevance: 30.00%

Abstract:

As research into the dynamic characteristics of job performance across time has continued to accumulate, associated implications for performance appraisal have become evident. At present, several studies have demonstrated that systematic trends in job performance across time influence how performance is ultimately judged. However, little research has considered the processes by which the performance trend-performance rating relationship occurs. In the present study, I addressed this gap. Specifically, drawing on attribution theory, I proposed and tested a model whereby the performance trend-performance rating relationship occurs through attributions to ability and effort. The results of this study indicated that attributions to ability, but not effort, mediate the relationship between performance trend and performance ratings and that this relationship depends on attribution-related cues. Implications for performance appraisal research and theory are discussed.

Relevance: 20.00%

Abstract:

A graphene and carbon nanotube nanocomposite (GCN) was synthesised and applied in gene transfection of the pIRES plasmid conjugated with green fluorescent protein (GFP) in NIH-3T3 and NG97 cell lines. The tips of the multi-walled carbon nanotubes (MWCNTs) were exfoliated by oxygen plasma etching, which also attaches oxygen-containing groups to the MWCNT surfaces, changing their hydrophobicity. The nanocomposite was characterised by high-resolution scanning electron microscopy; energy-dispersive X-ray, Fourier transform infrared and Raman spectroscopies; and zeta potential and particle size analyses using dynamic light scattering. BET adsorption isotherms showed the GCN to have an effective surface area of 38.5 m²/g. The GCN and the pIRES plasmid conjugated with the GFP gene formed π-stacking interactions when dispersed in water by magnetic stirring, resulting in a helical wrap. The measured zeta potential confirmed that the plasmid was connected to the nanocomposite. The NIH-3T3 and NG97 cell lines could phagocytize this wrap. Gene transfection was characterised by the fluorescent protein produced in the cells, imaged by fluorescence microscopy. Before application, we studied GCN cell viability in the NIH-3T3 and NG97 cell lines using both MTT and Neutral Red uptake assays. Our results suggest that GCN has moderately stable behaviour as a colloidal solution and great potential as a gene carrier agent in non-viral therapy, with low cytotoxicity and good transfection efficiency.

Relevance: 20.00%

Abstract:

The aim was to verify whether fluorescence in situ hybridization (FISH) of cells from the buccal epithelium could be employed to detect cryptomosaicism with a 45,X lineage in 46,XY patients. Samples from nineteen healthy 46,XY young men and five patients with disorders of sex development (DSD), four 45,X/46,XY and one 46,XY, were used. Interphase nuclei from blood lymphocytes and buccal epithelium were analysed by FISH with X- and Y-specific probes to determine the proportion of nuclei containing only the X chromosome signal. The frequency of nuclei containing only the X signal did not differ between the two tissues in healthy men (p = 0.69). In all patients with DSD this frequency was significantly higher, and again there was no difference between the two tissues (p = 0.38). Investigation of mosaicism with a 45,X cell line in patients with 46,XY DSD or sterility can therefore be done by FISH directly on cells from the buccal epithelium.

Relevance: 20.00%

Abstract:

In this study, transmission-line modeling (TLM) applied to bio-thermal problems was improved by incorporating several novel computational techniques, including graded meshes, which computed 9 times faster and used only a fraction (16%) of the computational resources required by regular meshes when analysing heat flow through heterogeneous media. Graded meshes, unlike regular meshes, allow heat sources to be modeled in all segments of the mesh. A new boundary condition that accounts for thermal properties, resulting in more realistic modeling of complex problems, is introduced, along with a new way of calculating an error parameter. The calculated temperatures between nodes were compared against results from the literature and agreed to within 1%. It is reasonable, therefore, to conclude that the improved TLM model described herein has great potential for heat transfer analysis of biological systems.
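
The abstract does not state the governing equation; for context, bio-thermal models of this kind commonly solve the Pennes bioheat equation, sketched here as assumed background:

```latex
% Pennes bioheat equation: tissue density \rho, heat capacity c, conductivity
% k, blood perfusion term with arterial temperature T_a and perfusion rate
% \omega_b, and metabolic heat generation q_m.
\rho c \, \frac{\partial T}{\partial t}
  = \nabla \cdot (k \nabla T) + \rho_b c_b \omega_b \left( T_a - T \right) + q_m
```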

Relevance: 20.00%

Abstract:

Current data indicate that the size of high-density lipoprotein (HDL) may be an important marker of cardiovascular disease risk. We established reference values for mean HDL size and volume in an asymptomatic, representative Brazilian population sample (n=590), together with their associations with metabolic parameters by gender. Size and volume were determined in HDL isolated from plasma by polyethylene glycol precipitation of apoB-containing lipoproteins and measured using the dynamic light scattering (DLS) technique. Although the gender and age distributions agreed with other studies, the mean HDL size reference value was slightly lower than in some other populations. Both HDL size and volume were influenced by gender and varied with age. HDL size was associated with age and HDL-C in the total population; inversely with non-white ethnicity and CETP in females; and with HDL-C and PLTP mass in males. HDL volume, on the other hand, was determined only by HDL-C (total population and both genders) and by PLTP mass (males). The reference values for mean HDL size and volume using the DLS technique were thus established in an asymptomatic, representative Brazilian population sample, as well as their related metabolic factors. HDL-C was a major determinant of HDL size and volume, which were modulated differently in females and males.
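
For reference, DLS sizing rests on the Stokes-Einstein relation, which converts the measured diffusion coefficient into a hydrodynamic diameter:

```latex
% k_B = Boltzmann constant, T = absolute temperature, \eta = solvent
% viscosity, D = translational diffusion coefficient measured by DLS:
d_H = \frac{k_B T}{3 \pi \eta D}
```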

Relevance: 20.00%

Abstract:

The objective of this study is to examine the dynamics between fiscal policy, measured by public debt, and monetary policy, measured by a central bank reaction function. Changes in monetary policy due to deviations from its targets always generate fiscal impacts. We examine two policy reaction functions: the first related to inflation targets and the second related to economic growth targets. We find that the condition for stable equilibrium is more restrictive in the first case than in the second. We then apply our simulation model to Brazil and the United Kingdom and find that the equilibrium is unstable in the Brazilian case but stable in the UK case.
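
A generic sketch of the two ingredients, with functional forms assumed for illustration (the abstract does not give the paper's exact equations): a Taylor-type reaction function for the inflation-target case, and the standard debt-to-GDP law of motion it feeds into.

```latex
% Inflation-target reaction function (illustrative Taylor-type rule):
i_t = r^* + \pi_t + \alpha \left( \pi_t - \pi^* \right)
% Debt-to-GDP evolution, with primary surplus ratio s_t and real growth g_t:
b_{t+1} = \frac{1 + i_t}{(1 + \pi_t)(1 + g_t)} \, b_t - s_t
% Stability requires the coefficient multiplying b_t to stay below one, so a
% more aggressive rule (larger \alpha) tightens the equilibrium condition.
```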

Relevance: 20.00%

Abstract:

Riboflavin (vitamin B2) is a precursor for coenzymes involved in energy production, biosynthesis, detoxification, and electron scavenging. Previously, we demonstrated that irradiated riboflavin (IR) has potential antitumoral effects against human leukemia cells (HL60), human prostate cancer cells (PC3), and mouse melanoma cells (B16F10) through a common mechanism that leads to apoptosis. Hence, here we investigated the effect of IR on 786-O cells, a known model cell line for clear cell renal cell carcinoma (CCRCC), which is characterized by high metastatic risk and chemotherapy resistance. IR also induced cell death in 786-O cells by apoptosis, which was not prevented by antioxidant agents. IR treatment was characterized by downregulation of Fas ligand (TNF superfamily, member 6)/Fas (TNF receptor superfamily member 6) (FasL/Fas) and tumor necrosis factor receptor superfamily, member 1a (TNFR1)/TNFRSF1A-associated via death domain (TRADD)/TNF receptor-associated factor 2 (TRAF) signaling pathways (the extrinsic apoptosis pathway), while the intrinsic apoptotic pathway was upregulated, as observed by an elevated Bcl-2 associated x protein/B-cell CLL/lymphoma 2 (Bax/Bcl-2) ratio, reduced cellular inhibitor of apoptosis 1 (c-IAP1) expression, and increased expression of apoptosis-inducing factor (AIF). The observed cell death was caspase-dependent, as proven by caspase 3 activation and poly(ADP-ribose) polymerase-1 (PARP) cleavage. IR-induced cell death was also associated with downregulation of the v-src sarcoma (Schmidt-Ruppin A-2) viral oncogene homologue (avian)/protein serine/threonine kinase B/extracellular signal-regulated protein kinase 1/2 (Src/AKT/ERK1/2) pathway and activation of p38 MAP kinase (p38) and Jun-amino-terminal kinase (JNK). Interestingly, IR treatment leads to inhibition of matrix metalloproteinase-2 (MMP-2) activity and reduced expression of the renal cancer aggressiveness markers caveolin-1, low molecular weight phosphotyrosine protein phosphatase (LMWPTP), and kinase insert domain receptor (a type III receptor tyrosine kinase) (VEGFR-2). Together, these results show the potential of IR for treating cancer.