60 results for Transactional Distance Theory


Relevance: 20.00%

Abstract:

This letter presents an extension of an existing ground distance relay algorithm to phase distance relays. The algorithm uses a fault resistance estimation process in the phase domain, improving the efficiency of the distance protection process. The results show that the algorithm is suitable for online applications and that its performance is independent of the fault resistance magnitude, the fault location, and the line asymmetry.
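The letter's estimation procedure is not reproduced here, but the distance protection principle it builds on can be sketched: the relay computes an apparent impedance from local voltage and current phasors and trips when it falls inside the zone reach, while fault resistance shifts that impedance and can cause under-reach. A minimal Python illustration, with all phasor and setting values hypothetical:

```python
# Hedged sketch of the basic distance protection principle (not the letter's algorithm).
# All numerical values below are hypothetical placeholders.

# Relay-point phasors for a hypothetical phase-to-ground fault loop (volts, amps)
V = 52_000 * complex(0.95, -0.31)   # measured voltage phasor
I = 1_800 * complex(0.80, -0.60)    # measured current phasor

Z_line = complex(2.0, 20.0)         # positive-sequence impedance of the protected line (ohm)
zone1_reach = 0.8 * abs(Z_line)     # classic 80 % zone-1 setting

Z_apparent = V / I                  # apparent impedance seen by the relay
# Fault resistance adds a (mostly) resistive offset to Z_apparent,
# which is why estimating it improves the reach accuracy.
trip = abs(Z_apparent) < zone1_reach
print(f"Z_apparent = {Z_apparent.real:.2f} + j{Z_apparent.imag:.2f} ohm, trip = {trip}")
```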

Relevance: 20.00%

Abstract:

This work presents the development and implementation of an artificial neural network (ANN) based algorithm for transmission line distance protection. The algorithm was developed to be used on any transmission line regardless of its configuration or voltage level. The described ANN-based algorithm does not need any topology adaptation or ANN parameter adjustment when applied to different electrical systems. This feature makes the solution unique, since the ANN-based solutions presented to date were developed for particular transmission lines, which means they cannot be implemented in commercial relays. (c) 2011 Elsevier Ltd. All rights reserved.
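The paper's actual network topology and training data are not given in this abstract, so the following is only a generic sketch of how an ANN-based fault classifier for distance protection might be assembled; the feature set, labels, and layer sizes are invented for illustration:

```python
# Generic ANN fault-detection sketch (not the paper's architecture).
# Features, labels and layer sizes are hypothetical placeholders.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
# Hypothetical features: per-phase voltage and current magnitudes at the relay bus
X = rng.normal(loc=1.0, scale=0.2, size=(400, 6))
# Placeholder label: "fault inside the protected zone" when one simulated current is high
y = (X[:, 3] > 1.15).astype(int)

clf = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=1000, random_state=0)
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```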

Relevance: 20.00%

Abstract:

Concrete offshore platforms are subjected to several loading combinations and therefore require an analysis that is as general as possible. They can be designed using the concepts adopted for shell elements, but the resistance to shear forces must be verified in particular cross-sections. This work addresses the design of shell elements using the three-layer shell theory. The elements are subjected to combined membrane and plate loading, totaling eight internal force components: three membrane forces, three moments (two out-of-plane bending moments and one in-plane, or torsion, moment), and two shear forces. The adopted design method, which uses the iterative process proposed by Lourenco & Figueiras (1993) based on the equilibrium equations developed by Gupta (1986), is compared with results of experimentally tested shell elements found in the literature, using the program DIANA.
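In the usual shell notation (the symbols below follow common practice and are not necessarily the paper's own), the eight internal force components acting on such an element can be grouped as:

```latex
% Eight internal force components of a shell element (standard notation;
% the paper's own symbols may differ):
\underbrace{n_x,\; n_y,\; n_{xy}}_{\text{membrane forces}}, \qquad
\underbrace{m_x,\; m_y}_{\text{out-of-plane bending moments}}, \quad
\underbrace{m_{xy}}_{\text{in-plane (torsion) moment}}, \qquad
\underbrace{v_x,\; v_y}_{\text{transverse shear forces}}
```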

Relevance: 20.00%

Abstract:

As many countries move toward water sector reforms, practical questions have emerged about how water management institutions can better effect the allocation, regulation, and enforcement of water rights. The problem of nonavailability of water to tail-enders on an irrigation system in developing countries, due to unlicensed upstream diversions, is well documented. The reliability of access, or equivalently the uncertainty associated with water availability at their diversion point, becomes a parameter that is likely to influence users' applications for water licenses, as well as their willingness to pay for licensed use. The ability of a water agency to reduce this uncertainty through effective water rights enforcement is related to the fiscal ability of the agency to monitor and enforce licensed use. In this paper, this interplay between the users and the agency is explored, considering the hydraulic structure or sequence of water use and the parameters that define the users' and the agency's economics. The potential for free-rider behavior by the users, as well as their proposals for licensed use, are derived conditional on this setting. The analyses are developed in the framework of the theory of "Law and Economics," with user interactions modeled as a game-theoretic enterprise. The state of Ceará, Brazil, is used loosely as an example setting, with parameter values for the experiments indexed to be approximately those relevant for current decisions. The potential for using these ideas in participatory decision making is discussed. This paper is an initial attempt to develop a conceptual framework for analyzing such situations, with a focus on water rights enforcement in a reservoir-canal system.
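The abstract does not spell out the payoff structure, but the free-rider logic can be illustrated with a deliberately simplified enforcement calculation; every number and functional form below is hypothetical and not taken from the paper:

```python
# Toy enforcement calculation illustrating the free-rider incentive
# (not the paper's model; all values are hypothetical).

def expected_payoff_unlicensed(benefit, fine, monitoring_prob):
    """Expected payoff of diverting water without a license."""
    return benefit - monitoring_prob * fine

def expected_payoff_licensed(benefit, license_fee):
    """Payoff of diverting the same water under a paid license."""
    return benefit - license_fee

benefit, fine, license_fee = 100.0, 300.0, 40.0
for p in (0.05, 0.15, 0.40):   # agency monitoring intensity
    free_ride = expected_payoff_unlicensed(benefit, fine, p)
    comply = expected_payoff_licensed(benefit, license_fee)
    verdict = "free-riding pays" if free_ride > comply else "compliance pays"
    print(f"monitoring={p:.2f}: unlicensed={free_ride:6.1f}  licensed={comply:6.1f}  {verdict}")
```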

Relevance: 20.00%

Abstract:

In this paper a bond graph methodology is used to model incompressible fluid flows with viscous and thermal effects. The distinctive characteristic of these flows is the role of pressure, which does not behave as a state variable but as a function that must act in such a way that the resulting velocity field has zero divergence. Velocity and entropy per unit volume are used as independent variables for a single-phase, single-component flow. Time-dependent nodal values and interpolation functions are introduced to represent the flow field, from which nodal vectors of velocity and entropy are defined as state variables. The system of momentum and continuity equations coincides with the one obtained by applying the Galerkin method to the weak formulation of the problem in finite elements. The integral incompressibility constraint is derived from the integral conservation of mechanical energy. The weak formulation of the thermal energy equation is modeled with true bond graph elements in terms of nodal vectors of temperature and entropy rates, resulting in a Petrov-Galerkin method. The resulting bond graph shows the coupling between the mechanical and thermal energy domains through the viscous dissipation term. All kinds of boundary conditions are handled consistently and can be represented as generalized effort or flow sources. A procedure for causality assignment is derived for the resulting graph, satisfying the second principle of thermodynamics. (C) 2007 Elsevier B.V. All rights reserved.
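In symbols (standard finite element notation, not necessarily the paper's), the nodal representation of the flow field and the incompressibility condition it must respect read:

```latex
% Nodal (Galerkin) representation of the velocity field and the weak
% incompressibility constraint, in standard notation:
\mathbf{v}(\mathbf{x},t) \;\approx\; \sum_{i} N_i(\mathbf{x})\,\mathbf{v}_i(t),
\qquad
\nabla\cdot\mathbf{v} = 0
\;\;\Longrightarrow\;\;
\int_{\Omega} q\,(\nabla\cdot\mathbf{v})\;d\Omega = 0
\quad \text{for all admissible weight functions } q .
```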

Relevance: 20.00%

Abstract:

The classical approach for acoustic imaging consists of beamforming, and produces the source distribution of interest convolved with the array point spread function. This convolution smears the image of interest, significantly reducing its effective resolution. Deconvolution methods have been proposed to enhance acoustic images and have produced significant improvements. Other proposals involve covariance fitting techniques, which avoid deconvolution altogether. However, in their traditional presentation, these enhanced reconstruction methods have very high computational costs, mostly because they have no means of efficiently transforming back and forth between a hypothetical image and the measured data. In this paper, we propose the Kronecker Array Transform (KAT), a fast separable transform for array imaging applications. Under the assumption of a separable array, it enables the acceleration of imaging techniques by several orders of magnitude with respect to the fastest previously available methods, and enables the use of state-of-the-art regularized least-squares solvers. Using the KAT, one can reconstruct images with higher resolutions than was previously possible and use more accurate reconstruction techniques, opening new and exciting possibilities for acoustic imaging.
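The KAT itself is defined in the paper; what makes separability so effective is the standard Kronecker-product identity that lets one apply (A ⊗ B) without ever forming it. A small NumPy check of that identity (matrix sizes are arbitrary and purely illustrative):

```python
# The Kronecker identity behind fast separable array transforms:
# (A ⊗ B) vec_r(X) = vec_r(A X Bᵀ), with row-major vectorization.
# Sizes are arbitrary; this only illustrates the identity, not the KAT itself.
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(5, 3))      # acts along one array axis
B = rng.normal(size=(4, 2))      # acts along the other array axis
X = rng.normal(size=(3, 2))      # hypothetical image coefficients

slow = np.kron(A, B) @ X.ravel()   # forms the full (20 x 6) operator explicitly
fast = (A @ X @ B.T).ravel()       # never forms the Kronecker product

print(np.allclose(slow, fast))     # True
```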

Relevance: 20.00%

Abstract:

The article explores the relationships between distance education, information and communication technologies, and teacher education. Its focus is on interactive media and its uses in an in-service teacher education program in Brazil, and on the ways the teachers appropriated the technologies for their own purposes. It starts from the presuppositions of the knowledge society, that is, the close relationships between new technologies, continuing professional development, and social inclusion, arguing that this paradigm is an ideological discourse. The article shows how the teachers have used the technologies in creative ways, calling attention to the importance of these abilities as a basic skill for facing the challenges of the knowledge society itself.

Relevance: 20.00%

Abstract:

This paper analyses the applicability of the main enterprise internationalization theories to the entry of multinational corporations into Brazil, across five phases of the Brazilian economy, from 1850 to the present. It seeks to verify the explanatory power of each theory over the FDI flows into Brazil. It concludes that there is a contingent relation between the theories and the phases of the economy, and it shows this relationship in a table. In addition, it concludes that the most powerful theory over the researched period was Dunning's eclectic paradigm, mainly due to its localization considerations. Theoretical propositions are put forward as a contribution to future research.

Relevance: 20.00%

Abstract:

Discussions opposing the Theory of the Firm to the Theory of Stakeholders are contemporaneous and polemical. One focal point of such debates refers to which objective function companies should choose, whether that of the shareholders or that of the stakeholders, and whether it is possible to opt for both simultaneously. Several empirical studies have attempted to test a possible correlation between both functions, and there has been no consensus so far. The objective of the present research is to examine a gap in such discussions: is there (or not) a subordination of the stakeholders' objective function to that of the shareholders? The research is empirical and analytical and employs quantitative methods. Hypotheses were tested and data analyzed using non-parametric (chi-square test) and parametric procedures (frequency, correlation coefficient). Secondary data were collected from the Economática database and from the Brazilian Institute of Social and Economic Analyses (IBASE) website, for public companies that published their Social Balance Statements following the IBASE model from 1999 to 2006; the sample amounted to 65 companies. To assess the shareholders' objective function, a proxy was created based on three indices: ROE (return on equity), Enterprise Value, and Tobin's Q. To assess the stakeholders' objective function, a proxy was created from the following IBASE social balance indices: internal (ISI), external (ISE), and environmental (IAM). The results showed no evidence of subordination of the stakeholders' objective function to that of the shareholders in the analyzed companies, negating initial expectations and calling for deeper investigation of the results. The main conclusion, that the hypothesized subordination does not take place, is limited to the sample investigated here and calls for ongoing research aiming at improvements that may lead to sample enlargement and, as a consequence, make feasible the application of other statistical techniques yielding a more thorough analysis of the studied phenomenon.
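The exact proxy construction is described in the paper; purely as an illustration of the kinds of tests mentioned (correlation coefficient and chi-square), a hedged sketch on synthetic data could look like this:

```python
# Illustrative only: the two proxies below are random placeholders, not the
# ROE / Enterprise Value / Tobin's Q and ISI / ISE / IAM composites used in the paper.
import numpy as np
from scipy.stats import pearsonr, chi2_contingency

rng = np.random.default_rng(42)
shareholder_proxy = rng.normal(size=65)   # hypothetical shareholder objective-function proxy
stakeholder_proxy = rng.normal(size=65)   # hypothetical stakeholder objective-function proxy

r, p_corr = pearsonr(shareholder_proxy, stakeholder_proxy)

# Chi-square on a 2x2 table of above/below-median classifications
sh_med = np.median(shareholder_proxy)
st_med = np.median(stakeholder_proxy)
table = np.array([
    [np.sum((shareholder_proxy > sh_med) & (stakeholder_proxy > st_med)),
     np.sum((shareholder_proxy > sh_med) & (stakeholder_proxy <= st_med))],
    [np.sum((shareholder_proxy <= sh_med) & (stakeholder_proxy > st_med)),
     np.sum((shareholder_proxy <= sh_med) & (stakeholder_proxy <= st_med))],
])
chi2, p_chi2, dof, _ = chi2_contingency(table)

print(f"Pearson r = {r:.3f} (p = {p_corr:.3f}); chi-square = {chi2:.2f} (p = {p_chi2:.3f})")
```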

Relevance: 20.00%

Abstract:

We conducted two psychophysical experiments to investigate the relationship between the processing mechanisms for exocentric distance and direction. In the first experiment, the task was to discriminate exocentric distances. In the second, the task was to discriminate exocentric directions. The individual effects of distance and direction on each task were dissociated by analyzing their corresponding psychophysical functions. Under stereoscopic viewing conditions, distance judgments of exocentric intervals were not affected by exocentric direction. However, direction judgments were influenced by the distance between the pair of stimuli. Therefore, the mechanism processing exocentric direction depends on exocentric distance, but the mechanism processing exocentric distance does not require exocentric direction measures. As a result, we suggest that exocentric distance and direction are hierarchically processed, with distance preceding direction. Alternatively, and more probably, a necessary condition for processing the exocentric direction between two stimuli may be knowing the location of each of them.

Relevance: 20.00%

Abstract:

This note is motivated by some recent papers treating the problem of the existence of a solution for abstract differential equations with fractional derivatives. We show that the existence results in [Agarwal et al. (2009) [1], Belmekki and Benchohra (2010) [2], Darwish et al. (2009) [3], Hu et al. (2009) [4], Mophou and N'Guerekata (2009) [6,7], Mophou (2010) [8,9], Muslim (2009) [10], Pandey et al. (2009) [11], Rashid and El-Qaderi (2009) [12] and Tai and Wang (2009) [13]] are incorrect, since the variation of constants formulas considered there are not appropriate. In this note, we also consider a different approach to treat a general class of abstract fractional differential equations. (C) 2010 Elsevier Ltd. All rights reserved.
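For orientation only (the specific formulas at issue are in the cited papers), the abstract problems in question typically involve a Caputo fractional derivative of order α in (0, 1) and a Cauchy problem of the following generic form:

```latex
% Caputo fractional derivative of order 0 < \alpha < 1 and the abstract
% Cauchy problem typically considered in these papers (notation is generic):
{}^{C}D^{\alpha}_{t}u(t) \;=\; \frac{1}{\Gamma(1-\alpha)}
  \int_{0}^{t} (t-s)^{-\alpha}\, u'(s)\, ds ,
\qquad
{}^{C}D^{\alpha}_{t}u(t) = A\,u(t) + f(t, u(t)), \quad u(0) = u_0 .
```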

Relevance: 20.00%

Abstract:

A long-standing challenge of content-based image retrieval (CBIR) systems is the definition of a suitable distance function that measures the similarity between images in a given application context in a way that complies with the human perception of similarity. In this paper, we present a new family of distance functions, called attribute concurrence influence distances (AID), which serve to retrieve images by similarity. These distances address an important aspect of the psychophysical notion of similarity in comparisons of images: the effect of concurrent variations in the values of different image attributes. The AID functions allow feature vectors to be compared by choosing one of two parameterized expressions: one targeting weak attribute concurrence influence and the other strong concurrence influence. This paper presents the mathematical definition and implementation of the AID family for a two-dimensional feature space and its extension to any dimension. The composition of the AID family with the L_p distance family is considered in order to propose a procedure for determining the best distance for a specific application. Experimental results involving several sets of medical images demonstrate that, taking as reference the perception of the specialist in the field (the radiologist), the AID functions perform better than the general distance functions commonly used in CBIR.
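The AID expressions themselves are defined in the paper and are not reproduced here; the L_p (Minkowski) family they are composed with is standard, and a minimal implementation looks like this:

```python
# Standard L_p (Minkowski) distance between feature vectors -- the baseline
# family the AID distances are composed with (the AID formulas themselves are
# defined in the paper and not reproduced here).
import numpy as np

def lp_distance(u: np.ndarray, v: np.ndarray, p: float = 2.0) -> float:
    """Minkowski distance of order p between two feature vectors."""
    return float(np.sum(np.abs(u - v) ** p) ** (1.0 / p))

# Example: two hypothetical 4-dimensional image feature vectors
u = np.array([0.2, 0.5, 0.1, 0.9])
v = np.array([0.3, 0.4, 0.4, 0.7])
print(lp_distance(u, v, p=1), lp_distance(u, v, p=2))   # Manhattan and Euclidean
```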

Relevance: 20.00%

Abstract:

Background: Attention deficit hyperactivity disorder (ADHD) is a clinically significant disorder in adulthood, but current diagnostic criteria and instruments do not seem to adequately capture the complexity of the disorder in this developmental phase. Accordingly, there are limited data on the proportion of adults affected by the disorder, especially in developing countries. Method: We assessed a representative household sample of the Brazilian population for ADHD with the Adult ADHD Self-Report Scale (ASRS) Screener and evaluated the instrument according to the Rasch model of item response theory. Results: The sample comprised 3007 individuals, and the overall prevalence of positive screeners for ADHD was 5.8% [95% confidence interval (CI), 4.8-7.0]. Rasch analyses revealed the misfit of the overall sample to the expectations of the model. The evaluation of the sample stratified by age revealed that data for adolescents showed a significant fit to the model expectations, while the items completed by adults did not fit adequately. Conclusions: The lack of fit to the model for adult respondents challenges the possibility of a linear transformation of the ordinal data into interval measures and the use of parametric analyses of the data. This result suggests that diagnostic criteria and instruments for adult ADHD must take a developmental perspective into account. Moreover, it calls for further evaluation of currently employed research methods in light of modern psychometric theories. Copyright (C) 2010 John Wiley & Sons, Ltd.
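For reference, the Rasch model against which item fit is judged takes, in its simplest dichotomous form (the ASRS Screener items are actually polytomous, so this is only the basic shape of the model), the standard expression with person parameter θ and item difficulty b:

```latex
% Dichotomous Rasch model: probability that person n endorses item i,
% given person ability \theta_n and item difficulty b_i.
P(X_{ni} = 1 \mid \theta_n, b_i) \;=\;
  \frac{\exp(\theta_n - b_i)}{1 + \exp(\theta_n - b_i)} .
```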

Relevance: 20.00%

Abstract:

Purpose: To evaluate the effect of light guide distance and different photoactivation methods on the degree of conversion (DC) and microleakage of a composite. Methods and Materials: Three photoactivation protocols (600 mW/cm² x 40 seconds; 400 mW/cm² x 60 seconds; or 200 mW/cm² x 20 seconds followed by 500 mW/cm² x 40 seconds) and three distances from the light source (0, 3 or 7 mm) were tested. Cylindrical specimens (5 mm diameter; 2 mm tall; n=3) were prepared for the DC test (FT-Raman). Class V cavities were made in 90 bovine incisors for the microleakage test. The specimens were conditioned for 15 seconds with 37% phosphoric acid, followed by application of the adhesive system Prime & Bond NT (Dentsply/Caulk). The preparations were restored in bulk. The specimens were stored for 24 hours in distilled water (37 °C) before being submitted to the silver nitrate microleakage protocol. The restorations were sectioned and analyzed under 25x magnification. Results: Statistical analyses (two-way ANOVAs and Tukey test, alpha=0.05) found significance only for the factor distance (p=0.015) at the top of the composite in the DC test. Conversion was statistically lower for the 7 mm groups compared to the 0 and 3 mm groups, which were equivalent to each other. At the bottom of the specimens, none of the factors or interactions was significant. The Kruskal-Wallis test showed that, in general, the soft-start method led to lower microleakage scores than the continuous modes, mainly when associated with a distance of 7 mm (p<0.01). With the exception of specimens irradiated at 400 mW/cm², which did not show variations in scores for the distances tested, higher microleakage was observed for shorter distances from the light source. Conclusions: Soft-start methods may reduce microleakage when the light guide distance provides a low level of irradiance, which also causes a slight reduction in the DC.
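A detail implicit in the protocol values is that all three methods deliver the same nominal radiant exposure at the light-guide tip (the irradiance actually reaching the specimen of course drops as the distance increases), so the comparison isolates the delivery method rather than the total energy:

```latex
% Nominal radiant exposure H = irradiance x time, computed from the stated protocol values:
H_{1} = 0.600~\mathrm{W/cm^{2}} \times 40~\mathrm{s} = 24~\mathrm{J/cm^{2}}, \qquad
H_{2} = 0.400~\mathrm{W/cm^{2}} \times 60~\mathrm{s} = 24~\mathrm{J/cm^{2}},
\\
H_{\text{soft-start}} = 0.200~\mathrm{W/cm^{2}} \times 20~\mathrm{s}
  + 0.500~\mathrm{W/cm^{2}} \times 40~\mathrm{s} = 4 + 20 = 24~\mathrm{J/cm^{2}} .
```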

Relevance: 20.00%

Abstract:

Objectives: The aim of this study was to assess the influence of irradiation distance and the use of cooling on the efficacy of the Er:YAG laser in preventing enamel demineralization. Methods: 84 enamel blocks were randomly assigned to seven groups (n = 12): G1: control group, no treatment; G2-G7: experimental groups treated with the Er:YAG laser (80 mJ/2 Hz) at different irradiation distances, with or without cooling: G2: 4 mm/2 mL; G3: 4 mm/no cooling; G4: 8 mm/2 mL; G5: 8 mm/no cooling; G6: 16 mm/2 mL; G7: 16 mm/no cooling. The samples were submitted to in vitro pH cycling for 14 days. Next, the specimens were cut into sections 80-100 μm thick, and the demineralization patterns of the prepared slices were assessed using a polarized light microscope. Three samples from each group were analyzed with scanning electron microscopy. Analysis of variance and the Fisher test were performed for the statistical analysis of the data obtained from the caries lesion depth measurements (CLDM) (alpha = 5%). Results: The control group (CLDM = 0.67 mm) was statistically different from group 2 (CLDM = 0.42 mm), which presented a smaller lesion depth, and from group 6 (0.91 mm), which presented a greater lesion depth. The results of groups 3 (CLDM = 0.74 mm), 4 (CLDM = 0.70 mm), 5 (CLDM = 0.67 mm) and 7 (CLDM = 0.89 mm) were statistically similar. The scanning electron microscopy analysis showed ablation areas in the samples from groups 4, 5, 6 and 7, and a slightly demineralized area in group 2. Conclusions: It was possible to conclude that the Er:YAG laser was efficient in preventing enamel demineralization at a 4-mm irradiation distance with cooling. (C) 2010 Elsevier Ltd. All rights reserved.