30 results for Mannheim metric


Relevance:

10.00%

Publisher:

Abstract:

This work proposes the design, performance evaluation, and a methodology for tuning the initial parameters of the output membership functions (MFs) of a function-based Takagi-Sugeno-Kang Fuzzy-PI controller for neutralizing the pH in a stirred-tank reactor. The controller is designed to perform pH neutralization in industrial plants, mainly in units found in oil refineries, where mitigating uncertainties and nonlinearities is strongly required. In addition, it adapts to changes in the pH-regulating process, avoiding or reducing the need for retuning to maintain the desired performance. Based on the Hammerstein model, the system emulates a real plant that reproduces the changes in the pH neutralization process. The controller performance is evaluated by overshoot, settling time, the Integral of the Absolute Error (IAE) and Integral of the Time-weighted Absolute Error (ITAE) indices, and a metric that takes into account both the error information and the control signal. The Fuzzy-PI controller is compared with the PI and gain-scheduled PI controllers previously used in the test plant, whose results can be found in the literature.
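The IAE and ITAE indices mentioned above can be sketched for a sampled error signal; the signal and sampling period below are hypothetical, not taken from the thesis.

```python
# Sketch: IAE and ITAE performance indices for a sampled error
# signal e(t); dt is the sampling period (hypothetical values).

def iae(errors, dt):
    """Integral of the Absolute Error, approximated by a Riemann sum."""
    return sum(abs(e) for e in errors) * dt

def itae(errors, dt):
    """Integral of the Time-weighted Absolute Error: later errors
    are penalized proportionally to the elapsed time k*dt."""
    return sum(k * dt * abs(e) for k, e in enumerate(errors)) * dt

# Hypothetical decaying error response of a pH control loop
errors = [1.0, 0.5, 0.25, 0.125, 0.0625]
dt = 0.1
print(iae(errors, dt))   # 0.19375
print(itae(errors, dt))  # 0.01625
```

A fast-settling loop keeps both indices small; ITAE additionally rewards errors that vanish early.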

Relevance:

10.00%

Publisher:

Abstract:

Recent years have seen increasing acceptance and adoption of parallel processing, both for high-performance scientific computing and for general-purpose applications. This acceptance has been favored mainly by the development of massively parallel processing (MPP) environments and of distributed computing. A common point between distributed systems and MPP architectures is the notion of message passing, which allows communication between processes. A message-passing environment consists basically of a communication library that acts as an extension of the programming languages used to write parallel applications, such as C, C++, and Fortran. A fundamental aspect of developing parallel applications is their performance analysis. Several metrics can be used in this analysis: execution time, efficiency in the use of the processing elements, and scalability of the application with respect to the number of processors or to the size of the problem instance. Establishing models or mechanisms for this analysis can be quite complicated, given the parameters and degrees of freedom involved in implementing a parallel application. One alternative has been the use of tools for collecting and visualizing performance data, which allow the user to identify bottlenecks and sources of inefficiency in an application. Efficient visualization requires identifying and collecting data about the execution of the application, a stage called instrumentation.
This work presents, initially, a study of the main techniques used to collect performance data, followed by a detailed analysis of the main available tools that can be used on Beowulf-type parallel clusters running Linux on the x86 platform, with communication libraries based on MPI (Message Passing Interface) such as LAM and MPICH. This analysis is validated on parallel applications that train perceptron-type neural networks using backpropagation. The conclusions show the potential and ease of use of the analyzed tools.
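The execution-time and efficiency metrics named above reduce to simple ratios of measured wall-clock times; the times below are hypothetical, not measurements from the thesis.

```python
# Sketch: classic parallel performance metrics computed from
# measured run times (hypothetical values for a cluster run).

def speedup(t_serial, t_parallel):
    """How many times faster the parallel run is than the serial one."""
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, n_procs):
    """Fraction of ideal linear speedup actually achieved."""
    return speedup(t_serial, t_parallel) / n_procs

# Hypothetical wall-clock times (seconds) for 1 vs. 8 processes
t1, t8 = 120.0, 20.0
print(speedup(t1, t8))        # 6.0
print(efficiency(t1, t8, 8))  # 0.75
```

Scalability studies repeat this computation while growing the processor count or the problem size.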

Relevance:

10.00%

Publisher:

Abstract:

Bayesian networks are powerful tools, as they represent probability distributions as graphs and can handle the uncertainties of real systems. Since the last decade there has been special interest in learning network structures from data. However, learning the best network structure is an NP-hard problem, so many heuristic algorithms have been created to generate network structures from data. Many of these algorithms use score metrics to generate the network model. This thesis compares three of the most used score metrics. The K2 algorithm and two standard benchmarks, ASIA and ALARM, were used to carry out the comparison. Results show that score metrics whose hyperparameters strengthen the tendency to select simpler network structures perform better than those with a weaker such tendency, for both the Heckerman-Geiger and modified MDL metrics. The Heckerman-Geiger Bayesian score metric works better than MDL with large datasets, and MDL works better than Heckerman-Geiger with small datasets. The modified MDL, which has a stronger tendency to select simpler network structures, gives results similar to Heckerman-Geiger for large datasets and close to MDL for small datasets.
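To illustrate how an MDL-style score trades data fit against structural complexity, here is a minimal sketch for scoring one node of a discrete Bayesian network given a candidate parent set. This is an illustration of the penalty mechanism, not the exact formulations compared in the thesis; the dataset is hypothetical.

```python
import math
from collections import Counter

# Sketch: MDL-style score for one node given a candidate parent set,
# on complete discrete data. The log N penalty favors structures
# with fewer free parameters (i.e., simpler parent sets).

def mdl_score(data, child, parents, arities):
    n = len(data)
    counts = Counter()         # (parent_config, child_value) -> count
    parent_counts = Counter()  # parent_config -> count
    for row in data:
        cfg = tuple(row[p] for p in parents)
        counts[(cfg, row[child])] += 1
        parent_counts[cfg] += 1
    loglik = sum(c * math.log(c / parent_counts[cfg])
                 for (cfg, _), c in counts.items())
    q = 1
    for p in parents:
        q *= arities[p]
    # One free parameter per parent config per non-reference state
    n_params = q * (arities[child] - 1)
    return loglik - 0.5 * math.log(n) * n_params

# Tiny hypothetical dataset: two binary variables, weakly correlated
data = [(0, 0), (0, 0), (1, 1), (1, 1), (0, 1), (1, 0)]
arities = {0: 2, 1: 2}
print(mdl_score(data, child=1, parents=[0], arities=arities))
print(mdl_score(data, child=1, parents=[], arities=arities))
```

On this weak-evidence dataset the empty parent set scores higher: the penalty term outweighs the small gain in likelihood, exactly the bias toward simpler structures discussed above.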

Relevance:

10.00%

Publisher:

Abstract:

The manufacture of prostheses for lower-limb amputees (transfemoral and transtibial) requires producing a socket with an appropriate, custom fit to the profile of each patient. The traditional process, mainly in public hospitals in Brazil, begins by completing a form identifying equipment types, accessories, measurements, amputation levels, etc. Currently, this work is carried out manually, using a common tape measure and a wooden caliper to take the measurements of the stump, characterizing a very rudimentary process and leaving a high degree of uncertainty in the geometry of the final product. To address this problem, it was necessary to act in two simultaneous and correlated directions. First, an integrated 3D CAD visualization tool for transfemoral and transtibial prostheses, called OrtoCAD I, was developed. At the same time, it was necessary to design and build a mechanical reading device (a kind of simplified three-dimensional scanner) able to obtain, automatically and accurately, the geometric information of either the stump or the healthy leg. The methodology includes applying reverse-engineering concepts to computationally generate the representation of the stump and/or the mirror image of the healthy limb. The materials used in the manufacture of prostheses do not always obey technical-scientific criteria: while they may meet strength requirements, they bring serious problems, mainly due to excess weight, causing the user various disorders due to lack of conformity. That problem was addressed by creating a hybrid composite material for the manufacture of prosthesis sockets. Using the Reader Fitter and OrtoCAD, the new composite material, which combines the mechanical properties of strength and stiffness with important parameters such as low weight and low cost, can be specified in the best way.
Besides, it reduces steps in the current manufacturing processes, or even makes new industrial processes feasible for obtaining the prostheses. In this sense, hybridizing the composite by combining natural and synthetic fibers can be a viable solution to the challenges described above.

Relevance:

10.00%

Publisher:

Abstract:

The pegmatite rocks of Rio Grande do Norte are responsible for much of the production of industrial minerals such as quartz and feldspar. Quartz and feldspar from pegmatites may occur in pockets of metric to centimetric dimensions or as millimetric to sub-millimetric intergrowths. The correct physical liberation of the mineral of interest, in the case of intergrowths, requires an appropriate particle size, achieved by size-reduction operations. Flotation is the mineral-processing method with high efficiency in recovering fine particles. The main purpose of the present study is to evaluate the recovery of quartz and potassium feldspar by dissolved air flotation (DAF), using a cationic diamine and a quaternary ammonium salt as collectors. The tests were performed following a 2⁴ central composite design, by which the influence of the process variables was statistically verified: concentrations of the quaternary ammonium salt and diamine collectors, pH, and conditioning time. The flotation efficiency was calculated from the removal of turbidity of the solution. Maximum flotation efficiencies (60%) were found, in the level curves, under conditions of low collector concentrations (1.0 × 10⁻⁵ mol·L⁻¹). These high flotation efficiencies were obtained when operating at pH 4 to 8 with conditioning times ranging from 3 to 5 minutes. Thus, the results showed that the process variables play important roles in the dissolved air flotation process concerning the floatability of the minerals.
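The efficiency measure described above, flotation efficiency computed from turbidity removal, is a simple percentage; the turbidity readings below are hypothetical, not values from the study.

```python
# Sketch: flotation efficiency from turbidity removal, as the
# percentage of initial turbidity eliminated by the DAF process.

def flotation_efficiency(turbidity_initial, turbidity_final):
    """Percentage of turbidity removed from the solution."""
    return 100.0 * (turbidity_initial - turbidity_final) / turbidity_initial

# Hypothetical turbidity readings (NTU) before and after flotation
print(flotation_efficiency(250.0, 100.0))  # 60.0
```

A reading of 60.0 here matches the maximum efficiency level reported in the abstract.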

Relevance:

10.00%

Publisher:

Abstract:

Recent astronomical observations (involving type Ia supernovae, cosmic background radiation anisotropy, and galaxy cluster probes) have provided strong evidence that the observed universe is described by an accelerating, flat model whose space-time properties can be represented by the Friedmann-Robertson-Walker (FRW) metric. However, the nature of the substance or mechanism behind the current cosmic acceleration remains unknown, and its determination constitutes a challenging problem for modern cosmology. In the general relativistic description, an accelerating regime is usually obtained by assuming the existence of an exotic energy component endowed with negative pressure, called dark energy, usually represented by a cosmological constant Λ associated with the vacuum energy density. All observational data available so far are in good agreement with the concordance ΛCDM model. Nevertheless, such models are plagued with several problems, thereby inspiring many authors to propose alternative candidates in the relativistic context. In this thesis, a new kind of accelerating flat model with no dark energy, fully dominated by cold dark matter (CDM), is proposed. The number of CDM particles is not conserved, and the present accelerating stage is a consequence of the negative pressure describing the irreversible process of gravitational particle creation. In order to have a transition from a decelerating to an accelerating regime at low redshifts, the matter creation rate proposed here depends on two parameters (γ and β): the first identifies a constant term of the order of H₀ and the second describes a time variation proportional to the Hubble parameter H(t). In this scenario, H₀ does not need to be small in order to solve the age problem, and the transition happens even if there is no matter creation during the radiation era and part of the matter-dominated phase (when the β term is negligible).
As in flat ΛCDM scenarios, the dimming of distant type Ia supernovae can be fitted with just one free parameter, and the coincidence problem plaguing the models driven by the cosmological constant is absent. The limits associated with the existence of the quasar APM 08279+5255, located at z = 3.91 and with an estimated age between 2 and 3 Gyr, are also investigated. In the simplest case (β = 0), the model is compatible with the existence of the quasar for γ > 0.56 if the age of the quasar is 2.0 Gyr; for 3 Gyr the derived limit is γ > 0.72. New limits for the formation redshift of the quasar are also established.
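The two-parameter creation rate described above, with a constant term of order H₀ and a term proportional to H(t), can be written schematically as follows; the notation is an assumption for illustration and the exact form used in the thesis may differ.

```latex
% Schematic matter creation rate (assumed notation):
\Gamma = 3\gamma H_0 + 3\beta H(t),
% with the creation pressure commonly associated with
% gravitational particle production:
p_c = -\frac{\rho\,\Gamma}{3H}.
```

With β = 0 the rate is constant, which is the simplest case analyzed against the quasar age constraints above.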

Relevance:

10.00%

Publisher:

Abstract:

We present a nestedness index that measures the nestedness pattern of bipartite networks, a problem that arises in theoretical ecology. Our measure is derived using the sum of the distances of the occupied elements in the adjacency matrix of the network. This index directly quantifies the deviation of a given matrix from the nested pattern. In the simplest case, the distance of the matrix element a_{i,j} is d_{i,j} = i + j, the Manhattan distance; a generic distance is obtained as d_{i,j} = (i^ν + j^ν)^{1/ν}. The nestedness index is defined by η = 1 − τ, where τ is the temperature of the matrix. We construct the temperature index using two benchmarks: the distance of the completely nested matrix, which corresponds to temperature zero, and the distance of the average random matrix, which defines temperature one. We discuss an important feature of the problem, matrix occupancy, and address it using the metric index ν, which adjusts for matrix occupancy.
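The Manhattan-distance construction above can be sketched directly: sum i + j over occupied entries, interpolate between the nested and random benchmarks to get a temperature, and take one minus it. The details of the benchmarks (e.g. how the random average is taken) are simplified here, and the example matrix is hypothetical.

```python
# Sketch: distance-based nestedness of a 0/1 matrix, using the
# Manhattan distance d_ij = i + j of occupied elements (1-indexed).

def occupied_distance_sum(matrix):
    """Sum of Manhattan distances i + j over occupied (1) entries."""
    return sum(i + j
               for i, row in enumerate(matrix, start=1)
               for j, v in enumerate(row, start=1) if v)

def nestedness(matrix, d_nested, d_random):
    """tau = 0 for the perfectly nested benchmark, 1 for the random
    one; the nestedness index is 1 - tau."""
    d = occupied_distance_sum(matrix)
    tau = (d - d_nested) / (d_random - d_nested)
    return 1.0 - tau

# Hypothetical 3x3 matrix with 3 occupied cells. Benchmarks:
# fully nested packing (1,1),(1,2),(2,1) gives 8; the average of
# 3 random cells (mean i+j = 4 each) gives 12.
m = [[1, 1, 0],
     [0, 0, 0],
     [1, 0, 0]]
print(nestedness(m, d_nested=8, d_random=12))  # 0.75
```

A matrix packed toward the top-left corner scores 1; one matching the random benchmark scores 0.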

Relevance:

10.00%

Publisher:

Abstract:

In this thesis we investigate physical problems that present a high degree of complexity, using tools and models of statistical mechanics. We give special attention to systems with long-range interactions, such as one-dimensional long-range bond percolation, complex networks without a metric, and vehicular traffic. Flow through a linear chain (percolation) with bonds between first neighbors only happens if pc = 1, but when we consider long-range interactions the situation is completely different: the transition between the percolating and non-percolating phases happens for pc < 1. This kind of transition happens even when the system is diluted (dilution of sites). Some of these effects are investigated in this work, for example the extensivity of the system and the relation between critical properties and dilution. In particular, we show that dilution does not change the universality class of the system. In another study, we analyze the implications of using a power-law quality distribution for vertices in the growth dynamics of the network studied by Bianconi and Barabási, which incorporates into the preferential attachment the different ability (fitness) of the nodes to compete for links. Finally, we study vehicular traffic on road networks submitted to an increasing flux of cars, developing two models that enable the analysis of the total flux on each road, the flux leaving the system, and the behavior of the total number of congested roads.
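Long-range bond percolation on a chain can be sketched by drawing each bond with a probability that decays with distance and checking connectivity with union-find. The decay form p/r**alpha is a common parametrization assumed here for illustration; the thesis may use a different one.

```python
import random

# Sketch: one sample of long-range bond percolation on a chain of
# n sites. A bond between sites a distance r apart is present with
# probability min(1, p / r**alpha). Union-find tracks clusters.

def percolates(n, p, alpha, rng):
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < min(1.0, p / (j - i) ** alpha):
                parent[find(i)] = find(j)
    # "Percolating" here: first and last sites in the same cluster
    return find(0) == find(n - 1)

rng = random.Random(42)
samples = [percolates(50, 0.8, 1.5, rng) for _ in range(20)]
print(sum(samples) / len(samples))  # fraction of percolating samples
```

Sweeping p for fixed alpha and locating where this fraction jumps estimates pc, which the long-range interactions push below 1.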

Relevance:

10.00%

Publisher:

Abstract:

Stroke is a major cause of mortality and one of the main causes of disability among adults. The present study aims to analyze sleep patterns and the use of educational booklets in stroke patients. The first study addressed the factors associated with sleep/wake schedules, and the second analyzed patients' knowledge and practice of guidance on sleep habits and cognitive stimulation. In study 1, 50 patients were evaluated (28 men, aged between 25 and 90 years) who, over one week, completed a sleep diary and an activity record using the Social Rhythm Metric (SRM) and the Activity Level Index (ALI), together with a chronotype questionnaire (MEQ). Using Spearman's correlation test, significant correlations were found between sleep/wake times and chronotype, and between sleep/wake times and the SRM and ALI. In the second study, 40 patients were assessed (mean age 56.1 ± 11.9 years; 15 men and 25 women); the National Institutes of Health Stroke Scale (NIHSS) was applied, and the patients then examined educational booklets on sleep habits and cognitive stimulation, reporting whether they knew and practiced the guidance presented. Statistical analysis using Fisher's test showed that, of the 10 recommendations presented on sleep habits, 6 were cited as known and only 4 were practiced. For the 6 cognitive recommendations, there was no significant difference between those who knew them and those who did not, but for 5 of them the highest frequency was of patients who did not practice them. The results of the studies indicate the importance of evaluating chronotype before rehabilitation planning, and the need to stimulate social rhythm in order to contribute to improving patients' sleep patterns.
It was also verified, regarding knowledge and practice of the guidance presented, that many patients did not know or did not practice important recommendations about sleep habits and cognitive stimulation, even in the chronic phase of the disease, suggesting that more health-education policies should be implemented in order to change the lifestyle habits of stroke patients.

Relevance:

10.00%

Publisher:

Abstract:

Studies indicate that spinner dolphins use the Baía dos Golfinhos, in the Fernando de Noronha Archipelago, for resting, reproduction, parental care, and protection against shark attacks. The present study aimed to verify the seasonality of spinner dolphin newborns and calves in relation to the months of the year and the pluviometric seasons (dry and rainy), as well as their interaction with the number and gender of accompanying adults and their positioning in relation to the adults (vertical, horizontal, and depth) in the above-mentioned bay. The analyses were made from photographic records of dolphins collected between 2000 and 2006 (seasonality) and between 1995 and 2006 (interaction), both using the ad libitum sampling method during free dives. To determine the age category, the ratio between the smaller dolphin's total body length and the larger dolphin's total body length was calculated. The dolphins were then divided into three age groups: those with total body length ≥ 170 cm were considered adults, those up to 105 cm newborns, and those from 106 cm to 128 cm calves. In addition, the secondary characteristics described in the literature were used to identify newborns and calves. The number of spinner dolphin newborns was greater in the month of April and higher during the rainy season. Throughout the months and pluviometric seasons, the number of calves showed no significant difference, nor did the presence of the newborn and calf age groups at Baía dos Golfinhos. It was possible to identify the gender of 42 escorting adults, 95.24% being females and 4.76% males.
Newborns were more frequently seen in the company of two adults, whereas calves were more often accompanied by more than two adults. However, there was no significant difference for the newborns, whereas for the calves there was a significant difference between those classified as loners and those accompanied by more than two adults. In vertical positioning, newborns and calves were more frequently observed in the inferior position, with some differences between the two groups. In horizontal positioning, both age groups were more often seen in the posterior position, also with differences between them. In depth, newborns and calves were positioned anteriorly, with a significant difference for the calves. The peak of newborns in April may indicate a birth seasonality pattern at the beginning of the rainy season, with births scattered throughout the year. The results for the positioning and escorting of newborns and calves are related to protection and suckling. These conditions reinforce the importance of the area for the care of offspring, which calls for the creation of conservation rules for the area, especially during the months with greater occurrence of newborns.
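The length thresholds used above define a simple classification rule; the function name and the handling of lengths falling between the published thresholds are assumptions for illustration.

```python
# Sketch: length-based age classification of spinner dolphins,
# using the total-body-length thresholds reported in the study.

def age_group(total_length_cm):
    if total_length_cm <= 105:
        return "newborn"
    if 106 <= total_length_cm <= 128:
        return "calf"
    if total_length_cm >= 170:
        return "adult"
    return "unclassified"  # lengths between the published thresholds

print(age_group(100))  # newborn
print(age_group(120))  # calf
print(age_group(180))  # adult
```

In the study, secondary characteristics from the literature supplemented this rule when classifying newborns and calves.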

Relevance:

10.00%

Publisher:

Abstract:

In this dissertation we present some generalizations of the concept of distance that use more general value spaces, such as fuzzy metrics, probabilistic metrics, and generalized metrics. We show how such generalizations may be useful, given that the distance between two objects can then carry more information about the objects than when the distance is represented by a single real number. We also propose another generalization of distance, which encompasses the notion of interval metric and generates a topology in a natural way. Several properties of this generalization are investigated, as well as its links with the other existing generalizations.
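To illustrate the idea of a distance carrying more information than a single real number, here is a sketch of an interval-valued distance between closed real intervals. This construction is an illustrative example, not the specific interval metric proposed in the dissertation.

```python
# Sketch: an interval-valued "distance" between closed intervals
# a = (a1, a2) and b = (b1, b2): the interval of all pointwise
# distances [min |x - y|, max |x - y|] for x in a, y in b.

def interval_distance(a, b):
    (a1, a2), (b1, b2) = a, b
    if a2 < b1:        # disjoint, a entirely left of b
        lo = b1 - a2
    elif b2 < a1:      # disjoint, b entirely left of a
        lo = a1 - b2
    else:              # overlapping intervals can touch: minimum 0
        lo = 0.0
    hi = max(abs(a1 - b2), abs(a2 - b1))
    return (lo, hi)

print(interval_distance((0, 1), (3, 5)))  # (2, 5)
print(interval_distance((0, 2), (1, 4)))  # (0.0, 4)
```

The returned pair records both the closest and the farthest separation, information a single real-valued distance discards.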

Relevance:

10.00%

Publisher:

Abstract:

This work proposes a model-based approach for pointcut management in the presence of evolution in aspect-oriented systems. The proposed approach, called conceptual-views-based pointcuts, is motivated by the shortcomings of traditional approaches to pointcut definition, which generally refer directly to software structure and/or behavior, thereby creating a strong coupling between pointcut definitions and the base code. This coupling causes the so-called pointcut fragility problem and hinders the evolution of aspect-oriented systems: whenever the software changes or evolves, all the pointcuts of each aspect must be reviewed to ensure that they remain valid. Our approach focuses on defining pointcuts over a conceptual model, which describes the system's structure at a more abstract level. The conceptual model consists of classifications (called conceptual views) of the business model elements based on common characteristics, and of relationships between these views. Pointcut definitions are thus created over the conceptual model rather than by directly referencing the base model. Moreover, the conceptual model contains a set of relationships that allows one to automatically verify whether the classifications in the conceptual model remain valid after a software change. To this end, all development using the conceptual-views-based pointcuts approach is supported by a conceptual framework called CrossMDA2 and a development process based on MDA, both also proposed in this work. As proof of concept, we present two versions of a case study, setting up an evolution scenario that shows how the use of conceptual-views-based pointcuts helps in detecting and minimizing pointcut fragility. For the evaluation, the Goal/Question/Metric (GQM) technique is used together with metrics for analyzing efficiency in pointcut definition.

Relevance:

10.00%

Publisher:

Abstract:

In this dissertation, after a brief review of Einstein's General Relativity theory and its application to the Friedmann-Lemaître-Robertson-Walker (FLRW) cosmological models, we present and discuss the alternative theories of gravity dubbed f(R) gravity. These theories come about when one substitutes, in the Einstein-Hilbert action, the Ricci curvature R by some well-behaved nonlinear function f(R). They provide an alternative way to explain the current cosmic acceleration without invoking either a dark energy component or the existence of extra spatial dimensions. In dealing with f(R) gravity, two different variational approaches may be followed, namely the metric and the Palatini formalisms, which lead to very different equations of motion. We briefly describe the metric formalism and then concentrate on the Palatini variational approach to the gravity action. We make a systematic and detailed derivation of the field equations for Palatini f(R) gravity, which generalize Einstein's equations of General Relativity, and also obtain the generalized Friedmann equations, which can be used for cosmological tests. As an example, using recent compilations of type Ia supernova observations, we show how the f(R) = R − α/Rⁿ class of gravity theories explains the recently observed acceleration of the universe by placing reasonable constraints on the free parameters α and n. We also examine the question as to whether Palatini f(R) gravity theories permit space-times in which causality, a fundamental issue in any physical theory [22], is violated. As is well known, in General Relativity there are solutions to the field equations that have causal anomalies in the form of closed time-like curves, the renowned Gödel model being the best-known example of such a solution.
Here we show that every perfect-fluid Gödel-type solution of Palatini f(R) gravity with density ρ and pressure p satisfying the weak energy condition ρ + p ≥ 0 is necessarily isometric to the Gödel geometry, demonstrating, therefore, that these theories present causal anomalies in the form of closed time-like curves. This result extends a theorem on Gödel-type models to the framework of Palatini f(R) gravity theory. We derive an expression for a critical radius rc (beyond which causality is violated) for an arbitrary Palatini f(R) theory; the expression makes apparent that the violation of causality depends on the form of f(R) and on the matter content. We concretely examine Gödel-type perfect-fluid solutions in the f(R) = R − α/Rⁿ class of Palatini gravity theories, and show that, for positive matter density and for α and n in the range permitted by the observations, these theories do not admit the Gödel geometry as a perfect-fluid solution of their field equations. In this sense, f(R) gravity theory remedies the causal pathology, in the form of closed time-like curves, which is allowed in General Relativity. We also examine the violation of causality of Gödel type by considering a single scalar field as the matter content. For this source, we show that Palatini f(R) gravity gives rise to a unique Gödel-type solution with no violation of causality. Finally, we show that by combining a perfect fluid and a scalar field as sources of Gödel-type geometries, we obtain both solutions with closed time-like curves and solutions with no violation of causality.
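For reference, the Palatini f(R) field equations alluded to above take the following schematic form found in the literature, with the Ricci tensor built from the independent connection Γ and κ the gravitational coupling; this is a standard statement, not the dissertation's own derivation.

```latex
% Palatini f(R) field equations (schematic, standard form):
f'(\mathcal{R})\,\mathcal{R}_{\mu\nu}(\Gamma)
  - \tfrac{1}{2} f(\mathcal{R})\, g_{\mu\nu}
  = \kappa\, T_{\mu\nu},
% together with the equation determining the independent connection:
\nabla_{\alpha}\!\left(\sqrt{-g}\; f'(\mathcal{R})\, g^{\mu\nu}\right) = 0 .
```

Setting f(R) = R recovers General Relativity, since the connection equation then forces Γ to be the Levi-Civita connection of the metric.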

Relevance:

10.00%

Publisher:

Abstract:

Currently, there is much interest in the scientific community, across various areas of knowledge, in large-scale systems with a high degree of complexity: the Internet, protein interactions, and film actor collaborations, among others. To better understand the behavior of interconnected systems, several models in the area of complex networks have been proposed. Barabási and Albert proposed a model in which connections between the constituents of the system are established dynamically and favor older sites, reproducing a behavior characteristic of some real systems: a scale-invariant connectivity distribution. However, this model neglects two factors, among others, observed in real systems: homophily and metrics. Given the importance of these two terms in the global behavior of networks, we propose in this dissertation a dynamic model of preferential attachment based on three essential factors responsible for the competition for links: (i) connectivity (more connected sites are privileged in the choice of links); (ii) homophily (connections between similar sites are more attractive); (iii) metric (a link is favored by the proximity of the sites). Within this proposal, we analyze how the behavior of the connectivity distribution and the dynamic evolution of the network are affected by the metric (through a parameter A that controls the importance of distance in the preferential attachment) and by homophily (an intrinsic characteristic of each site). We find that as the importance of distance in the preferential attachment increases, connections between sites become local and the connectivity distribution is characterized by a typical scale. In parallel, we fit the connectivity distribution curves, for different values of A, to the equation P(k) = P0e
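One growth step of a preferential-attachment rule combining the three factors above can be sketched as follows; the multiplicative functional form k·η/r^A, the symbol η for the homophily/fitness factor, and the example values are assumptions for illustration, not the exact rule of the dissertation.

```python
import math

# Sketch: attachment probabilities for a site entering the network,
# combining connectivity k_i, a homophily/fitness factor eta_i in
# (0, 1], and the Euclidean distance r_i to each existing site,
# penalized as r_i**A. Positions are assumed distinct from new_pos.

def attach_probabilities(degrees, qualities, positions, new_pos, A):
    weights = []
    for k, eta, pos in zip(degrees, qualities, positions):
        r = math.dist(pos, new_pos)
        weights.append(k * eta / r ** A)
    total = sum(weights)
    return [w / total for w in weights]

# Hypothetical network of three sites
degrees = [4, 2, 2]
qualities = [0.9, 0.5, 0.8]            # homophily/fitness factors
positions = [(0, 0), (1, 0), (5, 5)]
probs = attach_probabilities(degrees, qualities, positions, (1, 1), A=2.0)
print(probs)  # the distant third site is strongly penalized
```

Raising A strengthens the distance penalty, producing the local connections and typical-scale connectivity distribution described above.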

Relevance:

10.00%

Publisher:

Abstract:

Geological and geophysical studies (resistivity, self-potential, and VLF) were undertaken at the Tararaca and Santa Rita farms, close to the Santo Antônio and Santa Cruz villages, respectively, in eastern Rio Grande do Norte State, NE Brazil. Their aim was to characterize water-accumulation structures in crystalline rocks. Based on geological and geophysical data, two models were characterized, the fracture-stream and the eluvio-alluvial trough, in part already described in the literature. At the Tararaca Farm, a water well was located in a NW-trending streamlet; surrounding outcrops display fractures with the same orientation. Apparent resistivity sections across the stream channel confirm fracturing at depth. The VLF profiles systematically display an alignment of equivalent current-density anomalies coinciding with the stream. Based on such data, the classical fracture-stream model seems to be well characterized at this place. At the Santa Rita Farm, a NE-trending stream displays a metric-thick eluvial-regolith-alluvial cover. The outcropping bedrock does not present fractures parallel to the stream direction, although the latter coincides with the trend of the gneiss foliation, which dips to the south. Geophysical data confirm the absence of a fracture zone at this place, but delineate the borders of a trough-shaped structure filled with sediments (alluvium and regolith). The southern border of this structure dips more steeply than the northern one. This water-accumulation structure corresponds to an alternative model with respect to the classical fracture-stream, here named the eluvio-alluvial trough. Its local controls are the drainage and relief, coupled with bedrock weathering preferentially following foliation planes, generating the asymmetry of the trough.