955 results for arguments by definition
Abstract:
Noise is, by definition, an unpleasant or unwanted sound that disturbs the environment, contributing to physical and psychological discomfort and potentially endangering human health. Situations of exposure to high sound levels in spaces for rest, leisure and work have multiplied, especially in urban and suburban areas. At the same time, the cultural development of citizens, who are increasingly aware of their right to quality of life, has raised comfort expectations, which are directly influenced by the quality of the buildings they inhabit. The publication of the General Noise Regulation (RGR, approved by Decree-Law no. 251/87 of 24 June) and of the Regulation on the Acoustic Requirements of Buildings (RRAE, approved by Decree-Law no. 129/02 of 11 May, as amended by Decree-Law no. 96/08 of 9 June) brought together a set of normative and legal provisions, establishing conditions for verifying the fundamental requirements associated with the acoustic comfort of buildings (urban integration, airborne sound insulation, impact sound insulation, noise exposure at work), thereby justifying the need for regulation, since human health is also related to acoustic comfort. This dissertation aims to contribute to the assessment and certification of the acoustic performance of residential buildings by defining the main aspects to be evaluated in this process. To that end, the current state of the art was reviewed and the methodologies used by the entities accredited for acoustic testing were surveyed, taking into account the existing standards and regulations, in order to obtain an overview of these acoustic comfort assessments and to verify their compliance with the regulatory requirements.
Abstract:
Thesis submitted in fulfilment of the requirements for the Degree of Master in Electronic and Telecommunications Engineering
Abstract:
Wind resource evaluation at two sites located in Portugal was performed using the mesoscale modelling system Weather Research and Forecasting (WRF) and the wind resource analysis tool commonly used within the wind power industry, the Wind Atlas Analysis and Application Program (WAsP) microscale model. Wind measurement campaigns were conducted at the selected sites, allowing a comparison between in situ measurements and simulated wind in terms of flow characteristics and energy yield estimates. Three different methodologies were tested, aiming to provide an overview of the benefits and limitations of these methodologies for wind resource estimation. In the first methodology the mesoscale model acts as a "virtual" wind measuring station: wind data were computed by WRF for both sites and inserted directly as input in WAsP. In the second approach the same procedure was followed, but the terrain influences induced by the mesoscale model's low-resolution terrain data were removed from the simulated wind data. In the third methodology the simulated wind data were extracted at the top of the planetary boundary layer for both sites, aiming to assess whether the use of geostrophic winds (which, by definition, are not influenced by the local terrain) can bring any improvement in the models' performance. The results obtained for the abovementioned methodologies were compared with those resulting from in situ measurements in terms of mean wind speed, Weibull probability density function parameters and production estimates, considering the installation of one wind turbine at each site. Results showed that the second tested approach is the one that produces values closest to the measured ones, and fairly acceptable deviations were found using this coupling technique in terms of estimated annual production. However, mesoscale output should not be used directly in wind farm siting projects, mainly due to the poor resolution of the mesoscale model terrain data. Instead, the use of mesoscale output in microscale models should be seen as a valid alternative to in situ data, mainly for preliminary wind resource assessments, although the application of mesoscale and microscale coupling in areas with complex topography should be done with extreme caution.
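The comparison described above, in terms of mean wind speed, Weibull parameters and production estimates, can be sketched in a few lines of code. The following is a minimal illustration, not the study's actual procedure: the power curve, the synthetic wind series and all variable names are assumptions made here for demonstration only.

```python
# Sketch: compare a measured and a simulated wind-speed series through a
# 2-parameter Weibull fit and a rough annual energy estimate.
# All inputs below are illustrative placeholders.
import numpy as np
from scipy import stats

def weibull_params(speeds):
    """Fit a 2-parameter Weibull (shape k, scale A) with location fixed at 0."""
    k, _, A = stats.weibull_min.fit(speeds, floc=0.0)
    return k, A

def annual_energy_mwh(speeds, curve_ws, curve_kw):
    """Rough annual yield: mean turbine power (kW) times 8760 h, in MWh."""
    power = np.interp(speeds, curve_ws, curve_kw)   # kW for each wind sample
    return power.mean() * 8760.0 / 1000.0

rng = np.random.default_rng(0)
measured = stats.weibull_min.rvs(2.0, scale=7.5, size=8760, random_state=rng)
simulated = stats.weibull_min.rvs(2.2, scale=7.0, size=8760, random_state=rng)
curve_ws = np.array([0.0, 3.0, 6.0, 9.0, 12.0, 15.0, 25.0])          # m/s
curve_kw = np.array([0.0, 0.0, 400.0, 1500.0, 2000.0, 2000.0, 0.0])  # kW

for name, series in (("measured", measured), ("simulated", simulated)):
    k, A = weibull_params(series)
    aep = annual_energy_mwh(series, curve_ws, curve_kw)
    print(f"{name}: mean = {series.mean():.2f} m/s, k = {k:.2f}, A = {A:.2f}, AEP ~ {aep:.0f} MWh")
```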
Abstract:
Energy Management and Sustainability are the pillars of this dissertation, carried out in the context of electrical energy in an industrial environment. Here, "energy management" should be understood as the planning of operations in production and consumption units, with the goals of conserving resources, protecting the climate and reducing costs, so that permanent access to the energy we need remains possible. "Sustainability", by definition, is a characteristic or condition of a process or system that allows it to persist, at a certain level, for a certain period. The textile industry of the Vale do Ave region, being extensive and very heterogeneous in quality, size and the resources it uses, presented itself as an opportunity to apply the knowledge acquired in the various courses of the Master's degree in Electrotechnical Engineering - Electrical Power Systems, and as a vehicle for attempting to optimise and rationalise the consumption of electrical energy in the sector. The model found to respond satisfactorily to the initial proposal was the energy audit. Visits were made to industrial facilities to survey and collect information, which was later analysed and used to propose a set of solutions aimed at the energy efficiency of the sector. As the visits progressed, the need arose to create a model to guide the audit: something like a notebook, but specific, so that the information collected would not be lost. An application for iPad/iPhone was therefore developed, which also allows the collection and storage of photographs, which play a fundamental role here. It should be noted that expectations were met, since a significant number of companies were found to need this contribution. Acting to "combat" unnecessary consumption and waste is an active and positive contribution to the development of energy sustainability.
Abstract:
INTRODUCTION: The prevalence of and risk factors for rifampin, isoniazid and pyrazinamide hepatotoxicity were evaluated in HIV-infected subjects and controls. METHODS: Patients with tuberculosis (30 HIV positive and 132 HIV negative), aged between 18 and 80 years, admitted to hospital in Brazil from 2005 to 2007, were selected for this investigation. Three definitions of hepatotoxicity were used: I) a 3-fold increase in the lower limit of normal for alanine aminotransferase (ALT); II) a 3-fold increase in the upper limit of normal (ULN) for ALT; and III) a 3-fold increase in the ULN for ALT plus a 2-fold increase in the ULN of total bilirubin. RESULTS: In the groups with and without HIV infection the frequency of hepatotoxicity I was 77% and 46%, respectively (p < 0.01). Using the hepatotoxicity II and III definitions, no difference was observed in the occurrence of antituberculosis drug-induced hepatitis. Of the 17 patients with hepatotoxicity by definition III, 3 presented no side effects and treatment was well tolerated. In 8 (36.4%) out of 22, symptoms emerged and treatment was suspended. Alcohol abuse was related to hepatotoxicity only for definition I. CONCLUSIONS: Depending on the definition of drug-induced hepatitis, HIV infection may or may not be associated with hepatotoxicity. The impact that minor alterations in the definition had on the results was striking. No death was related to drug-induced hepatotoxicity. The emergence of new symptoms after initiating antituberculosis therapy could not be attributed to hepatotoxicity in over one third of the cases.
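The three case definitions above are simple threshold rules; a minimal sketch of how they could be encoded is shown below. The reference ranges (LLN/ULN) used here are illustrative placeholders, not values reported by the study.

```python
# Sketch of the three hepatotoxicity definitions quoted in the abstract.
# ALT_LLN, ALT_ULN and BILI_ULN are illustrative reference limits only.
from dataclasses import dataclass

ALT_LLN, ALT_ULN = 7.0, 40.0   # U/L, assumed reference range for ALT
BILI_ULN = 1.2                 # mg/dL, assumed upper limit for total bilirubin

@dataclass
class LiverPanel:
    alt: float              # alanine aminotransferase, U/L
    total_bilirubin: float  # mg/dL

def hepatotoxicity_I(p: LiverPanel) -> bool:
    return p.alt >= 3 * ALT_LLN                      # 3-fold the LLN for ALT

def hepatotoxicity_II(p: LiverPanel) -> bool:
    return p.alt >= 3 * ALT_ULN                      # 3-fold the ULN for ALT

def hepatotoxicity_III(p: LiverPanel) -> bool:
    return p.alt >= 3 * ALT_ULN and p.total_bilirubin >= 2 * BILI_ULN

panel = LiverPanel(alt=130.0, total_bilirubin=1.0)
print(hepatotoxicity_I(panel), hepatotoxicity_II(panel), hepatotoxicity_III(panel))
```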
Abstract:
The concept of rare diseases as a condition in its own right began to be disseminated in the medical community at the beginning of this century. The prospect of bringing together multiple pathologies with different features, emphasising the low frequency with which they occur in the population, attracted the interest of the scientific community, families, industry and health care services, in the hope of finding strategies to improve the quality of care provided to these patients. Since the scientific information on rare diseases is spread across several sources, the first major challenge was to systematise it in order to obtain the "state of the art". The research took place between 2001 and 2010 and had as its main objective the characterisation of patients and rare diseases in a population with specific but not closed features, as is the case of São Miguel Island in the Azores. Over 10 years, 467 patients were identified from multiple sources and newborns with rare diseases were monitored. The prevalence of rare diseases found on São Miguel Island was 0.34%, compared with the 6% to 8% of the population implied by the definition of rare disease in the European Union. This discrepancy may be explained by an overestimation of the true prevalence of rare diseases in the European Union. The incidence of rare diseases in the sample was 0.1% and the cause-specific mortality rate was 0.14‰. The diagnosis was confirmed by cytogenetic or molecular genetic techniques in 43% of the patients in the sample. No population cluster with a rare disease was identified beyond the one already known for Machado-Joseph disease. The study methodology required the construction of a patient registry, built on knowledge previously acquired about a rare disease that served as a paradigm: Machado-Joseph disease. Following the results obtained, it was considered useful to introduce variables such as the caregiver, the spouse, the couple's number of children, the date of the first Genetics appointment, the time elapsed between the onset of symptoms and access to the Genetics appointment, and between that appointment and the availability of the diagnosis, in order to better understand the life context of these patients, with a view to incorporating them as indicators.
Abstract:
The objective of this master thesis is to evaluate the impact of CSR measures on the financial performance of the European pharmaceutical industry. By definition, CSR measures are quantified as corporate social disclosure, based on the published CSR keywords in the annual reports of the selected companies over four fiscal years (2010-2013). The financial performance of the companies was measured as return on assets (ROA) and Tobin's Q. In order to test the hypotheses developed, a multivariate regression was performed. The results obtained show no significant impact on the financial performance of a company, either in the short term or in the long term. Moreover, by comparison with other studies, it was possible to conclude that financial performance is affected differently in different industries.
Abstract:
BACKGROUND: Lean Production Systems (LPS) have become very popular among manufacturing industries, services and large commercial areas. An LPS must develop and consider a set of work features to ensure compatibility with workplace ergonomics, namely at the level of muscular, cognitive and emotional demands. OBJECTIVE: Identify the most relevant impacts of the adoption of LPS from the ergonomics point of view and summarize some possible drawbacks for workplace ergonomics due to a flawed application of the LPS. The impacts identified fall into four dimensions: work pace, intensity and load; worker motivation, satisfaction and stress; autonomy and participation; and health outcomes. This paper also discusses the influence that the work organization model has on workplace ergonomics and on the waste elimination foreseen by LPS. METHODS: A literature review focused on LPS and its impact on occupational ergonomics conditions, as well as on the health and safety of workers. The main focus of this research is on LPS implementations in industrial environments, mainly in manufacturing industry workplaces. This is followed by a discussion including the authors' experience (and previous research). RESULTS: From the reviewed literature there seems to be no consensus on how Lean principles affect workplace ergonomics, since most authors found both positive (advantages) and negative (disadvantages) impacts. CONCLUSIONS: The negative impacts or disadvantages of the LPS implementations reviewed may result from a misunderstanding of the Lean principles. Possibly, they also happen due to partial Lean implementations (when only one or two tools were implemented) that may be effective in a specific work context but not suitable for all possible situations, since the principles of LPS should not, by definition, lead to any of the reported drawbacks in terms of workplace ergonomics.
Abstract:
Security risk management is, by definition, a subjective and complex exercise, and it takes time to perform properly. Human resources are fundamental assets for any organization, and like any other asset they have inherent vulnerabilities that need to be handled, i.e. managed and assessed. However, the nature of human behavior and the organizational environment in which people work make these tasks extremely difficult, hard to accomplish and prone to errors. Treating security as a cost, organizations usually focus on the efficiency of the security mechanisms implemented to protect against external attacks, disregarding insider risks, which are much more difficult to assess. All this demands an interdisciplinary approach that combines technical solutions with insights from psychology in order to understand the organizational staff and detect changes in their behavior and characteristics. This paper discusses some methodological challenges in evaluating insider threats and their impacts, and in integrating them into a security risk framework, defined according to the security standard ISO/IEC_JTC1, to support the security risk management process.
Abstract:
Double outlet right ventricle (DORV) is a heterogeneous group of abnormal ventriculoarterial connections in which, by definition, both great arteries (pulmonary artery and aorta) arise primarily from the morphologically right ventricle. This condition affects 1-1.5% of patients with congenital heart disease, with a frequency of 1 in every 10,000 live births. We report the case of an 18-day-old infant with DORV and extremely rare anatomical features, such as an anterior and left-sided aorta and a subpulmonary ventricular septal defect (VSD). In addition to the anatomical features, the role of echocardiography in guiding the diagnosis and the surgical treatment of this congenital heart disease is discussed.
Abstract:
Higher-dimensional automata constitute a very expressive model for concurrent systems. In this paper, we discuss "topological abstraction" of higher-dimensional automata, i.e., the replacement of HDAs by smaller ones that can be considered equivalent from the point of view of both computer science and topology. By definition, topological abstraction preserves the homotopy type, the trace category, and the homology graph of an HDA. We establish conditions under which cube collapses yield topological abstractions of HDAs.
Abstract:
The main object of the present paper is to give formulas and methods which enable us to determine the minimum number of repetitions or of individuals necessary to guarantee, to some extent, the success of an experiment. The theoretical basis of all the processes consists essentially in the following. Knowing the frequency of the desired events p and of the undesired events q, we may calculate the frequency of all possible combinations to be expected in n repetitions by expanding the binomial (p + q)^n. Determining which of these combinations we want to avoid, we calculate their total frequency and select the value of the exponent n of the binomial in such a way that this total frequency is equal to or smaller than the accepted limit of precision:

n! p^n [ (1/n!)(q/p)^n + (1/(1!(n-1)!))(q/p)^(n-1) + (1/(2!(n-2)!))(q/p)^(n-2) + (1/(3!(n-3)!))(q/p)^(n-3) + ... ] <= P_lim  (1b)

There does not exist an absolute limit of precision, since its value depends not only upon psychological factors in our judgement but is at the same time a function of the number of repetitions. For this reason I have proposed (1, 56) two relative values, one equal to 1-5n as the lowest value of probability and the other equal to 1-10n as the highest value of improbability, leaving between them what may be called the "region of doubt". However, these formulas cannot be applied in our case, since this number n is precisely the unknown quantity. Thus we have to use, instead of the more exact values of these two formulas, the conventional limits of P_lim equal to 0.05 (precision 5%), 0.01 (precision 1%) and 0.001 (precision 0.1%). The binomial formula as explained above (cf. formula 1b), however, is of rather limited applicability owing to the excessive calculation required, and we thus have to use approximations as substitutes. We may use, without loss of precision, the following approximations: a) the normal or Gaussian distribution when the expected frequency p has any value between 0.1 and 0.9 and when n is at least greater than ten; b) the Poisson distribution when the expected frequency p is smaller than 0.1. Tables V to VII show for some special cases that these approximations are very satisfactory. The practical solution of the problems stated in the introduction can now be given.

A) What is the minimum number of repetitions necessary in order to avoid that any one of a treatments, varieties, etc. may accidentally always be the best, the best and second best, the first, second and third best, or finally one of the m best treatments, varieties, etc.? Using the first term of the binomial we have the following equation for n:

n = log P_lim / log(m/a) = log P_lim / (log m - log a)  (5)

B) What is the minimum number of individuals necessary in order that a certain type, expected with the frequency p, may appear in at least one, two, three or a = m + 1 individuals? 1) For p between 0.1 and 0.9, using the Gaussian approximation, we have:

b = δ √((1-p)/p),  c = m/p  (7)
√n = [b + √(b² + 4c)] / 2,  n' = 1/p,  n(cor) = n + n'  (8)

We have to use the correction n' when p has a value between 0.25 and 0.75. The Greek letter delta represents here the unilateral limits of the Gaussian distribution for the three conventional limits of precision: 1.64, 2.33 and 3.09 respectively. If we are only interested in having at least one individual, m becomes equal to zero (a = 1), c = 0, and the formula reduces to:

n = b² = δ² (1-p)/p,  n' = 1/p,  n(cor) = n + n'  (9)

2) If p is smaller than 0.1 we may use table 1 in order to find the mean m of a Poisson distribution and determine n = m/p.

C) What is the minimum number of individuals necessary for distinguishing two frequencies p1 and p2? 1) When p1 and p2 are values between 0.1 and 0.9 we have:

n = { δ [√(p1(1-p1)) + √(p2(1-p2))] / (p1 - p2) }²,  n' = 1/(p1 - p2),  n(cor) = n + n'  (13)

We have again to use the unilateral limits of the Gaussian distribution. The correction n' should be used if at least one of the values p1 or p2 lies between 0.25 and 0.75. A more elaborate formula may be used in cases where we want to increase the precision:

b = δ [√(p1(1-p1)) + √(p2(1-p2))] / (p1 - p2),  c = m/(p1 - p2),  √n = [b + √(b² + 4c)] / 2,  n' = 1/(p1 - p2)  (14)

2) When both p1 and p2 are smaller than 0.1 we determine the quotient p1/p2 and find the corresponding number m2 of a Poisson distribution in table 2. The value of n is then found by the equation:

n = m2/p2  (15)

D) What is the minimum number necessary for distinguishing three or more frequencies p1, p2, p3? 1) If the frequencies p1, p2, p3 are values between 0.1 and 0.9 we have to solve the individual equations for each pair and use the highest value of n thus determined:

n(1.2) = { δ [√(p1(1-p1)) + √(p2(1-p2))] / (p1 - p2) }², and similarly for the other pairs  (16)

Delta now represents the bilateral limits of the Gaussian distribution: 1.96, 2.58 and 3.29. 2) No table was prepared for the relatively rare cases of a comparison of three or more frequencies below 0.1; in such cases extremely high numbers would be required.

E) A process is given which serves to solve two problems of an informative nature: a) if a special type appears among n individuals with a frequency p(obs), what may be the corresponding ideal value of p(esp); or b) if we study samples of n individuals and expect a certain type with a frequency p(esp), what may be the extreme limits of p(obs) in individual families? 1) If we are dealing with values between 0.1 and 0.9 we may use table 3. To solve the first question we select the respective horizontal line for p(obs), determine which column corresponds to our value of n, and find the respective value of p(esp) by interpolating between columns. In order to solve the second problem we start with the respective column for p(esp) and find the horizontal line for the given value of n, either directly or by approximation and interpolation. 2) For frequencies smaller than 0.1 we have to use table 4 and transform the fractions p(esp) and p(obs) into numbers of the Poisson series by multiplication with n. In order to solve the first problem, we verify in which line the lower Poisson limit is equal to m(obs) and transform the corresponding value of m into the frequency p(esp) by dividing by n. The observed frequency may thus be a chance deviate of any value between 0.0... and the value given by dividing the value of m in the table by n. In the second case we first transform the expectation p(esp) into a value of m and find, in the horizontal line corresponding to m(esp), the extreme values of m, which must then be transformed, by dividing by n, into values of p(obs).

F) Partial and progressive tests may be recommended in all cases where there is a lack of material or where the loss of time is less important than the cost of large-scale experiments, since in many cases the minimum number necessary to guarantee the results within the limits of precision is rather large. One should not forget that the minimum number really represents at the same time a maximum number, necessary only if one takes into consideration essentially the unfavorable variations; smaller numbers may frequently already give satisfactory results. For instance, by definition, we know that a frequency of p means that we expect one individual in every total of 1/p. If there were no chance variations, this number 1/p would be sufficient, and if there were favorable variations a still smaller number might yield one individual of the desired type. Thus, trusting to luck, one may start the experiment with numbers smaller than the minimum calculated according to the formulas given above and increase the total until the desired result is obtained, which may well be before the "minimum number" is reached. Some concrete examples of this partial or progressive procedure are given from our genetic experiments with maize.
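As a quick numerical check of the Gaussian-approximation formulas (7)-(9) summarised above, the short sketch below computes the minimum number of individuals for problem B and verifies it against the exact binomial probability. The function names and the cross-check are additions made here for illustration; they are not part of the original paper.

```python
# Sketch: minimum number of individuals so that a type expected with frequency
# p appears in at least a = m + 1 individuals (Gaussian approximation, (7)-(9)).
import math

DELTA = {0.05: 1.64, 0.01: 2.33, 0.001: 3.09}   # unilateral limits of precision

def min_individuals(p: float, m: int, p_lim: float = 0.05) -> int:
    """b = delta*sqrt((1-p)/p), c = m/p, sqrt(n) = (b + sqrt(b^2 + 4c)) / 2."""
    delta = DELTA[p_lim]
    b = delta * math.sqrt((1.0 - p) / p)
    c = m / p
    n = ((b + math.sqrt(b * b + 4.0 * c)) / 2.0) ** 2
    if 0.25 <= p <= 0.75:       # correction n' = 1/p applies in this range
        n += 1.0 / p
    return math.ceil(n)

def prob_at_most(m: int, p: float, n: int) -> float:
    """Exact binomial probability of at most m successes in n trials."""
    return sum(math.comb(n, k) * p**k * (1.0 - p) ** (n - k) for k in range(m + 1))

n = min_individuals(p=0.25, m=0, p_lim=0.05)   # at least one individual of a 1-in-4 type
print(n, prob_at_most(0, 0.25, n))             # the probability should be <= 0.05
```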
Abstract:
Telecommunications and network technology is now the driving force that ensures the continued progress of world civilization. The design of new network infrastructures and the expansion of existing ones require improving the quality of service (QoS). Modeling the probabilistic and temporal characteristics of telecommunication systems is an integral part of modern algorithms for the administration of quality of service. At present, besides simulation models, analytical models in the form of queuing systems and queuing networks are widely used for the assessment of quality parameters. Because of the limited mathematical tools of these classes of models, the corresponding estimates of quality-of-service parameters are, by definition, inadequate, especially for models of telecommunication systems with packet transmission of multimedia real-time traffic.
Abstract:
Research carried out during a stay in Buenos Aires between September and October 2006. The construction of the Argentine national state in the 19th century implied the definition, by the ruling elites, of a people that would live up to the expectations placed on a young nation on its way towards civilization and progress. The object of the research being carried out within the doctoral programme in History of America at the Universitat de Barcelona is the analysis of the Afro-Argentine population of Buenos Aires in the last decades of the 19th century, a time when its presence and history were being erased from discourses and practices, promoting its "invisibilization". To carry out this research it is essential to draw on sources and documents that must be sought and found in the general and local archives and in the national and municipal libraries located in the city of Buenos Aires.
Abstract:
It has been recently found that a number of systems displaying crackling noise also show remarkable behavior regarding the temporal occurrence of successive events versus their size: a scaling law for the probability distributions of waiting times as a function of a minimum size is fulfilled, signaling the existence in those systems of self-similarity in time and size. This property is also present in some non-crackling systems. Here, the uncommon character of the scaling law is illustrated with simple marked renewal processes, built by definition with no correlations. Whereas processes with a finite mean waiting time do not fulfill a scaling law in general and tend towards a Poisson process in the limit of very large sizes, processes without a finite mean tend to another class of distributions, characterized by double power-law waiting-time densities. This is somewhat reminiscent of the generalized central limit theorem. A model with short-range correlations is not able to escape from the attraction of those limit distributions. A discussion of open problems in the modeling of these properties is provided.
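A minimal simulation can make the renewal-process construction concrete. The sketch below builds a marked renewal process with exponential waiting times and independent power-law sizes, and shows that the waiting times between events above a size threshold remain approximately exponential, as expected for the finite-mean case discussed in the abstract; all distributions and parameters are illustrative assumptions.

```python
# Sketch: marked renewal process (exponential waits, independent Pareto sizes)
# and waiting times between events whose size exceeds a threshold s_min.
import numpy as np

rng = np.random.default_rng(42)
N = 200_000
waits = rng.exponential(scale=1.0, size=N)     # i.i.d. waiting times, finite mean
sizes = rng.pareto(a=1.5, size=N) + 1.0        # i.i.d. event sizes (power-law marks)
times = np.cumsum(waits)                       # event occurrence times

for s_min in (1.0, 5.0, 25.0):
    idx = np.flatnonzero(sizes >= s_min)       # keep only events above the threshold
    w_thr = np.diff(times[idx])                # waiting times between kept events
    ratio = w_thr.std() / w_thr.mean()         # close to 1 for an exponential law
    print(f"s_min={s_min:5.1f}  mean wait={w_thr.mean():8.3f}  std/mean={ratio:.3f}")
```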