955 results for Mean-value solution
Abstract:
Value and reasons for action are often cited by rationalists and moral realists as providing a desire-independent foundation for normativity. Those maintaining instead that normativity is dependent upon motivation often deny that anything called "value" or "reasons" exists. According to the interest-relational theory, something has value relative to some perspective of desire just in case it satisfies those desires, and a consideration is a reason for some action just in case it indicates that something of value will be accomplished by that action. Value judgements therefore describe real properties of objects and actions, but have no normative significance independent of desires. It is argued that only the interest-relational theory can account for the practical significance of value and reasons for action. Against the Kantian hypothesis of prescriptive rational norms, I attack the alleged instrumental norm or hypothetical imperative, showing that the normative force for taking the means to our ends is explicable in terms of our desire for the end, and not as a command of reason. This analysis also provides a solution to the puzzle concerning the connection between value judgement and motivation. While it is possible to hold value judgements without motivation, the connection is more than accidental, because value judgements are usually, but not always, made from the perspective of desires that actually motivate the speaker. In the normal case judgement entails motivation. But often we conversationally borrow external perspectives of desire, and the resulting judgements do not entail motivation. This analysis drives a critique of a common practice as a misuse of normative language: the "absolutist" attempts to use and, as philosopher, analyze normative language in such a way as to justify the imposition of certain interests over others.
But these uses and analyses are incoherent: in denying relativity to particular desires, they conflict with the actual meaning of these utterances, which is always indexed to some particular set of desires.
Abstract:
The widespread impact of exotic fishes, especially Oreochromis niloticus and Lates niloticus, together with overfishing in the Victoria and Kyoga lake basins during the 1950s and 1960s caused endemic species such as the previously most important Oreochromis esculentus to become virtually extinct in the two lakes by the 1970s. Based on reports of the presence of this native species in some satellite lakes within the two lake basins, a set of satellite lakes in the Victoria basin (the Nabugabo lakes Kayanja and Kayugi) was sampled between 1997 and 2002 with the objective of assessing their value as conservation sites for O. esculentus. Other satellite lakes (Mburo and Kachera, also in the Victoria basin, and Lemwa, Kawi and Nabisojjo in the Kyoga basin) were sampled for comparison. Among the Nabugabo lakes, O. esculentus was more abundant in Lake Kayanja (20.1% of the total fish catch by weight) than in Lake Kayugi (1.4%). The largest fish examined (38.7 cm TL) was caught in Lake Kayugi (also the largest in all satellite lakes sampled), while the smallest (6.6 cm TL) was from Lake Kayanja. Fish from Lake Kayugi had a higher condition factor K (1.89±0.02) than those from Lake Kayanja (1.53±0.01), which was the second highest among the satellite lakes, after Lake Kawi (1.92±0.2). Diatoms, especially Aulacoseira, previously known to be the best food for O. esculentus in Lake Victoria, were the items most frequently encountered (93.2%) in fish stomachs from Lake Kayugi. In Lake Kayanja the dominant food item was the blue-green alga Planktolyngbya, while Microcystis was the most abundant diet item in fish from the other satellite lakes. There were more male than female fish (ratios 1:0.91 and 1:0.79 in lakes Kayugi and Kayanja, respectively), comparable to the situation in Lake Victoria before the species was depleted. The highest mean fecundity (771±218 eggs) was recorded in Lake Kayugi, compared to Lake Kayanja (399±143).
Based on the results from Lake Kayugi, where diatoms dominated the diet of O. esculentus and where the largest, most fecund and healthiest fish were found, this lake would be a most valuable site for the conservation of O. esculentus and the best source of fish for restocking and captive propagation. This lake is therefore recommended for protection from overexploitation and misuse.
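The condition factor K reported above is, in standard fisheries practice, usually Fulton's condition factor, K = 100·W/L³ with weight W in grams and total length L in centimetres. A minimal sketch; the weight and length values below are invented, not the study's measurements:

```python
# Fulton's condition factor: K = 100 * W / L^3 (W in g, total length L in cm).
# The example fish below is invented for illustration.
def fulton_k(weight_g, length_cm):
    return 100.0 * weight_g / length_cm ** 3

k = fulton_k(weight_g=150.0, length_cm=20.0)
print(round(k, 3))  # → 1.875, in the range of the K values reported above
```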
Abstract:
A significant part of the fatigue life of a mechanical component is spent in the crack propagation stage. Several mathematical models are currently available to describe crack growth behaviour; in terms of stress range amplitude they are classified into two categories, constant and variable. In general, these propagation models are formulated as initial value problems, and from this the evolution curve of the crack is obtained by applying a numerical method. This dissertation presents the application of the "Fast Bounds Crack" methodology for establishing upper and lower bound functions for the evolution of the crack size. The performance of this methodology was evaluated through the relative deviation and the computational time, in comparison with approximate numerical solutions obtained by the explicit 4th-order Runge-Kutta method (RK4). A maximum relative deviation of 5.92% was reached, and for the examples solved the computational time was about 130,000 times that of the RK4 method. An engineering application was also carried out to obtain an approximate numerical solution, taken as the arithmetic mean of the upper and lower bounds produced by the methodology, for the case in which the evolution law is not known. The maximum relative error found in this application was 2.08%, which confirms the efficiency of the "Fast Bounds Crack" methodology.
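The setup described above (a crack-growth law integrated as an initial value problem by explicit RK4, with an approximate solution taken as the mean of upper and lower bounds) can be sketched as follows. The Paris-law constants, stress range, and the crude frozen-rate bounds are all invented for illustration; they are not the dissertation's actual "Fast Bounds Crack" bound functions:

```python
import math

C, M = 1e-11, 3.0      # hypothetical Paris-law constants
DSIGMA = 100.0         # assumed constant-amplitude stress range

def dadN(a):
    """Paris law da/dN = C * (dK)^M with dK = DSIGMA * sqrt(pi * a)."""
    return C * (DSIGMA * math.sqrt(math.pi * a)) ** M

def rk4(a0, n_cycles, steps):
    """Explicit 4th-order Runge-Kutta over the cycle count N."""
    h, a = n_cycles / steps, a0
    for _ in range(steps):
        k1 = dadN(a)
        k2 = dadN(a + 0.5 * h * k1)
        k3 = dadN(a + 0.5 * h * k2)
        k4 = dadN(a + h * k3)
        a += (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return a

A0, N = 1e-3, 2e5
a_rk4 = rk4(A0, N, steps=500)

# For M != 2 this Paris law has a closed-form solution, usable as a check:
a_exact = (A0 ** -0.5 - 0.5 * C * (DSIGMA * math.sqrt(math.pi)) ** M * N) ** -2.0

# Crude illustrative bounds: the growth rate increases with a, so freezing the
# rate at the initial (final) crack size under- (over-) estimates the growth.
a_lower = A0 + N * dadN(A0)
a_upper = A0 + N * dadN(a_rk4)
a_mean = 0.5 * (a_lower + a_upper)  # mean-of-bounds approximation
```

For these invented parameters the mean of the two bounds lands within a few percent of the closed-form solution, which is the spirit of the engineering application described in the abstract.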
Abstract:
A systematic diagrammatic expansion for Gutzwiller wavefunctions (DE-GWFs), proposed very recently, is used for the description of the superconducting (SC) ground state in the two-dimensional square-lattice t-J model with hopping electron amplitudes t (and t') between nearest (and next-nearest) neighbors. As an illustration of the SC-state analysis, we provide a detailed comparison of the method's results with those of other approaches. Namely, (i) the truncated DE-GWF method reproduces the variational Monte Carlo (VMC) results and (ii) in the lowest (zeroth) order of the expansion the method can reproduce the analytical results of the standard Gutzwiller approximation (GA), as well as of the recently proposed 'grand-canonical Gutzwiller approximation' (called either GCGA or SGA). We obtain important features of the SC state. First, the SC gap at the Fermi surface resembles a d(x2-y2) wave only for optimally doped and overdoped systems, being diminished in the antinodal regions for the underdoped case, in qualitative agreement with experiment. Corrections to the gap structure are shown to arise from the longer range of the real-space pairing. Second, the nodal Fermi velocity is almost constant as a function of doping and agrees semi-quantitatively with experimental results. Third, we compare the
Abstract:
Most economic transactions nowadays depend on the effective exchange of information, in which digital resources play a huge role. New actors are coming into existence all the time, so organizations face difficulties in keeping their current customers and attracting new customer segments and markets. Companies are trying to find the key to their success, and creating superior customer value seems to be one solution. Digital technologies can be used to deliver value to customers in ways that extend customers' normal conscious experiences in the context of time and space. By creating customer value, companies can gain the increased loyalty of existing customers and better ways to serve new customers effectively. Based on these assumptions, the objective of this study was to design a framework to enable organizations to create customer value in digital business. The research was carried out as a literature review and an empirical study, which consisted of a web-based survey and semi-structured interviews. The data from the empirical study were analyzed as mixed research with qualitative and quantitative methods. These methods were used because the object of the study was to gain a deeper understanding of an existing phenomenon; accordingly, the study used statistical procedures while describing value creation as a phenomenon. The framework was designed first based on the literature and then updated based on the findings from the empirical study. As a result, relationship, understanding the customer, focusing on the core product or service, the product or service quality, incremental innovations, service range, corporate identity, and networks were chosen as the top elements of customer value creation. Measures for these elements were identified. With the measures, companies can manage the elements in value creation when dealing with present and future customers and also manage the operations of the company.
In conclusion, creating customer value requires understanding the customer and a lot of information sharing, which can be eased by digital resources. Understanding the customer helps to produce products and services that fulfill customers’ needs and desires. This could result in increased sales and make it easier to establish efficient processes.
Abstract:
The anchoita (Engraulis anchoita) is a pelagic species found in the Southwest Atlantic Ocean. An estimated 135,000 tonnes/year of this fish could be exploited along the southern coast of Brazil. However, the country's fishery resources remain unexploited, which makes this raw material a potential candidate for the manufacture of new fish-based products. With the support of governmental social programmes, the trend in Brazil is towards the development of alternative anchoita products capable of meeting the specific needs of each target consumer group. Within this scenario, a market-oriented study of new fish products is necessary in an attempt to understand the influential variables of the sector. To this end, this thesis aimed to develop anchoita-based products and to study the behaviour of the consumer market towards these new fish products. A total of six articles was produced. The first, "Potential for the introduction of breaded fish products into school meals based on individual determinants", aimed to identify the individual determinants of fish consumption among adolescents aged 12 to 17, with a view to introducing breaded fish products into school meals. The variables that best discriminated consumption frequency were "liking fish" and "parents' educational level". The results indicated a potential for the consumption of breaded fish products by adolescents, associated with a need for nutritional education. The second article, "Preparation of a burger from an anchovy (Engraulis anchoita) protein base", evaluated the effect of different solvent combinations for obtaining an anchoita protein base for the preparation of a fish burger.
Washing with phosphoric acid followed by two further water cycles gave the best results for obtaining the protein base, judged by the removal of nitrogen compounds and by sensory responses. In the third article, "Acceptance of breaded fish (Engraulis anchoita) products in school meals in the extreme south of Brazil", the objective was to evaluate the acceptance of breaded fish products among students (n = 830) of the public school system, aged between 5 and 18, in two cities of the state of Rio Grande do Sul, Brazil. The results indicated an inverse relationship between the acceptance of breaded fish products and the children's age. The fourth article examined "Reasons underlying the low fish consumption of the Brazilian consumer", investigating consumption behaviour in a population with low fish consumption (Brazil) by applying the Theory of Planned Behaviour (TPB). The results indicated that both intention and attitude were significant determinants of the frequency of eating fish, with attitude inversely correlated with fish consumption. Habit emerged as an important discriminating variable for fish consumption. The fifth article, "Structural equation modelling and word association as tools for a better understanding of low fish consumption", aimed to develop a model explaining the set of relationships among the constructs of fish consumption in a population with low fish consumption (Brazil), through the application of the TPB and the Food Choice Questionnaire. In addition, the cognitive perception of fish products (Engraulis anchoita) was evaluated in the same population. The results indicated a good fit for the proposed model and showed that the constructs "health" and "weight control" are good predictors of intention.
The word-association technique proved to be a useful method for analysing the perception of a new fish product, as well as helping to explain the results obtained with the structural equations. The sixth and final article, "Perception of healthiness of fish products in a population with high fish consumption: an eye-tracking investigation", explored the use of the eye-tracking method to study the perception of healthiness in different fish products. Two important points stood out as influencing the perception of healthiness: processed fish products and fried foods.
Abstract:
Statistical approaches to study extreme events require, by definition, long time series of data. In many scientific disciplines, these series are often subject to variations at different temporal scales that affect the frequency and intensity of their extremes. Therefore, the assumption of stationarity is violated and alternative methods to conventional stationary extreme value analysis (EVA) must be adopted. Using the example of environmental variables subject to climate change, in this study we introduce the transformed-stationary (TS) methodology for non-stationary EVA. This approach consists of (i) transforming a non-stationary time series into a stationary one, to which the stationary EVA theory can be applied, and (ii) reverse transforming the result into a non-stationary extreme value distribution. As a transformation, we propose and discuss a simple time-varying normalization of the signal and show that it enables a comprehensive formulation of non-stationary generalized extreme value (GEV) and generalized Pareto distribution (GPD) models with a constant shape parameter. A validation of the methodology is carried out on time series of significant wave height, residual water level, and river discharge, which show varying degrees of long-term and seasonal variability. The results from the proposed approach are comparable with the results from (a) a stationary EVA on quasi-stationary slices of non-stationary series and (b) the established method for non-stationary EVA. However, the proposed technique comes with advantages in both cases. For example, in contrast to (a), the proposed technique uses the whole time horizon of the series for the estimation of the extremes, allowing for a more accurate estimation of large return levels. Furthermore, with respect to (b), it decouples the detection of non-stationary patterns from the fitting of the extreme value distribution. 
As a result, the steps of the analysis are simplified and intermediate diagnostics are possible. In particular, the transformation can be carried out by means of simple statistical techniques such as low-pass filters based on the running mean and the standard deviation, and the fitting procedure is a stationary one with a few degrees of freedom and is easy to implement and control. An open-source MATLAB toolbox covering this methodology has been developed and is available at https://github.com/menta78/tsEva/ (Mentaschi et al., 2016).
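The two TS-EVA steps can be sketched on synthetic data: (i) standardize the series by a running mean and standard deviation (simple moving windows stand in for the low-pass filters), fit a stationary extreme value distribution to the standardized annual maxima (a moment-fit Gumbel stands in for the full GEV), and (ii) back-transform the return level into a time-varying one. The synthetic series, window length, and Gumbel stand-in are illustrative assumptions, not the tsEva toolbox's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic non-stationary daily series: slow trend + seasonal cycle + noise
t = np.arange(40 * 365)
trend = 0.01 * t / 365.0
season = 1.0 + 0.3 * np.sin(2.0 * np.pi * t / 365.0)
x = trend + season * rng.gumbel(0.0, 1.0, t.size)

def running_stats(y, win):
    """Centred running mean and std (simple low-pass filters)."""
    k = np.ones(win)
    cnt = np.convolve(np.ones_like(y), k, mode="same")  # edge-corrected counts
    mu = np.convolve(y, k, mode="same") / cnt
    var = np.convolve((y - mu) ** 2, k, mode="same") / cnt
    return mu, np.sqrt(var)

# (i) transform: standardize with multi-year running statistics
mu_t, sd_t = running_stats(x, win=5 * 365)
z = (x - mu_t) / sd_t

# Stationary EVA on the standardized annual maxima: Gumbel fit by moments
zmax = z.reshape(40, 365).max(axis=1)
beta = zmax.std(ddof=1) * np.sqrt(6.0) / np.pi
mu_g = zmax.mean() - 0.5772 * beta
z100 = mu_g - beta * np.log(-np.log(1.0 - 1.0 / 100.0))  # 100-yr level, z-space

# (ii) reverse transform: a time-varying 100-year return level
rl100 = mu_t + sd_t * z100
```

Because the shape of the fitted distribution lives in the standardized space, the back-transformed return level inherits the trend and seasonality of the running statistics, which is the decoupling the abstract describes.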
Abstract:
We present a general multistage stochastic mixed 0-1 problem where the uncertainty appears everywhere: in the objective function, the constraint matrix and the right-hand side. The uncertainty is represented by a scenario tree that can be either symmetric or nonsymmetric. The stochastic model is converted into a mixed 0-1 Deterministic Equivalent Model in compact representation. Due to the difficulty of the problem, the solution offered by the stochastic model has traditionally been obtained by optimizing the expected value (i.e., the mean) of the objective function over the scenarios, usually along a time horizon. This approach (so-called risk neutral) has the inconvenience of providing a solution that ignores the variance of the objective value across the scenarios and, hence, the occurrence of scenarios with an objective value below the expected one. Alternatively, we present several approaches for risk-averse management, namely: a scenario immunization strategy; the optimization of the well-known value-at-risk (VaR) and several variants of the conditional value-at-risk strategy; the optimization of the expected mean minus the weighted probability of a "bad" scenario occurring for the given solution provided by the model; the optimization of the objective function expected value subject to stochastic dominance constraints (SDC) for a set of profiles given by pairs of threshold objective values and either bounds on the probability of not reaching the thresholds or the expected shortfall over them; and the optimization of a mixture of the VaR and SDC strategies.
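As a minimal illustration of the risk measures involved, the sketch below evaluates the risk-neutral expectation, a VaR, and a conditional value-at-risk over a finite scenario set. The scenario objective values and probabilities are invented; the paper's approaches optimize these measures inside a mixed 0-1 program rather than merely evaluating them:

```python
import numpy as np

# Invented scenario set: objective (profit) value and probability per scenario
rng = np.random.default_rng(42)
profit = rng.normal(100.0, 20.0, 1000)
prob = np.full(profit.size, 1.0 / profit.size)

alpha = 0.95  # confidence level

# Risk-neutral criterion: expected objective value over the scenarios
expected = float(prob @ profit)

# VaR_alpha: the (1 - alpha)-quantile of the scenario profits
order = np.argsort(profit)
cum = np.cumsum(prob[order])
var_at_risk = float(profit[order][np.searchsorted(cum, 1.0 - alpha)])

# CVaR_alpha: expected profit over the "bad" scenarios at or below the VaR,
# i.e. the tail whose occurrence the risk-neutral solution ignores
tail = profit <= var_at_risk
cvar = float(prob[tail] @ profit[tail] / prob[tail].sum())
```

For a profit-maximization convention the three values order as CVaR ≤ VaR ≤ expected value, which is why the risk-averse strategies penalize or constrain the left tail.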
Abstract:
Tillage systems strongly affect nutrient transformations and plant availability. The objective of this study was to assess nitrate dynamics in the soil solution under different tillage systems with a plant cocktail used as green manure in fertilized melon (Cucumis melo) in the Brazilian semi-arid region. The treatments were arranged in four blocks in a split-plot design and included three types of cover crops and two tillage systems, conventional tillage (CT) and no-till (NT). The data showed no strong effect of plant cocktail composition on NO3-N dynamics in the soil. Mean NO3-N concentration ranged from 19.45 mg L-1 at 15 cm to 60.16 mg L-1 at 50 cm soil depth, indicating high leachability. No significant differences were observed between the NT and CT treatments at 15 cm depth. The high soil moisture content at ~30 cm depth concentrated NO3-N in all treatments, with means of 54.27 mg L-1 for NT and 54.62 mg L-1 for CT. The highest NO3-N concentration was observed at 50 cm depth under CT (60.16 mg L-1). The high concentration of NO3-N under CT may be attributed to increased decomposition of soil organic matter and of the crop residues incorporated into the soil.
Abstract:
Elemental analysis can become an important piece of evidence to assist the solution of a case. The work presented in this dissertation aims to evaluate the evidential value of the elemental composition of three particular matrices: ink, paper and glass. In the first part of this study, the analytical performance of LIBS and LA-ICP-MS methods was evaluated for paper, writing inks and printing inks. A total of 350 ink specimens were examined, including black and blue gel inks, ballpoint inks, inkjets and toners originating from several manufacturing sources and/or batches. The paper collection set consisted of over 200 paper specimens originating from 20 different paper sources produced by 10 different plants. Micro-homogeneity studies show smaller variation of elemental composition within a single source (i.e., sheet, pen or cartridge) than the observed variation between different sources (i.e., brands, types, batches). Significant and detectable differences in the elemental profile of the inks and paper were observed between samples originating from different sources (discrimination of 87-100% of samples, depending on the sample set under investigation and the method applied). These results support the use of elemental analysis, using LA-ICP-MS and LIBS, for the examination of documents and provide additional discrimination to the currently used techniques in document examination. In the second part of this study, a direct comparison between four analytical methods (µ-XRF, solution-ICP-MS, LA-ICP-MS and LIBS) was conducted for glass analyses using interlaboratory studies.
The data provided by 21 participants were used to assess the performance of the analytical methods in associating glass samples from the same source and differentiating different sources, as well as the use of different match criteria (confidence intervals (±6s, ±5s, ±4s, ±3s, ±2s), a modified confidence interval, t-tests (sequential univariate, p=0.05 and p=0.01), a t-test with Bonferroni correction (for multivariate comparisons), range overlap, and Hotelling's T2 test). Error rates (Type 1 and Type 2) are reported for the use of each of these match criteria and depend on the heterogeneity of the glass sources, the repeatability between analytical measurements, and the number of elements that were measured. The study provided recommendations for analytical performance-based parameters for µ-XRF and LA-ICP-MS, as well as the best performing match criteria for both analytical techniques, which can be applied now by forensic glass examiners.
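One of the interval-based match criteria listed above, the "±4s" check, can be sketched as follows: a questioned fragment matches a known source only if, for every element, its mean falls within the known-source mean plus or minus four standard deviations. All measurement values below are invented illustration data, not the interlaboratory study's measurements:

```python
import statistics

def match_4s(known_reps, questioned_reps):
    """±4s criterion: every element's questioned mean must lie within the
    known-source mean ± 4 sample standard deviations."""
    for elem, k in known_reps.items():
        q_mean = statistics.mean(questioned_reps[elem])
        k_mean = statistics.mean(k)
        s = statistics.stdev(k)
        if not (k_mean - 4 * s <= q_mean <= k_mean + 4 * s):
            return False  # one element outside the interval breaks the match
    return True

# Invented triplicate measurements (e.g. element signals from LA-ICP-MS)
known = {"Sr": [52.1, 51.8, 52.4], "Zr": [88.0, 87.5, 88.3]}
same_src = {"Sr": [52.0, 52.3, 51.9], "Zr": [87.9, 88.1, 87.7]}
diff_src = {"Sr": [60.5, 60.9, 60.2], "Zr": [70.1, 70.6, 69.8]}
```

Widening the interval (±5s, ±6s) trades Type 1 errors (false exclusions) for Type 2 errors (false inclusions), which is exactly the trade-off the reported error rates quantify.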
Abstract:
We consider the Cauchy problem for the Laplace equation in 3-dimensional doubly connected domains, that is, the reconstruction of a harmonic function from knowledge of the function values and normal derivative on the outer of two closed boundary surfaces. We employ the alternating iterative method, which is a regularizing procedure for the stable determination of the solution. In each iteration step, mixed boundary value problems are solved. The solution to each mixed problem is represented as a sum of two single-layer potentials, giving two unknown densities (one for each of the two boundary surfaces) to determine; matching the given boundary data gives a system of boundary integral equations to be solved for the densities. For the discretisation, Weinert's method [24] is employed, which generates a Galerkin-type procedure for the numerical solution by rewriting the boundary integrals over the unit sphere and expanding the densities in terms of spherical harmonics. Numerical results are included as well.
Abstract:
Binge eating occurs primarily on highly palatable food (PF), suggesting that the reward value of food has an important role in this behaviour. Bingeing also leads to reward dysfunction in rats and humans. The rewarding effect of binge eating may involve opioid mechanisms, as opioid antagonists reduce PF consumption in animals that binge eat and binge eating produces neuroadaptations of opioid receptors in rodents. We tested this hypothesis by using the conditioned place preference (CPP) paradigm. First, we established a sucrose CPP in male and female Long-Evans rats (n=8 for each group) using 1%, 5%, 15%, or 30% sucrose solution. Next, rats underwent the sucrose bingeing model, in which separate groups of rats (n=8 for each group) received 12hr and 24hr access to 10% sucrose solution and chow, 12hr access to 0.1% saccharin solution and chow, or 12hr access to chow only, every day for 28 days. Immediately following these sessions, rats were conditioned and tested in the CPP paradigm using a 15% sucrose solution. Finally, we examined whether the sucrose bingeing model altered morphine reward in female rats. Rats (n=8 for each group) received 12hr and 24hr access to 10% sucrose solution and chow every day for 28 days. Immediately following this access period, rats were conditioned to morphine (6 mL/kg) or saline solution in the CPP paradigm and tested for a CPP. In all experiments, rats drank more sucrose solution than water during conditioning sessions. Male rats did not develop a CPP to any concentration of sucrose solution, and females developed a CPP to 15% sucrose solution only. Following the sucrose bingeing protocol, sucrose CPP was attenuated in male rats that binged on sucrose and in all female rats. Sucrose bingeing in females did not affect the development of a CPP to morphine. These results suggest that sucrose consumption and sucrose CPP are measures of different psychological components of reward.
Furthermore, sucrose bingeing reduces the rewarding effect of sucrose, but not morphine, suggesting that opioid reward is still intact.
Abstract:
Studies indicate that gastric acidity is important for levothyroxine (LT4) absorption, with controversial results regarding the interaction between proton pump inhibitors (PPIs) and LT4. The objective of this study was to establish the effect of concomitant use of LT4 and PPIs on TSH levels in adult patients with primary hypothyroidism. A systematic review was carried out by searching Medline, Embase, Lilacs, Bireme, SciELO, Cochrane, the University of York databases, Access Pharmacy, Google Scholar, Dialnet and Opengray. The search was not restricted by language. The effect was assessed as the difference in mean TSH after LT4 intake alone and after concomitant intake with PPIs. A meta-analysis, subgroup analyses and a sensitivity analysis were performed using Review Manager 5.3. Five articles were selected for the qualitative analysis and three for the meta-analysis. The quality of the studies was good and the risk of bias low. The mean difference obtained was 0.21 mIU/L (95% CI: 0.02-0.40; p=0.03; I2: 0%). In the subgroup analysis of patients older than 55 years, the mean difference was 0.21 mIU/L (95% CI: 0.01-0.40; p=0.27; I2: 19%). In the sensitivity analysis, the study with the largest sample was excluded and the mean difference was 0.49 mIU/L (95% CI: -0.12 to 1.11; p=0.12; I2: 0%). The mean difference in TSH after concomitant intake is not considered clinically significant, as it poses no risk to the patient. Randomized clinical trials and evaluation of the effect on free T4 levels are needed.
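The pooling step behind such a result can be sketched with fixed-effect inverse-variance weighting, together with Cochran's Q and the I² heterogeneity statistic. The three (mean difference, standard error) pairs below are invented and do not reproduce the review's actual study data:

```python
import math

# Invented per-study TSH mean differences (mIU/L) and their standard errors
studies = [(0.18, 0.12), (0.25, 0.15), (0.22, 0.30)]

w = [1.0 / se ** 2 for _, se in studies]               # inverse-variance weights
md = sum(wi * d for (d, _), wi in zip(studies, w)) / sum(w)
se = math.sqrt(1.0 / sum(w))
ci = (md - 1.96 * se, md + 1.96 * se)                  # 95% confidence interval

# Cochran's Q and the I^2 heterogeneity statistic (floored at 0%)
Q = sum(wi * (d - md) ** 2 for (d, _), wi in zip(studies, w))
I2 = 100.0 * max(0.0, (Q - (len(studies) - 1)) / Q) if Q > 0 else 0.0
```

With consistent study effects, Q falls below its degrees of freedom and I² is floored at 0%, as reported in the main analysis above.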
Abstract:
The aim was to compare time and risk to biochemical recurrence (BR) after radical prostatectomy in two chronologically different groups of patients using the standard and the modified Gleason system (MGS). Cohort 1 comprised biopsies of 197 patients graded according to the standard Gleason system (SGS) in the period 1997-2004, and cohort 2 comprised 176 biopsies graded according to the modified system in the period 2005-2011. Time to BR was analyzed with the Kaplan-Meier product-limit method, and prediction of shorter time to recurrence with univariate and multivariate Cox proportional hazards models. Patients in cohort 2 reflected time-related changes: a striking increase in clinical stage T1c, systematic use of extended biopsies, and a lower percentage of total length of cancer in millimeters in all cores. The MGS used in cohort 2 yielded fewer biopsies with Gleason score ≤ 6 and more biopsies with the intermediate Gleason score 7. Time to BR in the Kaplan-Meier curves reached statistical significance using the MGS in cohort 2, but not the SGS in cohort 1. Only the MGS predicted shorter time to BR on univariate analysis, and on multivariate analysis it was an independent predictor. The results favor the view that the 2005 International Society of Urological Pathology modified system is a refinement of the Gleason grading and valuable for contemporary clinical practice.
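The Kaplan-Meier product-limit estimator used for the time-to-BR curves can be sketched in a few lines. The follow-up times below are invented toy data (months; 1 = biochemical recurrence observed, 0 = censored), not the study's cohorts:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier product-limit estimator.

    times: follow-up times; events: 1 = BR observed, 0 = censored.
    Returns a list of (time, S(time)) pairs at the distinct event times.
    """
    pairs = sorted(zip(times, events))
    n = len(pairs)
    at_risk, s, curve = n, 1.0, []
    i = 0
    while i < n:
        t = pairs[i][0]
        d = sum(1 for tt, e in pairs if tt == t and e == 1)  # events at t
        c = sum(1 for tt, e in pairs if tt == t)             # all leaving at t
        if d > 0:
            s *= 1.0 - d / at_risk   # survival drops only at event times
            curve.append((t, s))
        at_risk -= c
        i += c
    return curve

# Invented toy data for six patients
km = kaplan_meier([5, 8, 8, 12, 15, 20], [1, 0, 1, 1, 0, 1])
```

Censored patients leave the risk set without dropping the curve, which is how the estimator uses incomplete follow-up; group curves built this way are then compared for statistical significance, as in the cohorts above.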
Abstract:
This study aimed at evaluating whether human papillomavirus (HPV) groups and E6/E7 mRNA of HPV 16, 18, 31, 33, and 45 are prognostic of cervical intraepithelial neoplasia (CIN) 2 outcome in women with a cervical smear showing a low-grade squamous intraepithelial lesion (LSIL). This cohort study included women with biopsy-confirmed CIN 2 who were followed up for 12 months, with cervical smear and colposcopy performed every three months. Women with a negative or low-risk HPV status showed 100% CIN 2 regression. The CIN 2 regression rates at the 12-month follow-up were 69.4% for women with alpha-9 HPV versus 91.7% for other HPV species or HPV-negative status (P < 0.05). For women with HPV 16, the CIN 2 regression rate at the 12-month follow-up was 61.4% versus 89.5% for other HPV types or HPV-negative status (P < 0.05). The CIN 2 regression rate was 68.3% for women who tested positive for HPV E6/E7 mRNA versus 82.0% for the negative results, but this difference was not statistically significant. The expectant management for women with biopsy-confirmed CIN 2 and previous cytological tests showing LSIL exhibited a very high rate of spontaneous regression. HPV 16 is associated with a higher CIN 2 progression rate than other HPV infections. HPV E6/E7 mRNA is not a prognostic marker of the CIN 2 clinical outcome, although this analysis cannot be considered conclusive. Given the small sample size, this study could be considered a pilot for future larger studies on the role of predictive markers of CIN 2 evolution.