938 results for Non-negative rational numbers
Abstract:
The paper derives operational principles from environmental ethics for business organizations in order to achieve sustainability. Business affects the natural environment at different levels. Individual biological creatures are affected by business via hunting, fishing, agriculture, animal testing, etc. Natural ecosystems are affected by business via mining, regulating rivers, building, polluting the air, water and land, etc. The Earth as a whole is affected by business via exterminating species, contributing to climate change, etc. Business has a natural, non-reciprocal responsibility toward natural beings affected by its functioning. At the level of individual biological creatures, awareness-based ethics is adequate for business. It implies that business should assure natural life conditions and painless existence for animals and other sentient beings. From this point of view a business activity system can be considered acceptable only if its aggregate impact on animal welfare is non-negative. At the level of natural ecosystems, ecosystem ethics is relevant for business. It implies that business should use natural ecosystems in a proper way, that is, not damaging the health of the ecosystem during use. From this point of view a business activity system can be considered acceptable only if its aggregate impact on ecosystem health is non-negative. At the level of the Earth as a whole, Gaian ethics applies to business. Its implication is that business should not contribute to the violation of the systemic patterns and global mechanisms of the Earth. From this point of view a business activity system can be considered acceptable only if its aggregate impact on the living planet is non-negative. Satisfying the above principles can assure business sustainability in an ethically meaningful way. In this case business performs its duty: not to harm nature or allow others to come to harm.
Abstract:
This dissertation introduced substance abuse into the Dynamic Vulnerability Formulation (DVF) and the social competence model to determine whether the relationship between schizophrenic symptomatology and coping ability in the DVF also applied to dually diagnosed schizophrenic clients, or whether these variables needed to be modified. It compared the coping abilities of dually and singly diagnosed clients in day treatment and identified, examined, and assessed the relative influence of relevant mediating variables on two dimensions of coping ability of the dually diagnosed: coping skills and coping effort. These variables were: presence of negative and non-negative symptoms, duration of mental illness, type of substance used, and age of first substance use. A priori effect sizes based on previous empirical research were used to interpret the results related to the comparison of demographic, socioeconomic, and treatment characteristics between the singly and dually diagnosed study samples. The data suggested that the singly diagnosed group had higher coping skills than the dually diagnosed group, particularly in the areas of housing stability, work affect, and total social adjustment. The dually diagnosed group had lower scores on one aspect of coping effort: agency, or self-efficacy. The data supported the presence of an inverse relationship between symptom severity and coping skills, particularly for the dually diagnosed group. The data did not support the presence of an inverse relationship between symptom severity and coping effort, but did suggest a positive relationship between symptom severity and one measure of coping effort, agency, for the dually diagnosed group. Regression equations using each summary measure of coping skill, social adjustment and role functioning, yielded statistically significant F-ratios. Thirty-six percent of the variance in social adjustment and thirty-one percent of the variance in role functioning were explained by the relative influence of the relevant variables. Negative and non-negative symptoms were the only significant predictors of social adjustment; the non-negative symptoms variable was the sole significant predictor of role functioning. The results of this study provided partial support for the use of the Dynamic Vulnerability Formulation with the dually diagnosed.
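As background (a minimal sketch, not the author's actual analysis; the sample size, variables, and coefficients here are hypothetical), a regression of this kind, reporting an F-ratio, explained variance, and per-predictor significance, could be fit in Python with statsmodels:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 120  # hypothetical sample size

# Hypothetical predictors mirroring the study's variables: negative symptoms,
# non-negative symptoms, illness duration, age of first substance use.
X = rng.normal(size=(n, 4))
social_adjustment = 0.5 * X[:, 0] + 0.4 * X[:, 1] + rng.normal(size=n)

model = sm.OLS(social_adjustment, sm.add_constant(X)).fit()
print(model.fvalue, model.f_pvalue)  # F-ratio and its significance
print(model.rsquared)                # proportion of variance explained
print(model.pvalues)                 # per-predictor significance
```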
Abstract:
A hydrodynamic threshold between Darcian and non-Darcian flow conditions was found to occur in cubes of Key Largo Limestone from Florida, USA (one cube measuring 0.2 m on each side, the other 0.3 m) at an effective porosity of 33% and a hydraulic conductivity of 10 m/day. Below these values, flow was laminar and could be described as Darcian. Above these values, hydraulic conductivity increased greatly and flow was non-laminar. Reynolds numbers (Re) for these experiments ranged from
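For context, the standard relations that delimit the Darcian regime (general textbook forms, not specific values from this study; here d is a characteristic grain or pore diameter) are:

```latex
% Darcy's law, valid in the laminar regime:
q = -K \,\frac{\mathrm{d}h}{\mathrm{d}l},
% and the Reynolds number used to delimit that regime:
\mathrm{Re} = \frac{\rho\, v\, d}{\mu},
% where q is specific discharge, K hydraulic conductivity, h hydraulic head,
% \rho fluid density, v flow velocity, and \mu dynamic viscosity.
```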
Abstract:
We consider a class of initial data sets (Σ, h, K) for the Einstein constraint equations which we define to be generalized Brill (GB) data. This class of data is simply connected, U(1)²-invariant, maximal, and four-dimensional with two asymptotic ends. We study the properties of GB data and in particular the topology of Σ. The GB initial data sets have applications in geometric inequalities in general relativity. We construct a mass functional M for GB initial data sets and show: (i) the mass of any GB data set is greater than or equal to M, (ii) M is a non-negative functional for a broad subclass of GB data, (iii) M evaluates to the ADM mass of reduced t-φ^i symmetric data sets, (iv) the critical points of M are stationary U(1)²-invariant vacuum solutions to the Einstein equations. We then use this mass functional to prove two geometric inequalities: (1) a positive mass theorem for a subclass of GB initial data which includes Myers-Perry black holes, and (2) a class of local mass-angular momenta inequalities for U(1)²-invariant black holes. Finally, we construct a one-parameter family of initial data sets which can be seen as small deformations of the extreme Myers-Perry black hole that preserve the horizon geometry and angular momenta but have strictly greater energy.
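For reference, an initial data set (Σ, h, K) as above must satisfy the vacuum Einstein constraint equations, and maximality fixes the mean curvature (these are the standard forms, assuming vacuum):

```latex
% Vacuum Einstein constraints for an initial data set (\Sigma, h, K):
R_h + (\operatorname{tr}_h K)^2 - |K|_h^2 = 0,                       % Hamiltonian constraint
\operatorname{div}_h\!\left(K - (\operatorname{tr}_h K)\, h\right) = 0, % momentum constraint
\operatorname{tr}_h K = 0.                                           % maximality, as assumed for GB data
```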
Abstract:
Technological evolution and the growing use of computer graphics in various fields are drawing more and more people toward the world of 3D modelling. Modelling software, however, is often ill-suited to inexperienced users, mainly because of unintuitive navigation and modelling commands. From the point of view of human-computer interaction, this software faces a major obstacle: the mismatch between 2D input devices (such as the mouse) and the manipulation of a 3D scene. The project presented in this thesis is an addon for Blender that allows the Leap Motion device to be used as an aid to surface modelling in computer graphics. The goal of this thesis was to design and build a user-friendly interface between Leap and Blender, so that the sensors of the former can be used to facilitate and extend the navigation and modelling commands of the latter. The addon developed for Blender implements the concept of LAM (Leap Aided Modelling), thereby extending Blender's features for selecting, moving, and editing objects in the scene, manipulating the user view, and modelling Non-Uniform Rational B-Spline (NURBS) curves and surfaces. These extensions were created to make operations that would otherwise be driven exclusively by mouse and keyboard faster and simpler.
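As an illustration of how such an addon hooks into Blender (a minimal sketch under stated assumptions: `leap_reader` is a hypothetical wrapper around the Leap Motion SDK, and the actual addon's operators are not shown in the abstract), a Leap-driven view-orbit operator might look like:

```python
import bpy
# 'leap_reader' is a hypothetical module wrapping the Leap Motion SDK;
# the abstract does not specify how the addon accesses the sensor.
import leap_reader

class VIEW3D_OT_lam_orbit(bpy.types.Operator):
    """Orbit the 3D view from Leap Motion palm movement (sketch)."""
    bl_idname = "view3d.lam_orbit"
    bl_label = "LAM: Leap-Aided Orbit"

    def execute(self, context):
        dx, dy = leap_reader.palm_delta()  # hypothetical: palm displacement
        # Reuse Blender's built-in orbit operators, driven by Leap input.
        if abs(dx) > abs(dy):
            bpy.ops.view3d.view_orbit(type='ORBITRIGHT' if dx > 0 else 'ORBITLEFT')
        else:
            bpy.ops.view3d.view_orbit(type='ORBITUP' if dy > 0 else 'ORBITDOWN')
        return {'FINISHED'}

def register():
    bpy.utils.register_class(VIEW3D_OT_lam_orbit)
```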
Abstract:
In recent years, the scientific community's interest in Non-negative Matrix Factorization (NMF) has grown. This method makes it possible to transform a high-dimensional data set into a small collection of elements that carry their own semantics in the context of the analysis. In Bioinformatics, NMF is often used as the basis of data clustering methods that employ a statistical model to determine the most favourable number of classes. This model requires a large number of NMF runs with different input parameters, which represents an enormous computational workload. Most NMF implementations have become obsolete in the face of the constant growth of the data the scientific community seeks to analyse, either because computation times stretch to the point of becoming unfeasible, or because the size of the data exceeds the system's resources. This doctoral thesis therefore focuses on the optimization and parallelization of NMF, not only at a theoretical level but with the aim of providing the scientific community with a new tool for the analysis of biological data. NMF exposes a high degree of data-level parallelism of variable granularity, while the clustering methods mentioned above exhibit compute-level parallelism, since the various NMF instances being executed are independent. Hence, from a global point of view, a layered optimization model is proposed in which different high-performance technologies are employed...
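As a point of reference (the thesis's optimized implementation is not shown in the abstract), the classical Lee-Seung multiplicative updates for NMF, which factor a non-negative matrix V ≈ WH, can be sketched in a few lines of NumPy:

```python
import numpy as np

def nmf(V, k, iters=200, eps=1e-9, seed=0):
    """Factor V (m x n, non-negative) as W @ H with W: m x k, H: k x n,
    using Lee-Seung multiplicative updates for the Frobenius norm."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, k))
    H = rng.random((k, n))
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)  # update H
        W *= (V @ H.T) / (W @ H @ H.T + eps)  # update W
    return W, H

V = np.abs(np.random.default_rng(1).normal(size=(100, 40)))
W, H = nmf(V, k=5)
print(np.linalg.norm(V - W @ H))  # reconstruction error
```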
Abstract:
Continuous variables are among the major data types collected by survey organizations. They can be incomplete, so that data collectors need to fill in the missing values, or they can contain sensitive information that needs protection from re-identification. One approach to protecting continuous microdata is to aggregate the values into cells defined by combinations of features. In this thesis, I present novel methods of multiple imputation (MI) that can be applied to impute missing values and to synthesize confidential values for continuous and magnitude data.
The first method is for limiting the disclosure risk of the continuous microdata whose marginal sums are fixed. The motivation for developing such a method comes from the magnitude tables of non-negative integer values in economic surveys. I present approaches based on a mixture of Poisson distributions to describe the multivariate distribution so that the marginals of the synthetic data are guaranteed to sum to the original totals. At the same time, I present methods for assessing disclosure risks in releasing such synthetic magnitude microdata. The illustration on a survey of manufacturing establishments shows that the disclosure risks are low while the information loss is acceptable.
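A useful fact behind such fixed-margin synthesis (stated here as general background, not as the thesis's exact algorithm) is that independent Poisson counts conditioned on their total follow a multinomial distribution, so synthetic non-negative integers with a fixed sum can be drawn as follows:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical cell means and the fixed marginal total to preserve.
means = np.array([12.0, 3.5, 8.0, 1.5])
total = 25

# Independent Poisson counts conditioned on summing to `total`
# are multinomial with probabilities proportional to the means.
synthetic = rng.multinomial(total, means / means.sum())
print(synthetic, synthetic.sum())  # always sums to 25
```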
The second method is for releasing synthetic continuous microdata by a nonstandard MI method. Traditionally, MI fits a model on the confidential values and then generates multiple synthetic datasets from this model. Its disclosure risk tends to be high, especially when the original data contain extreme values. I present a nonstandard MI approach conditioned on protective intervals; its basic idea is to estimate the model parameters from these intervals rather than from the confidential values. The encouraging results of simple simulation studies suggest the potential of this new approach in limiting the posterior disclosure risk.
The third method is for imputing missing values in continuous and categorical variables. It extends a hierarchically coupled mixture model with local dependence. The new method separates the variables into non-focused (e.g., almost fully observed) and focused (e.g., frequently missing) ones. The sub-model structure for the focused variables is more complex than that for the non-focused ones; their cluster indicators are linked together by tensor factorization, and the focused continuous variables depend locally on non-focused values. The model's properties suggest that moving strongly associated non-focused variables to the focused side can improve estimation accuracy, which is examined in several simulation studies. The method is applied to data from the American Community Survey.
Abstract:
This work outlines the theoretical advantages of multivariate methods for biomechanical data, validates the proposed methods, and presents new clinical findings relating to knee osteoarthritis that were made possible by this approach. The new techniques were based on existing multivariate approaches, Partial Least Squares (PLS) and Non-negative Matrix Factorization (NMF), and validated using existing data sets. The techniques developed, PCA-PLS-LDA (Principal Component Analysis - Partial Least Squares - Linear Discriminant Analysis), PCA-PLS-MLR (Principal Component Analysis - Partial Least Squares - Multiple Linear Regression), and Waveform Similarity (based on NMF), address the challenging characteristics of biomechanical data: variability and correlation. As a result, these structure-seeking techniques revealed new clinical findings. The first relates to the relationship between pain, radiographic severity, and mechanics. Simultaneous analysis of pain and radiographic severity outcomes, a first in biomechanics, revealed that the knee adduction moment's relationship to radiographic features is mediated by pain in subjects with moderate osteoarthritis. The second finding quantifies the importance of neuromuscular patterns in brace effectiveness for patients with knee osteoarthritis. I found that brace effectiveness was more related to the patient's unbraced neuromuscular patterns than to mechanics, and that these neuromuscular patterns were more complicated than simply increased overall muscle activity, as previously thought.
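A minimal sketch of the PCA-PLS-LDA chain (using scikit-learn and synthetic data in place of the thesis's gait waveforms; the dimensions and labels here are hypothetical) might look like:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import PLSRegression
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
X = rng.normal(size=(80, 101))   # e.g., 80 gait waveforms, 101 time points
y = rng.integers(0, 2, size=80)  # e.g., osteoarthritis vs. control labels

scores = PCA(n_components=10).fit_transform(X)  # PCA: reduce, decorrelate
pls = PLSRegression(n_components=3).fit(scores, y)
latent = pls.transform(scores)                  # PLS: outcome-driven components
lda = LinearDiscriminantAnalysis().fit(latent, y)
print(lda.score(latent, y))                     # LDA: discriminate the groups
```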
Abstract:
Spectral unmixing (SU) is a technique to characterize mixed pixels in hyperspectral images measured by remote sensors. Most existing spectral unmixing algorithms are developed under the linear mixing model. Since the number of endmembers/materials present in each mixed pixel is normally small compared with the total number of endmembers (the dimension of the spectral library), the problem is sparse. This thesis introduces sparse hyperspectral unmixing methods for the linear mixing model in two different scenarios. In the first scenario, the library of spectral signatures is assumed to be known and the main problem is to find the minimum number of endmembers subject to a reasonably small approximation error. Mathematically, the corresponding problem is the $\ell_0$-norm problem, which is NP-hard. Our main contribution in the first part of the thesis is to find more accurate and reliable approximations of the $\ell_0$-norm term and to propose sparse unmixing methods via such approximations. The resulting methods show considerable improvements in reconstructing the fractional abundances of endmembers compared with state-of-the-art methods, such as lower reconstruction errors. In the second part of the thesis, the first scenario (i.e., the dictionary-aided semiblind unmixing scheme) is generalized to the blind unmixing scenario, in which the library of spectral signatures is also estimated. We apply the nonnegative matrix factorization (NMF) method to propose new unmixing methods, owing to its notable advantages such as enforcing the nonnegativity of the two factor matrices. Furthermore, we introduce new cost functions based on statistical and physical features of the spectral signatures of materials (SSoM) and of hyperspectral pixels, such as the collaborative property of hyperspectral pixels and the mathematical representation of the concentrated energy of SSoM in the first few subbands. Finally, we introduce sparse unmixing methods for the blind scenario and evaluate the efficiency of the proposed methods via simulations over synthetic and real hyperspectral data sets. The results show considerable improvements in estimating the spectral library of materials and their fractional abundances, including smaller values of the spectral angle distance (SAD) and the abundance angle distance (AAD).
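As background (a generic sketch of the common $\ell_1$ relaxation of the $\ell_0$ problem, not the thesis's proposed approximations), sparse unmixing with a known library A can be posed as non-negative $\ell_1$-regularized least squares and solved with a simple projected ISTA loop:

```python
import numpy as np

def sparse_unmix(A, y, lam=0.05, iters=2000):
    """Minimize 0.5*||A x - y||^2 + lam*||x||_1 subject to x >= 0
    (ISTA with non-negative soft-thresholding)."""
    L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - y)
        x = np.maximum(x - (grad + lam) / L, 0.0)  # shrink and project
    return x

rng = np.random.default_rng(0)
A = rng.random((50, 200))  # hypothetical spectral library: 200 signatures
x_true = np.zeros(200); x_true[[3, 40]] = [0.6, 0.4]
y = A @ x_true + 0.01 * rng.normal(size=50)
# Indices with non-negligible estimated abundance (approximate support).
print(np.nonzero(sparse_unmix(A, y) > 1e-3)[0])
```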
Abstract:
We show that the theory of involutive bases can be combined with discrete algebraic Morse theory. For a graded k[x_0, ..., x_n]-module M, this yields a free resolution G, which in general is not minimal. We see that G is isomorphic to the resolution induced by an involutive basis, and it is possible to identify involutive bases inside the resolution G. The shape of G is given by a concrete description. For the differential d_G, several rules are established for its computation, based on the fact that certain patterns appear at several positions in the computation of d_G. In particular, it is possible to compute the constants independently of the remainder of the differential. This allows us, starting from G, to determine the Betti numbers of M without computing a minimal free resolution; thus we obtain a new algorithm to compute Betti numbers. This algorithm has been implemented in CoCoALib by Mario Albert. In comparison to some other computer algebra systems, Betti numbers can thereby be computed faster in most of the examples we have considered. For Veronese subrings S(d), we have found a Pommaret basis, which yields new proofs of some known properties of these rings. Via the theoretical statements established for G, we can identify generators of modules in G where no constants appear; as a direct consequence, some non-vanishing Betti numbers of S(d) can be given. Finally, we give a proof of the Hyperplane Restriction Theorem with the help of Pommaret bases. This part is largely independent of the other parts of this work.
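For reference, the graded Betti numbers that the algorithm computes are the standard invariants:

```latex
% Graded Betti numbers of a module M over R = k[x_0, ..., x_n]:
\beta_{i,j}(M) = \dim_k \operatorname{Tor}_i^{R}(M, k)_j ,
% i.e., the ranks of the free modules in a minimal graded free resolution
% 0 \to \bigoplus_j R(-j)^{\beta_{p,j}} \to \cdots \to \bigoplus_j R(-j)^{\beta_{0,j}} \to M \to 0.
```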
Abstract:
In the study of time series, the usual stochastic processes assume that the marginal distributions are continuous and are, in general, not suitable for modelling count series, since their non-linear characteristics raise statistical problems, mainly in parameter estimation. Appropriate methodologies for analysing and modelling series with discrete marginal distributions were therefore investigated. In this context, Al-Osh and Alzaid (1987) and McKenzie (1988) introduced into the literature the class of non-negative integer-valued autoregressive models, the INAR processes. These models have been treated frequently in scientific articles over the past decades, since their importance in applications across several areas of knowledge has attracted great interest in their study. In this work, after a brief review of time series and the classical methods for their analysis, we present the first-order non-negative integer-valued autoregressive models, INAR(1), and their extension to order p, their properties, and some parameter estimation methods, namely the Yule-Walker method, Conditional Least Squares (CLS), Conditional Maximum Likelihood (CML), and Quasi Maximum Likelihood (QML). We also present an automatic order-selection criterion for INAR models, based on the corrected Akaike Information Criterion, AICC, one of the criteria used to determine the order of autoregressive (AR) models. Finally, an application of the INAR methodology is presented to real count data from the maritime transport and insurance sectors of Cape Verde.
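As a concrete illustration (a minimal sketch, not code from the thesis), the INAR(1) process X_t = α ∘ X_{t-1} + ε_t, where ∘ denotes binomial thinning, can be simulated and its α estimated by the Yule-Walker method, i.e., from the lag-1 sample autocorrelation:

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, lam, n = 0.6, 2.0, 5000

# INAR(1): X_t = alpha ∘ X_{t-1} + eps_t, with binomial thinning
# and Poisson(lam) innovations.
x = np.empty(n, dtype=int)
x[0] = rng.poisson(lam / (1 - alpha))  # start near the stationary mean
for t in range(1, n):
    x[t] = rng.binomial(x[t - 1], alpha) + rng.poisson(lam)

# Yule-Walker estimate of alpha: the lag-1 sample autocorrelation.
xc = x - x.mean()
alpha_hat = (xc[1:] @ xc[:-1]) / (xc @ xc)
print(alpha_hat)  # should be close to 0.6
```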
Abstract:
This analysis paper presents previously unknown properties of some special cases of the Wright function whose consideration is necessitated by our work on probability theory and the theory of stochastic processes. Specifically, we establish new asymptotic properties of the particular Wright function
$${}_1\Psi_1(\rho, k; \rho, 0; x) = \sum_{n=0}^{\infty} \frac{\Gamma(k + \rho n)}{\Gamma(\rho n)} \, \frac{x^n}{n!} \qquad (|x| < \infty)$$
when the parameter $\rho \in (-1, 0) \cup (0, \infty)$ and the argument $x$ is real. In the probability theory applications, which are focused on studies of the Poisson-Tweedie mixtures, the parameter $k$ is a non-negative integer. Several representations involving well-known special functions are given for certain particular values of $\rho$. The asymptotics of ${}_1\Psi_1(\rho, k; \rho, 0; x)$ are obtained under numerous assumptions on the behavior of the arguments $k$ and $x$ when the parameter $\rho$ is both positive and negative. We also provide some integral representations and structural properties involving the 'reduced' Wright function ${}_0\Psi_1(--; \rho, 0; x)$ with $\rho \in (-1, 0) \cup (0, \infty)$, which might be useful for the derivation of new properties of members of the power-variance family of distributions. Some of these imply a reflection principle that connects the functions ${}_0\Psi_1(--; \pm\rho, 0; \cdot)$ and certain Bessel functions. Several asymptotic relationships for both particular cases of this function are also given. A few of these follow under additional constraints from probability theory results which, although previously available, were unknown to analysts.
Abstract:
We describe an integration of the SVC decision procedure with the HOL theorem prover. This integration was achieved using the PROSPER toolkit. The SVC decision procedure operates on rational numbers, an axiomatic theory for which was provided in HOL. The decision procedure also returns counterexamples and a framework has been devised for handling counterexamples in a HOL setting.
Abstract:
This research investigated the possibility of developing the teaching and learning of the division of rational numbers through tasks guided by the measurement interpretation. Didactic Engineering was adopted as the methodology, together with a didactic sequence designed for work with high-school students. Twelve students from a state school in the city of Porto Barreiro, Paraná, took part in the training sessions. The results of applying the didactic engineering suggest the importance of using tasks guided by the measurement interpretation, since they strengthened the students' understanding of the concept of division of fractional rational numbers and helped them develop their comprehension of other questions associated with the concept of rational numbers, such as order, equivalence, and density.
Abstract:
In this paper we deal with the problem of obtaining the set of k-additive measures dominating a fuzzy measure. This problem extends the problem of deriving the set of probabilities dominating a fuzzy measure, an important problem arising in Decision Making and Game Theory. The solution proposed in the paper follows the line developed by Chateauneuf and Jaffray for dominating probabilities and continued by Miranda et al. for dominating k-additive belief functions. Here, we address the general case by transforming the problem into a similar one in which the involved set functions have non-negative Möbius transforms; this simplifies the problem and allows a result similar to the one developed for belief functions. Although the set obtained is very large, we show that the conditions cannot be sharpened. On the other hand, we also show that it is possible to define a more restrictive subset, providing a more natural extension of the result for probabilities, from which any k-additive dominating measure can be derived.
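For reference (standard definitions, not restated in the abstract), the Möbius transform of a fuzzy measure μ and the dominance relation used above are:

```latex
% Möbius transform of a fuzzy measure \mu on a finite set N:
m(A) = \sum_{B \subseteq A} (-1)^{|A \setminus B|} \, \mu(B), \qquad A \subseteq N.
% \mu is k-additive when m(A) = 0 for all A with |A| > k;
% \mu' dominates \mu when \mu'(A) \ge \mu(A) for every A \subseteq N.
```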