949 results for Non-Negative Operators
Abstract:
This work analyzes indirect taxation in electronic commerce. The analysis is based on a study of Community legislation (the European Directives) and Italian law, also setting out the differences from the Brazilian legislative framework (software and books). After presenting the contributions of international institutions (conferences and/or proposals of the European Union) to the typological and fiscal classification of electronic commerce, the general profile of the permanent-establishment concept is analyzed with regard to VAT and electronic commerce, distinguishing between electronic transactions that can be regarded as supplies of goods and those that can be regarded as supplies of services, according to whether the traded good is material or dematerialized. The principle of territoriality in the supply of services is also examined through an analysis of the ordinary and special regimes applicable to non-EU operators.
Abstract:
This thesis deals with the development and improvement of linear-scaling algorithms for electronic-structure-based molecular dynamics. Molecular dynamics is a method for the computer simulation of the complex interplay between atoms and molecules at finite temperature. A decisive advantage of this method is its high accuracy and predictive power. However, the computational cost, which in principle scales cubically with the number of atoms, prevents its application to large systems and long time scales. Starting from a new formalism based on the grand-canonical potential and a factorization of the density matrix, the diagonalization of the corresponding Hamiltonian matrix is avoided. The formalism exploits the fact that, owing to localization, the Hamiltonian and density matrices are sparse. This reduces the computational cost so that it scales linearly with system size. To demonstrate its efficiency, the resulting algorithm is applied to a system of liquid methane exposed to extreme pressure (about 100 GPa) and extreme temperature (2000 - 8000 K). In the simulation, methane dissociates at temperatures above 4000 K. The formation of sp²-bonded polymeric carbon is observed. The simulations give no indication of diamond formation and therefore have implications for existing planetary models of Neptune and Uranus. Since avoiding the diagonalization of the Hamiltonian matrix entails the inversion of matrices, the problem of computing an (inverse) p-th root of a given matrix is also addressed. This results in a new formula for symmetric positive definite matrices. It generalizes the Newton-Schulz iteration, Altman's formula for bounded non-singular operators, and Newton's method for computing zeros of functions. It is proved that the order of convergence is always at least quadratic and that adaptive tuning of a parameter q leads to better results in all cases.
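For context on the final point of this abstract, the following is a minimal sketch of the classical Newton-Schulz iteration for the matrix inverse, i.e. the p = 1 special case of the inverse p-th root problem. The thesis's generalized formula with the adaptive parameter q is not reproduced; the scaling used for the initial guess is a conventional choice assumed here.

# Sketch: classical Newton-Schulz iteration for A^{-1}, using only
# matrix-matrix multiplications (no diagonalization or factorization).
import numpy as np

def newton_schulz_inverse(A, tol=1e-12, max_iter=100):
    """Approximate A^{-1} for a symmetric positive definite matrix A."""
    n = A.shape[0]
    # Conventional scaling of the initial guess that ensures ||I - X0 A|| < 1.
    X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
    I = np.eye(n)
    for _ in range(max_iter):
        R = I - A @ X                 # residual; quadratic convergence drives it to 0
        if np.linalg.norm(R) < tol:
            break
        X = X @ (I + R)               # X_{k+1} = X_k (2I - A X_k)
    return X

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    B = rng.standard_normal((50, 50))
    A = B @ B.T + 50 * np.eye(50)     # well-conditioned SPD test matrix
    X = newton_schulz_inverse(A)
    print(np.linalg.norm(A @ X - np.eye(50)))  # tiny, below the 1e-12 tolerance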
Abstract:
The growing use of high-throughput analysis systems for studying the physiological and metabolic state of the body has shown that proper nutrition and good physical fitness are key factors for health. The increase in the average age of the population highlights the importance of strategies for countering age-related diseases. A healthy diet is the first means of prevention for many diseases, so understanding how food affects the human body is of fundamental importance. In this thesis we addressed the characterization of Dual-energy X-ray Absorptiometry (DXA) radiographic imaging systems. After establishing a suitable methodology for processing DXA data on a group of healthy, non-obese subjects, PCA revealed several properties emerging from the interpretation of the principal components in terms of the body-composition variables returned by DXA. The first components can be associated with macroscopic descriptors of body composition (such as BMI and WHR). These components are surprisingly stable as the subjects' age, sex and nationality vary. Metabolic data, obtained by Magnetic Resonance Spectroscopy (MRS) on urine samples, are available for about one thousand elderly subjects (from five European countries) aged between 65 and 79 years and free of severe pathologies. Body-composition data are also available for these subjects. The Non-negative Matrix Factorization (NMF) algorithm was used to express the MRS spectra as combinations of basis factors interpretable as individual metabolites. The factors found are stable, so the metabolic spectra of the subjects are composed of the same pattern of metabolites regardless of nationality. Through a single-blind analysis, high correlations were found between the body-composition variables and the subjects' metabolic state. This suggests the possibility of deriving a subject's body composition from their metabolic state.
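As a hedged illustration of the NMF step described above, the following factors a subject-by-spectrum matrix into non-negative basis profiles; the data are synthetic stand-ins for the urine MRS spectra, and the number of components is an assumption.

# Sketch: non-negative matrix factorization of synthetic "spectra".
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
n_subjects, n_bins, n_metabolites = 1000, 500, 8

# Synthetic non-negative data: each spectrum is a non-negative mixture of a
# few non-negative "metabolite" profiles plus a small positive noise floor.
true_basis = rng.gamma(2.0, 1.0, size=(n_metabolites, n_bins))
weights = rng.gamma(1.5, 1.0, size=(n_subjects, n_metabolites))
spectra = weights @ true_basis + rng.uniform(0, 0.05, size=(n_subjects, n_bins))

model = NMF(n_components=n_metabolites, init="nndsvda", max_iter=500, random_state=0)
W = model.fit_transform(spectra)   # per-subject loadings on each factor
H = model.components_              # basis factors (candidate metabolite profiles)

print(W.shape, H.shape)            # (1000, 8), (8, 500)
print("reconstruction error:", model.reconstruction_err_)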
Abstract:
INTRODUCTION: Ultra-high-field whole-body systems (7.0 T) have a high potential for future human in vivo magnetic resonance imaging (MRI). In musculoskeletal MRI, biochemical imaging of articular cartilage may benefit, in particular. Delayed gadolinium-enhanced MRI of cartilage (dGEMRIC) and T2 mapping have shown potential at 3.0 T. Whereas dGEMRIC allows the determination of the glycosaminoglycan content of articular cartilage, T2 mapping is a promising tool for the evaluation of water and collagen content. In addition, the evaluation of zonal variation, based on tissue anisotropy, provides an indicator of the nature of cartilage, i.e., hyaline or hyaline-like articular cartilage. Thus, the aim of our study was to show the feasibility of in vivo dGEMRIC, and T2 and T2* relaxation measurements, at 7.0 T MRI, and to evaluate the potential of T2 and T2* measurements in an initial patient study after matrix-associated autologous chondrocyte transplantation (MACT) in the knee. MATERIALS AND METHODS: MRI was performed on a whole-body 7.0 T MR scanner using a dedicated circular polarization knee coil. The protocol consisted of an inversion recovery sequence for dGEMRIC, a multiecho spin-echo sequence for standard T2 mapping, a gradient-echo sequence for T2* mapping, and a morphologic PD SPACE sequence. Twelve healthy volunteers (mean age, 26.7 +/- 3.4 years) and 4 patients (mean age, 38.0 +/- 14.0 years) were enrolled 29.5 +/- 15.1 months after MACT. For dGEMRIC, 5 healthy volunteers (mean age, 32.4 +/- 11.2 years) were included. T1 maps were calculated using a nonlinear, 2-parameter, least squares fit analysis. Using a region-of-interest analysis, mean cartilage relaxation time was determined as T1 (0) for precontrast measurements and T1 (Gd) for postcontrast gadopentetate dimeglumine [Gd-DTPA(2-)] measurements. T2 and T2* maps were obtained using a pixelwise, monoexponential, non-negative least squares fit analysis; region-of-interest analysis was carried out for deep and superficial cartilage aspects. Statistical evaluation was performed by analyses of variance. RESULTS: Mean T1 (dGEMRIC) values for healthy volunteers showed slightly different results for femoral [T1 (0): 1259 +/- 277 ms; T1 (Gd): 683 +/- 141 ms] compared with tibial cartilage [T1 (0): 1093 +/- 281 ms; T1 (Gd): 769 +/- 150 ms]. Global mean T2 relaxation for healthy volunteers showed comparable results for femoral (T2: 56.3 +/- 15.2 ms; T2*: 19.7 +/- 6.4 ms) and patellar (T2: 54.6 +/- 13.0 ms; T2*: 19.6 +/- 5.2 ms) cartilage, but lower values for tibial cartilage (T2: 43.6 +/- 8.5 ms; T2*: 16.6 +/- 5.6 ms). All healthy cartilage sites showed a significant increase from deep to superficial cartilage (P < 0.001). Within healthy cartilage sites in MACT patients, adequate values could be found for T2 (56.6 +/- 13.2 ms) and T2* (18.6 +/- 5.3 ms), which also showed a significant stratification. Within cartilage repair tissue, global mean values showed no difference, with 55.9 +/- 4.9 ms for T2 and 16.2 +/- 6.3 ms for T2*. However, zonal assessment showed only a slight and not significant increase from deep to superficial cartilage (T2: P = 0.174; T2*: P = 0.150). CONCLUSION: In vivo T1 dGEMRIC assessment in healthy cartilage, and T2 and T2* mapping in healthy and reparative articular cartilage, seem to be possible at 7.0 T MRI. For T2 and T2*, zonal variation of articular cartilage could also be evaluated at 7.0 T. This zonal assessment of deep and superficial cartilage aspects shows promising results for the differentiation of healthy and affected articular cartilage. In future studies, optimized protocol selection and sophisticated coil technology, together with increased signal at ultra-high-field MRI, may lead to advanced biochemical cartilage imaging.
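A hedged sketch of the pixelwise mono-exponential fit used for T2/T2* mapping, S(TE) = S0 * exp(-TE/T2). Echo times, image sizes and the bounded (non-negative) fitting choice are illustrative assumptions, not the study's actual protocol or implementation.

# Sketch: pixelwise mono-exponential T2 fit from a multi-echo series.
import numpy as np
from scipy.optimize import curve_fit

def mono_exp(te, s0, t2):
    return s0 * np.exp(-te / t2)

def fit_t2_map(echoes, te):
    """echoes: (n_echoes, H, W) magnitude images; te: echo times in ms."""
    n, h, w = echoes.shape
    t2_map = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            signal = echoes[:, i, j]
            if signal[0] <= 0:
                continue
            try:
                # Bounds keep S0 and T2 non-negative, mimicking a
                # non-negative least-squares style constraint.
                popt, _ = curve_fit(mono_exp, te, signal,
                                    p0=(signal[0], 50.0),
                                    bounds=([0.0, 0.0], [np.inf, np.inf]))
                t2_map[i, j] = popt[1]
            except RuntimeError:
                pass  # leave unconverged pixels at 0
    return t2_map

if __name__ == "__main__":
    te = np.array([10., 20., 30., 40., 50., 60.])        # ms, assumed echo times
    rng = np.random.default_rng(0)
    true_t2 = rng.uniform(40, 60, size=(8, 8))            # ms
    echoes = 1000 * np.exp(-te[:, None, None] / true_t2) \
             + rng.normal(0, 5, size=(6, 8, 8))
    t2 = fit_t2_map(np.clip(echoes, 0, None), te)
    print(np.round(t2[:2, :2], 1))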
Abstract:
OBJECTIVE: The aim of our study was to correlate global T2 values of microfracture repair tissue (RT) with clinical outcome in the knee joint. METHODS: We assessed 24 patients treated with microfracture in the knee joint. Magnetic resonance (MR) examinations were performed on a 3 T MR unit, and T2 relaxation times were obtained with a multi-echo spin-echo technique. T2 maps were obtained using a pixelwise, mono-exponential non-negative least squares fit analysis. Slices covering the cartilage RT were selected and region-of-interest analysis was done. An individual T2 index was calculated from the global mean T2 of the RT and the global mean T2 of normal, hyaline cartilage. The Lysholm score and the International Knee Documentation Committee (IKDC) knee evaluation forms were used for the assessment of clinical outcome. Bivariate correlation analysis and a paired, two-tailed t test were used for statistics. RESULTS: Global T2 values of the RT [mean 49.8 ms, standard deviation (SD) 7.5] differed significantly (P<0.001) from global T2 values of normal, hyaline cartilage (mean 58.5 ms, SD 7.0). The T2 index ranged from 61.3 to 101.5. We found the T2 index to correlate with the outcome of the Lysholm score (r(s)=0.641, P<0.001) and the IKDC subjective knee evaluation form (r(s)=0.549, P=0.005), whereas there was no correlation with the IKDC knee form (r(s)=-0.284, P=0.179). CONCLUSION: These findings indicate that T2 mapping is sensitive for assessing RT function and provides additional information to morphologic MRI in the monitoring of microfracture.
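A hedged sketch of how such a T2 index and its correlation with a clinical score could be computed; taking the index as 100 x (mean T2 of repair tissue) / (mean T2 of normal cartilage) is an assumption consistent with the reported 61.3-101.5 range, and the data below are synthetic.

# Sketch: T2 index and Spearman correlation with a clinical score.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_patients = 24
t2_repair = rng.normal(49.8, 7.5, n_patients)   # global mean T2 of repair tissue (ms)
t2_normal = rng.normal(58.5, 7.0, n_patients)   # global mean T2 of hyaline cartilage (ms)

t2_index = 100.0 * t2_repair / t2_normal

# Synthetic clinical scores loosely coupled to the index, for illustration only.
lysholm = 40 + 0.5 * t2_index + rng.normal(0, 5, n_patients)

rho, p = spearmanr(t2_index, lysholm)
print(f"Spearman r_s = {rho:.3f}, P = {p:.3f}")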
Abstract:
In a partially ordered semigroup with the duality (or polarity) transform, it is possible to define a generalisation of continued fractions. General sufficient conditions for convergence of continued fractions are provided. Two particular applications concern the cases of convex sets with Minkowski addition and the polarity transform, and the family of non-negative convex functions with the Legendre–Fenchel and Artstein-Avidan–Milman transforms.
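As a hedged reminder of two classical ingredients being generalized here (the paper's semigroup-valued construction is not reproduced), a real continued fraction built from reciprocals and the Legendre–Fenchel transform used as the duality on convex functions read:

% Classical continued fraction and Legendre-Fenchel transform; the paper's
% generalisation replaces the reciprocal $x \mapsto 1/x$ by the duality
% (polarity) transform of the partially ordered semigroup.
\[
  a_0 + \cfrac{1}{a_1 + \cfrac{1}{a_2 + \cfrac{1}{a_3 + \cdots}}},
  \qquad
  f^{*}(y) \;=\; \sup_{x}\bigl(\langle x, y\rangle - f(x)\bigr).
\]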
Abstract:
Objective. To measure the demand for primary care and its associated factors by building and estimating a demand model of primary care in urban settings. Data source. Secondary data from the 2005 California Health Interview Survey (CHIS 2005), a population-based random-digit-dial telephone survey conducted by the UCLA Center for Health Policy Research in collaboration with the California Department of Health Services and the Public Health Institute between July 2005 and April 2006. Study design. A literature review was done to specify the demand model by identifying relevant predictors and indicators. CHIS 2005 data were used for demand estimation. Analytical methods. Probit regression was used to estimate the use/non-use equation, and negative binomial regression was applied to the utilization equation with the non-negative integer dependent variable. Results. The model included two equations, in which the use/non-use equation explained the probability of making a doctor visit in the past twelve months, and the utilization equation estimated the demand for primary care conditional on at least one visit. Among the independent variables, wage rate and income did not affect the demand for primary care, whereas age had a negative effect on demand. People with college and graduate educational levels were associated with 1.03 (p < 0.05) and 1.58 (p < 0.01) more visits, respectively, compared with those with no formal education. Insurance was significantly and positively related to the demand for primary care (p < 0.01). Need-for-care variables exhibited positive effects on demand (p < 0.01). Existence of a chronic disease was associated with 0.63 more visits, disability status was associated with 1.05 more visits, and people with poor health status had 4.24 more visits than those with excellent health status. Conclusions. The average probability of visiting a doctor in the past twelve months was 85% and the average number of visits was 3.45. The study emphasized the importance of need variables in explaining healthcare utilization, as well as the impact of insurance, employment and education on demand. The two-equation model of decision-making, estimated with probit and negative binomial regression methods, was a useful approach to demand estimation for primary care in urban settings.
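A hedged sketch of such a two-part model, with a probit use/non-use equation and a negative binomial equation for visit counts among users; the data and variable names below are synthetic stand-ins, not CHIS 2005 variables or results.

# Sketch: two-part demand model (probit + negative binomial) on synthetic data.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "age": rng.uniform(18, 85, n),
    "insured": rng.integers(0, 2, n),
    "chronic": rng.integers(0, 2, n),
    "college": rng.integers(0, 2, n),
})
# Illustrative data-generating process.
p_use = 1 / (1 + np.exp(-(-0.5 + 1.0 * df.insured + 0.8 * df.chronic)))
df["any_visit"] = rng.binomial(1, p_use)
mu = np.exp(0.5 + 0.4 * df.chronic + 0.2 * df.college - 0.005 * df.age)
df["visits"] = np.where(df.any_visit == 1, 1 + rng.poisson(mu), 0)

X = sm.add_constant(df[["age", "insured", "chronic", "college"]])

# Part 1: probability of any doctor visit in the past twelve months.
probit = sm.Probit(df["any_visit"], X).fit(disp=0)

# Part 2: number of visits, conditional on at least one visit.
users = df["any_visit"] == 1
negbin = sm.NegativeBinomial(df.loc[users, "visits"], X.loc[users]).fit(disp=0)

print(probit.params)
print(negbin.params)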
Abstract:
We consider non-negative solutions of a chemotaxis system with a non-constant chemotaxis sensitivity function X. This system appears as a limit case of a model for morphogenesis proposed by Bollenbach et al. (Phys. Rev. E. 75, 2007). Under suitable boundary conditions, modeling the presence of a morphogen source at x=0, we prove the existence of a global and bounded weak solution using an approximation by problems where diffusion is introduced in the ordinary differential equation. Moreover, we prove the convergence of the solution to the unique steady state provided that ? is small and ? is large enough. Numerical simulations both illustrate these results and give rise to further conjectures on the solution behavior that go beyond the rigorously proved statements.
Abstract:
This paper describes a fully automatic filter that simultaneously enhances lung vessels and airways. The approach consists of Frangi-based multiscale vessel enhancement filtering designed specifically for lung vessel and airway detection, where arteries and veins have high contrast with respect to the lung parenchyma and airway walls are hollow tubular structures that produce a non-negative response under the classical Frangi filter. The features extracted from the Hessian matrix are used to detect the centerlines and approximate walls of airways, and the filter response in those areas is decreased by applying a penalty function to the vesselness measure. We validate the method on 20 CT scans with different pathological states within the VESSEL12 challenge framework. Results indicate that our approach performs well, decreasing the number of false positives on airway walls.
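A hedged sketch of the multiscale Frangi vesselness filtering that such an approach builds on, using scikit-image's frangi filter on a synthetic slice; the airway-wall detection and the penalty function are not reproduced, and all parameters are illustrative assumptions.

# Sketch: multiscale Frangi vesselness on a synthetic 2D slice.
import numpy as np
from skimage.filters import frangi

rng = np.random.default_rng(0)
ct_slice = rng.normal(size=(256, 256))          # stand-in for a lung CT slice

# Draw a bright synthetic "vessel" so the filter has something to respond to.
ct_slice[120:123, :] += 5.0

vesselness = frangi(
    ct_slice,
    sigmas=range(1, 6),        # multiscale analysis of the Hessian
    black_ridges=False,        # vessels are bright against dark parenchyma
)

print(vesselness.shape, float(vesselness.max()))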
Abstract:
In this paper we study non-negative radially symmetric solutions of a parabolic-elliptic Keller-Segel system. The system describes the chemotactic movement of cells under the additional circumstance that an external application of a chemoattractant at a distinguished point is introduced.
Abstract:
Using fixed-point arithmetic is one of the most common design choices for systems where area, power or throughput are heavily constrained. In order to produce implementations where the cost is minimized without negatively impacting the accuracy of the results, a careful assignment of word-lengths is required. The problem of finding the optimal combination of fixed-point word-lengths for a given system is a combinatorial NP-hard problem to which developers devote between 25 and 50% of the design-cycle time. Reconfigurable hardware platforms such as FPGAs also benefit from the advantages of fixed-point arithmetic, as it compensates for the slower clock frequencies and less efficient area utilization of the hardware platform with respect to ASICs. As FPGAs become commonly used for scientific computation, designs constantly grow larger and more complex, up to the point where they cannot be handled efficiently by current signal and quantization noise modelling and word-length optimization methodologies. In this Ph.D. Thesis we explore different aspects of the quantization problem and we present new methodologies for each of them. Techniques based on extensions of intervals have made it possible to obtain accurate models of signal and quantization noise propagation in systems with non-linear operations. We take this approach a step further by introducing elements of Multi-Element Generalized Polynomial Chaos (ME-gPC) and combining them with a state-of-the-art statistical Modified Affine Arithmetic (MAA) based methodology in order to model systems that contain control-flow structures. Our methodology produces the different execution paths automatically, determines the regions of the input domain that will exercise them, and extracts the system's statistical moments from the partial results. We use this technique to estimate both the dynamic range and the round-off noise in systems with the aforementioned control-flow structures, and we show the good accuracy of our approach, which in some case studies with non-linear operators deviates by only 0.04% from the simulation-based reference values. A known drawback of the techniques based on extensions of intervals is the combinatorial explosion of terms as the size of the targeted systems grows, which leads to scalability problems. To address this issue we present a clustered noise injection technique that groups the signals in the system, introduces the noise sources for each group independently, and then combines the results. In this way, the number of noise sources in the system at any given time is controlled and, because of this, the combinatorial explosion is minimized. We also present a multi-way partitioning algorithm aimed at minimizing the deviation of the results due to the loss of correlation between noise terms, in order to keep the results as accurate as possible. This Ph.D. Thesis also covers the development of methodologies for word-length optimization based on Monte-Carlo simulations that run in reasonable times. We do so by presenting two novel techniques that approach the reduction of execution time from two different angles. First, the interpolative method applies a simple but precise interpolator to estimate the sensitivity of each signal, which is later used to guide the optimization effort. Second, the incremental method revolves around the fact that, although we strictly need to guarantee a given confidence level for the final results of the optimization process, we can use more relaxed confidence levels, which imply a considerably smaller number of samples per simulation, in the initial stages of the search, when we are still far from the optimized solution. Through these two approaches we demonstrate that the execution time of classical greedy search techniques can be accelerated by factors of up to ×240 for small/medium-sized problems. Finally, this book introduces HOPLITE, an automated, flexible and modular framework for quantization that includes the implementation of the previous techniques and is publicly available. The aim is to offer developers and researchers a common ground for easily prototyping and verifying new techniques for system modelling and word-length optimization. We describe its workflow, justify the design decisions taken, explain its public API and give a step-by-step demonstration of its execution. We also show, through a simple example, how new extensions should be connected to the existing interfaces in order to expand and improve the capabilities of HOPLITE.
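The word-length optimization techniques above all revolve around evaluating the quantization noise of a candidate word-length assignment. As a hedged, self-contained sketch (a toy datapath and illustrative word-lengths, not HOPLITE and not the greedy/interpolative/incremental search itself), the following estimates the output round-off noise of a fixed-point datapath by Monte-Carlo simulation.

# Sketch: Monte-Carlo round-off noise estimation for a toy fixed-point datapath.
import numpy as np

def quantize(x, frac_bits):
    """Round x to a fixed-point grid with `frac_bits` fractional bits."""
    scale = 2.0 ** frac_bits
    return np.round(x * scale) / scale

def datapath(x, y, frac_bits):
    """Toy non-linear datapath in which every intermediate is quantized."""
    a = quantize(x * y, frac_bits[0])
    b = quantize(a + x, frac_bits[1])
    return quantize(b * b, frac_bits[2])          # non-linear operation

def mc_noise_power(frac_bits, n_samples=100_000, rng=None):
    """Monte-Carlo estimate of the output round-off noise power."""
    rng = rng or np.random.default_rng(0)
    x = rng.uniform(-1, 1, n_samples)
    y = rng.uniform(-1, 1, n_samples)
    ref = (x * y + x) ** 2                        # double-precision reference
    err = datapath(x, y, frac_bits) - ref
    return float(np.mean(err ** 2))

if __name__ == "__main__":
    for bits in [(6, 6, 6), (8, 8, 8), (12, 10, 8)]:
        print(bits, mc_noise_power(bits))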
Abstract:
For non-negative random variables with finite means we introduce an analogue of the equilibrium residual-lifetime distribution based on the quantile function. This allows us to construct new distributions with support (0, 1), and to obtain a new quantile-based version of the probabilistic generalization of Taylor's theorem. Similarly, for pairs of stochastically ordered random variables we come to a new quantile-based form of the probabilistic mean value theorem. The latter involves a distribution that generalizes the Lorenz curve. We investigate the special case of proportional quantile functions and apply the given results to various models based on classes of distributions and measures of risk theory. Motivated by some stochastic comparisons, we also introduce the “expected reversed proportional shortfall order”, and a new characterization of random lifetimes involving the reversed hazard rate function.
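For orientation, the classical (non-quantile) objects that this construction parallels are the equilibrium residual-lifetime density and the quantile function; the paper's own quantile-based definition is not reproduced here.

% Classical equilibrium residual-lifetime density for a non-negative random
% variable $X$ with distribution function $F$, survival function $\bar F$ and
% finite mean $\mu = E[X]$, together with the quantile function.
\[
  f_{e}(x) \;=\; \frac{\bar F(x)}{\mu}, \quad x \ge 0,
  \qquad
  Q(u) \;=\; \inf\{x : F(x) \ge u\}, \quad u \in (0,1).
\]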
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-04
Abstract:
For all odd integers n greater than or equal to 1, let G(n) denote the complete graph of order n, and for all even integers n greater than or equal to 2, let G(n) denote the complete graph of order n with the edges of a 1-factor removed. It is shown that for all non-negative integers h and t and all positive integers n, G(n) can be decomposed into h Hamilton cycles and t triangles if and only if nh + 3t is the number of edges in G(n). (C) 2004 Wiley Periodicals, Inc.
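A hedged sketch of the counting condition only: for a given n it lists the non-negative pairs (h, t) satisfying nh + 3t = |E(G(n))|, where the edge count is n(n-1)/2 for odd n and n(n-1)/2 - n/2 for even n. It checks only this arithmetic condition and does not construct the Hamilton cycle / triangle decompositions themselves.

# Sketch: enumerate (h, t) pairs satisfying the edge-count condition.
def edges_of_G(n):
    if n % 2 == 1:                    # odd: complete graph K_n
        return n * (n - 1) // 2
    return n * (n - 1) // 2 - n // 2  # even: K_n minus a 1-factor

def feasible_pairs(n):
    e = edges_of_G(n)
    return [(h, (e - n * h) // 3)
            for h in range(e // n + 1)
            if (e - n * h) % 3 == 0]

if __name__ == "__main__":
    for n in (7, 8, 9):
        print(n, edges_of_G(n), feasible_pairs(n))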
Abstract:
Objective To give an account of the views held by Australian veterinarians who work with horses on the future of their professional field. Method Questionnaires were mailed to 866 veterinarians who had been identified as working with horses, and 87% were completed and returned. Data were entered onto an Excel spreadsheet, and analysed using the SAS System for Windows. Results Their future prospects were believed to be very good or excellent by >60% of equine veterinarians but by only 30% of mixed practitioners seeing < 10% horses. The main factors believed likely to affect these prospects were the strength of the equine industries and the economic climate affecting horse owners, followed by the encroachment of cities into areas used for horses, competition from other veterinarians including specialist centres and from non-veterinary operators, and their ability to recruit and retain veterinarians with interest, experience and skill with horses. Urban encroachment, competition and recruitment were especially important for those seeing few horses. Concerns were also expressed about the competence and ethical behaviour of other veterinarians, the physical demands and dangers of horse work, the costs of providing equine veterinary services and of being paid for them, the regulatory restrictions imposed by governments and statutory bodies, the potential effects of litigation, and insurance issues. For many veterinarians in mixed practice these factors have reduced and are likely to reduce further the number of horses seen, to the extent that they have scant optimism about the future of horse work in their practices. Conclusion Economic and local factors will result in an increasing proportion of equine veterinary work being done in specialised equine centres, and the future of horse work in many mixed practices is, at best, precarious. A key factor influencing future prospects will be the availability of competent veterinarians committed to working with horses.