899 results for Multi-way cluster
Abstract:
This study represents the first application of multi-way calibration by N-PLS and multi-way curve resolution by PARAFAC to 2D diffusion-edited H-1 NMR spectra. The aim of the analysis was to evaluate the potential for quantification of lipoprotein main- and subfractions in human plasma samples. Multi-way N-PLS calibrations relating the methyl and methylene peaks of lipoprotein lipids to concentrations of the four main lipoprotein fractions as well as 11 subfractions were developed with high correlations (R = 0.75-0.98). Furthermore, a PARAFAC model with four chemically meaningful components was calculated from the 2D diffusion-edited spectra of the methylene peak of lipids. Although the four extracted PARAFAC components represent molecules of sizes that correspond to the four main fractions of lipoproteins, the corresponding concentrations of the four PARAFAC components proved not to be correlated to the reference concentrations of these four fractions in the plasma samples as determined by ultracentrifugation. These results indicate that NMR provides complementary information on the classification of lipoprotein fractions compared to ultracentrifugation. (C) 2004 Elsevier B.V. All rights reserved.
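The PARAFAC (CP) decomposition used in the study above can be illustrated with a minimal alternating-least-squares fitter on a synthetic 3-way array. This is a generic sketch, not the authors' chemometric pipeline; the array sizes, the rank, and the random data standing in for (sample × chemical shift × diffusion) spectra are all arbitrary.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n matricization with C-order column layout."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def khatri_rao(A, B):
    """Column-wise Kronecker product, matching the unfold() layout."""
    return np.einsum('ir,jr->ijr', A, B).reshape(-1, A.shape[1])

def parafac_als(T, rank, n_iter=500, seed=0):
    """Fit a rank-R CP/PARAFAC model to a 3-way array by alternating least squares."""
    rng = np.random.default_rng(seed)
    F = [rng.standard_normal((s, rank)) for s in T.shape]
    for _ in range(n_iter):
        for mode in range(3):
            others = [F[m] for m in range(3) if m != mode]
            kr = khatri_rao(others[0], others[1])
            # least-squares update of one factor with the other two fixed
            F[mode] = np.linalg.lstsq(kr, unfold(T, mode).T, rcond=None)[0].T
    return F

# synthetic noiseless rank-2 tensor built from known factors
rng = np.random.default_rng(1)
A, B, C = (rng.standard_normal((n, 2)) for n in (10, 40, 8))
T = np.einsum('ir,jr,kr->ijk', A, B, C)
Ah, Bh, Ch = parafac_als(T, rank=2)
T_hat = np.einsum('ir,jr,kr->ijk', Ah, Bh, Ch)
rel_err = np.linalg.norm(T - T_hat) / np.linalg.norm(T)
```

On exact low-rank data the ALS iteration recovers the tensor essentially to machine precision; the recovered factors are identified only up to permutation and scaling, which is why the check is on the reconstruction rather than on the factors themselves.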
Abstract:
Several studies have analyzed discretionary accruals to address earnings-smoothing behaviors in the banking industry. We argue that the characteristic link between accruals and earnings may be nonlinear, since both the incentives to manipulate income and the practical way to do so depend partially on the relative size of earnings. Given a sample of 15,268 US banks over the period 1996–2011, the main results in this paper suggest that, depending on the size of earnings, bank managers tend to engage in earnings-decreasing strategies when earnings are negative (“big-bath”), use earnings-increasing strategies when earnings are positive, and use provisions as a smoothing device when earnings are positive and substantial (“cookie-jar” accounting). This evidence, which cannot be explained by the earnings-smoothing hypothesis, is consistent with the compensation theory. Neglecting nonlinear patterns in the econometric modeling of these accruals may lead to misleading conclusions regarding the characteristic strategies used in earnings management.
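The nonlinearity argument can be made concrete with a toy regression: simulated accruals follow a different slope in each earnings regime, and interacting earnings with regime dummies recovers the three slopes that a single linear model would average into one misleading figure. All coefficients, the 0.03 threshold, and the sample itself are illustrative, not estimates from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 5000
earnings = rng.normal(0.01, 0.03, n)   # hypothetical pre-managed earnings / assets

# Regimes motivated by the abstract: "big bath" below zero, income-increasing
# accruals when modestly positive, smoothing provisions when earnings are large.
neg = (earnings < 0).astype(float)
large = (earnings > 0.03).astype(float)
mid = 1.0 - neg - large
true_slopes = np.array([-0.50, 0.30, -0.40])   # illustrative regime slopes
accruals = (true_slopes[0] * neg * earnings
            + true_slopes[1] * mid * earnings
            + true_slopes[2] * large * earnings
            + rng.normal(0, 0.002, n))

# Interacting earnings with the regime dummies identifies each slope separately.
X = np.column_stack([neg * earnings, mid * earnings, large * earnings, np.ones(n)])
beta, *_ = np.linalg.lstsq(X, accruals, rcond=None)
```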
Abstract:
Background: Serious case reviews and research studies have indicated weaknesses in risk assessments conducted by child protection social workers. Social workers are adept at gathering information but struggle with analysis and assessment of risk. The Department for Education wants to know if the use of a structured decision-making tool can improve child protection assessments of risk.
Methods/design: This multi-site, cluster-randomised trial will assess the effectiveness of the Safeguarding Children Assessment and Analysis Framework (SAAF). This structured decision-making tool aims to improve social workers' assessments of harm, of future risk and parents' capacity to change. The comparison is management as usual.
Inclusion criteria: Children's Services Departments (CSDs) in England willing to make relevant teams available to be randomised, and willing to meet the trial's training and data collection requirements.
Exclusion criteria: CSDs where there were concerns about performance; where a major organisational restructuring was planned or under way; or where other risk assessment tools were in use.
Six CSDs are participating in this study. Social workers in the experimental arm will receive 2 days' training in SAAF together with a range of support materials, and access to limited telephone consultation post-training. The primary outcome is child maltreatment. This will be assessed using data collected nationally on two key performance indicators: the first is the number of children in a year who have been subject to a second Child Protection Plan (CPP); the second is the number of re-referrals of children because of related concerns about maltreatment. Secondary outcomes are: i) the quality of assessments judged against a schedule of quality criteria and ii) the relationship between the three assessments required by the structured decision-making tool (level of harm, risk of (re)abuse and prospects for successful intervention).
Discussion: This is the first study to examine the effectiveness of SAAF. It will contribute to a very limited literature on the contribution that structured decision-making tools can make to improving risk assessment and case planning in child protection and on what is involved in their effective implementation.
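The cluster-randomisation step can be sketched minimally: whole social-work teams, not individual workers, are allocated to SAAF or management as usual, so every worker in a team delivers the same condition. The team names, the number of teams per CSD, and the 50/50 split below are hypothetical; the protocol does not specify this procedure.

```python
import random

def randomise_clusters(teams, seed=2015):
    """Allocate whole teams (clusters) to the two trial arms at random."""
    rng = random.Random(seed)
    shuffled = list(teams)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return {"SAAF": sorted(shuffled[:half]),
            "management as usual": sorted(shuffled[half:])}

# hypothetical roster: 6 CSDs, 4 relevant teams each
teams = [f"CSD{d}-team{t}" for d in range(1, 7) for t in range(1, 5)]
arms = randomise_clusters(teams)
```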
Abstract:
While the IEEE 802.15.4/Zigbee protocol stack is being considered as a promising technology for low-cost low-power Wireless Sensor Networks (WSNs), several issues in the standard specifications are still open. One of those ambiguous issues is how to build a synchronized multi-hop cluster-tree network, which is quite suitable for ensuring QoS support in WSNs. In fact, the current IEEE 802.15.4/Zigbee specifications restrict the synchronization in the beacon-enabled mode (by the generation of periodic beacon frames) to star-based networks, while they support multi-hop networking using the peer-to-peer mesh topology, but with no synchronization. Even though both specifications mention the possible use of cluster-tree topologies, which combine multi-hop and synchronization features, the description of how to effectively construct such a network topology is missing. This paper tackles this problem, unveils the ambiguities regarding the use of the cluster-tree topology and proposes a synchronization mechanism based on Time Division Beacon Scheduling to construct cluster-tree WSNs. We also propose a methodology for efficient duty cycle management in each router (cluster-head) of a cluster-tree WSN that ensures the fairest use of bandwidth resources. The feasibility of the proposal is clearly demonstrated through an experimental testbed based on our own implementation of the IEEE 802.15.4/Zigbee protocol.
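The core idea of time-division beacon scheduling can be sketched as follows: every router in the cluster tree receives its own beacon slot inside the superframe, so parent and child beacons (and the active periods they announce) never collide. The breadth-first assignment and slot count below are illustrative, not the paper's actual TDBS algorithm.

```python
from collections import deque

def assign_beacon_slots(tree, root, n_slots):
    """Give each router a distinct beacon slot, breadth-first from the PAN coordinator."""
    slots, queue = {}, deque([root])
    while queue:
        node = queue.popleft()
        if len(slots) >= n_slots:
            raise ValueError("superframe has too few beacon slots for this tree")
        slots[node] = len(slots)          # next free slot in transmission order
        queue.extend(tree.get(node, ()))
    return slots

# toy cluster tree: PAN coordinator 0 parents routers 1-2, which parent routers 3-5
tree = {0: [1, 2], 1: [3, 4], 2: [5]}
slots = assign_beacon_slots(tree, root=0, n_slots=8)
```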
Abstract:
Using fixed-point arithmetic is one of the most common design choices for systems where area, power or throughput are heavily constrained. In order to produce implementations where the cost is minimized without negatively impacting the accuracy of the results, a careful assignment of word-lengths is required. The problem of finding the optimal combination of fixed-point word-lengths for a given system is a combinatorial NP-hard problem to which developers devote between 25 and 50% of the design-cycle time.
Reconfigurable hardware platforms such as FPGAs also benefit from the advantages of fixed-point arithmetic, as it compensates for the slower clock frequencies and less efficient area utilization of these platforms with respect to ASICs. As FPGAs become commonly used for scientific computation, designs constantly grow larger and more complex, up to the point where they cannot be handled efficiently by current signal and quantization noise modelling and word-length optimization methodologies. In this Ph.D. Thesis we explore different aspects of the quantization problem and present new methodologies for each of them. Techniques based on interval extensions have made it possible to obtain accurate models of signal and quantization noise propagation in systems with non-linear operations. We take this approach a step further by introducing elements of Multi-Element Generalized Polynomial Chaos (ME-gPC) and combining them with a state-of-the-art Statistical Modified Affine Arithmetic (MAA) methodology in order to model systems that contain control-flow structures. Our methodology produces the different execution paths automatically, determines the regions of the input domain that will exercise them, and extracts the system's statistical moments from the partial results. We use this technique to estimate both the dynamic range and the round-off noise in systems with the aforementioned control-flow structures, and we show the good accuracy of our approach, which in some case studies with non-linear operators deviates by only 0.04% with respect to the simulation-based reference values. A known drawback of techniques based on interval extensions is the combinatorial explosion of terms as the size of the targeted system grows, which leads to scalability problems.
To address this issue we present a clustered noise injection technique that groups the signals in the system, introduces the noise terms in each group independently, and then combines the results at the end. In this way, the number of noise sources in the system at any given time is controlled and the combinatorial explosion is minimized. We also present a multi-way partitioning algorithm aimed at minimizing the deviation of the results due to the loss of correlation between noise terms, in order to keep the results as accurate as possible. This Ph.D. Thesis also covers the development of methodologies for word-length optimization based on Monte-Carlo simulations that run in reasonable times. We do so by presenting two novel techniques that approach the reduction of execution time from different angles. First, the interpolative method applies a simple but precise interpolator to estimate the sensitivity of each signal, which is later used to guide the optimization effort. Second, the incremental method revolves around the fact that, although we strictly need to guarantee a certain confidence level in the simulations for the final results of the optimization process, we can use more relaxed levels, which imply considerably fewer samples, in the initial stages of the process, when we are still far from the optimized solution. Through these two approaches we demonstrate that the execution time of classical greedy techniques can be accelerated by factors of up to ×240 for small/medium-sized problems. Finally, this book introduces HOPLITE, an automated, flexible and modular framework for quantization that includes the implementation of the previous techniques and is provided for public access. The aim is to offer developers and researchers a common ground for easily prototyping and verifying new techniques for system modelling and word-length optimization.
We describe its workflow, justify the design decisions taken, explain its public API, and give a step-by-step demonstration of its execution. We also show, through an example, how new extensions should be connected to the existing interfaces in order to expand and improve the capabilities of HOPLITE.
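The classical greedy word-length search whose run time the thesis accelerates can be sketched on a toy datapath: start from generous word-lengths and keep trimming one fractional bit from whichever signal still meets the error budget. The datapath, its coefficients, and the budget below are all illustrative, not taken from the thesis.

```python
import numpy as np

def quantize(v, frac_bits):
    """Round a value onto a fixed-point grid with the given fractional bits."""
    step = 2.0 ** -frac_bits
    return np.round(v / step) * step

def output_error(frac_bits, x):
    """Worst-case output error of a toy datapath y = a*x + b*x^2
    when its two coefficients are quantized independently."""
    a = quantize(0.7321, frac_bits[0])
    b = quantize(0.2047, frac_bits[1])
    return np.max(np.abs((a * x + b * x**2) - (0.7321 * x + 0.2047 * x**2)))

x = np.linspace(-1, 1, 1001)
budget = 1e-3
bits = [16, 16]                      # generous starting word-lengths
improved = True
while improved:                      # greedy descent: trim one bit at a time
    improved = False
    for i in range(len(bits)):
        trial = list(bits)
        trial[i] -= 1
        if output_error(trial, x) <= budget:
            bits, improved = trial, True
```

The search stops exactly when no single further trim keeps the error within budget, which is the (possibly local) optimum that greedy word-length allocation delivers.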
Abstract:
Background: Anemia due to iron deficiency is recognized as one of the major nutritional deficiencies in women and children in developing countries. Daily iron supplementation for pregnant women is recommended in many countries although there are few reports of these programs working efficiently or effectively. Weekly iron-folic acid supplementation (WIFS) and regular deworming treatment is recommended for non-pregnant women living in areas with high rates of anemia. Following a baseline survey to assess the prevalence of anemia, iron deficiency and soil-transmitted helminth infections, we implemented a program to make WIFS and regular deworming treatment freely and universally available for all women of reproductive age in two districts of a province in northern Vietnam over a 12-month period. The impact of the program at the population level was assessed in terms of: i) change in mean hemoglobin and iron status indicators, and ii) change in the prevalence of anemia, iron deficiency and hookworm infections. Method: Distribution of WIFS and deworming were integrated with routine health services and made available to 52,000 women. Demographic data and blood and stool samples were collected in baseline, and three- and 12-month post-implementation surveys using a population-based, stratified multi-stage cluster sampling design. Results: The mean Hb increased by 9.6 g/L (95% CI, 5.7, 13.5, p < 0.001) during the study period. Anemia (Hb<120 g/L) was present in 131/349 (37.5%, 95% CI 31.3, 44.8) subjects at baseline, and in 70/363 (19.3%, 95% CI 14.0, 24.6) after twelve months. Iron deficiency reduced from 75/329 (22.8%, 95% CI 16.9, 28.6) to 33/353 (9.3%, 95% CI 5.7, 13.0) by the 12-month survey, and hookworm infection from 279/366 (76.2%, 95% CI 68.6, 83.8) to 66/287 (23.0%, 95% CI 17.5, 28.5) over the same period. Conclusion: A free, universal WIFS program with regular deworming was associated with reduced prevalence and severity of anemia, iron deficiency and hookworm infection.
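The prevalence figures above are point estimates with 95% confidence intervals; a minimal sketch of the unadjusted calculation is below. Note that the published intervals are wider than this sketch would give, because the survey's stratified multi-stage cluster design inflates the variance (a design effect the plain Wald formula here ignores).

```python
import math

def prevalence_ci(cases, n, z=1.96):
    """Point prevalence with an unadjusted Wald 95% confidence interval."""
    p = cases / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p, max(0.0, p - half), min(1.0, p + half)

# anemia (Hb < 120 g/L) at baseline: 131 of 349 subjects
p, lo, hi = prevalence_ci(131, 349)
```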
Abstract:
This work identifies the limitations of n-way data analysis techniques in multidimensional stream data, such as Internet chat room communications data, and establishes a link between data collection and performance of these techniques. Its contributions are twofold. First, it extends data analysis to multiple dimensions by constructing n-way data arrays known as high-order tensors. Chat room tensors are generated by a simulator which collects and models actual communication data. The accuracy of the model is determined by the Kolmogorov-Smirnov goodness-of-fit test which compares the simulation data with the observed (real) data. Second, a detailed computational comparison is performed to test several data analysis techniques including SVD [1] and multi-way techniques including Tucker1, Tucker3 [2], and Parafac [3].
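The first contribution, building an n-way array from a message stream, can be sketched minimally: axes are user × term × time-window and each cell counts occurrences. The axis choices and toy stream are hypothetical; the mode-1 unfolding at the end shows the flattened matrix a two-way method such as SVD would operate on, which is exactly where the term/time structure that Tucker and PARAFAC retain gets lost.

```python
import numpy as np

users, terms, n_windows = ["u1", "u2"], ["hi", "sale"], 2
stream = [("u1", "hi", 0), ("u1", "hi", 0), ("u2", "sale", 1), ("u1", "sale", 1)]

# third-order tensor of user x term x time-window occurrence counts
T = np.zeros((len(users), len(terms), n_windows))
for user, term, window in stream:
    T[users.index(user), terms.index(term), window] += 1

# mode-1 (user) unfolding: what a two-way technique sees instead of T itself
unfolded = T.reshape(len(users), -1)
```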
Abstract:
Recent natural disasters such as the 2010 Haiti earthquake, the 2011 South East Queensland floods, the 2011 Japanese earthquake and tsunami and Hurricane Sandy in the United States of America in 2012 have seen social media platforms changing the face of emergency management communications, not only in times of crisis but also during business-as-usual operations. With social media being such an important and powerful communication tool, especially for emergency management organisations, the question arises as to whether the use of social media in these organisations emerged by considered strategic design or more as a reactive response to a new and popular communication channel. This paper provides insight into how the social media function has been positioned, staffed and managed in organisations throughout the world, with a particular focus on how these factors influence the style of communication used on social media platforms. This study has identified that the social media function falls on a continuum between two polarised models, namely the authoritative one-way communication approach and the more interactive approach that seeks to network and engage with the community through multi-way communication. Factors such as the size of the organisation; dedicated resourcing of the social media function; organisational culture and mission statement; the presence of a social media champion within the organisation; management style and knowledge about social media play a key role in determining where on the continuum organisations sit in relation to their social media capability. This review, together with a forthcoming survey of Australian emergency management organisations and local governments, will fill a gap in the current body of knowledge about the evolution, positioning and usage of social media in organisations working in the emergency management field in Australia.
These findings will be fed back to industry for potential inclusion in future strategies and practices.
Abstract:
Sustainable urban development, a major issue at the global scale, will become more relevant given population growth predictions in developed and developing countries. Societal and international recognition of sustainability concerns led to the development of specific tools and procedures, known as sustainability assessments/appraisals (SA). Their effectiveness, however, considering that global quality-of-life indicators have worsened since their introduction, has prompted a re-thinking of SA instruments. More precisely, Strategic Environmental Assessment (SEA), a tool introduced in the European context to evaluate policies, plans, and programmes (PPPs), is being reconsidered because of several features that seem to limit its effectiveness. Over time, SEA has evolved in response to external and internal factors dealing with technical, procedural, planning and governance systems, involving a paradigm shift from EIA-based SEAs (first-generation protocols) towards more integrated approaches (second-generation ones). Changes affecting SEA are formalised through legislation in each Member State, to guide institutions at regional and local level. Defining SEA effectiveness is quite difficult: its capacity-building process appears far from conclusion, and no definitive version can yet be conceptualized. In this paper, we consider some European nations with different planning systems and SA traditions. After identifying some analytical criteria, a multi-dimensional cluster analysis is developed on several case studies to outline current weaknesses.
Abstract:
Ichthyoplankton samples were collected at approximately 2-week intervals, primarily during spring and summer 1999−2004, from two stations located 20 and 30 km from shore near the Columbia River, Oregon. Northern anchovy (Engraulis mordax) was the most abundant species collected, and was the primary species associated with summer upwelling conditions, but it showed significant interannual and seasonal fluctuations in abundance and occurrence. Other abundant taxa included sanddabs (Citharichthys spp.), English sole (Parophrys vetulus), and blacksmelts (Bathylagidae). Two-way cluster analysis revealed strong species associations based primarily on season (before or after the spring transition date). Ichthyoplankton abundances were compared to biological and environmental data, and egg and larval abundances were found to be most correlated with sea surface temperature. The Pacific Decadal Oscillation changed sign (from negative to positive) in late 2002 and indicated overall warmer conditions in the North Pacific Ocean. Climate change is expected to alter ocean upwelling, temperatures, and Columbia River flows, and consequently fish egg and larva distributions and survival. Long-term research is needed to identify how ichthyoplankton and fish recruitment are affected by regional and large-scale oceanographic processes.
Abstract:
In order to study the differentiation of Asian colobines, 14 variables measured on 123 skulls, including Rhinopithecus, Presbytis, Presbytiscus (Rhinopithecus avunculus), Pygathrix and Nasalis, were analyzed by one-way, cluster and discriminant function analyses. Information on paleoenvironmental changes in China and Southeast Asia since the late Tertiary was used to examine the influences of migratory routes and range of distribution in Asian colobines. A cladogram for 6 genera of Asian colobines was constructed from the results of various analyses. Some new points or revisions were suggested: (1) Following one of two migratory routes, ancient species of Asian colobines perhaps passed through Xizang (Tibet) along the northern bank of the Tethys sea and through the Heng Duan Shan regions of Yunnan into Vietnam. An ancient landmass linking Yunnan and Xizang was already present on the east bank of the Tethys sea. Accordingly, Asian colobines would have two centers of evolutionary origin: Sundaland and the Heng Duan Shan regions of China. (2) Pygathrix shares more cranial features with Presbytiscus than with Rhinopithecus. This differs somewhat from the conclusion reached by Groves. (3) Nasalis (karyotype: 2n = 48) may be the most primitive genus among Asian colobines. Certain features shared with Rhinopithecus, e.g. large body size, terrestrial activity and limb proportions, can be interpreted as symplesiomorphic characters. (4) Rhinopithecus, with respect to craniofacial features, is a special case among Asian colobines. It combines a high degree of evolutionary specialization with retention of some primitive features thought to have been present in the ancestral Asian colobine.
Abstract:
Network state information collection protocols must guarantee both the accuracy and timeliness of the collected information and the lightweight nature of the protocol algorithm. To resolve this conflict, a lightweight, energy-efficient, hierarchical clustering data collection mechanism based on lossless aggregation (QTBDC) is proposed. QTBDC first encodes the network nodes and builds a logical hierarchical cluster structure among them; it then exploits the similarity of the state data within each sub-cluster and the continuity of the node coding to achieve lossless in-network aggregation. This monitoring mechanism greatly reduces the volume of data communication while collecting network state information without losing any detail of the data. Simulation analysis shows that, compared with existing classical data collection methods, this approach saves energy and prolongs the network lifetime.
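The lossless in-network aggregation idea can be sketched simply: when consecutively numbered nodes in a sub-cluster report the same state value, the cluster head forwards (first_id, last_id, value) ranges instead of one record per node. The record format and names below are illustrative, not the paper's actual encoding.

```python
def aggregate(readings):
    """Compress (node_id, state) pairs, sorted by node_id, into
    (first_id, last_id, state) runs over consecutive identical states."""
    runs = []
    for node_id, state in readings:
        if runs and runs[-1][2] == state and runs[-1][1] == node_id - 1:
            runs[-1][1] = node_id            # extend the current run
        else:
            runs.append([node_id, node_id, state])
    return [tuple(r) for r in runs]

def expand(runs):
    """Inverse mapping: the aggregation is lossless by construction."""
    return [(i, state) for lo, hi, state in runs for i in range(lo, hi + 1)]

# toy sub-cluster report (node 5 is absent, breaking the id continuity)
readings = [(1, 'OK'), (2, 'OK'), (3, 'OK'), (4, 'LOW'), (6, 'OK'), (7, 'OK')]
packed = aggregate(readings)
```

Fewer records travel up the tree (three runs instead of six readings here), yet `expand(packed)` reproduces every individual reading, which is the "no loss of detail" property the abstract emphasises.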
Abstract:
Seismic wave-field numerical modeling and wave-equation-based seismic migration imaging have become useful and indeed indispensable tools for imaging complex geological objects. An important task in numerical modeling is handling the matrix exponential approximation in wave-field extrapolation. For a small matrix exponential, we can approximate the square-root operator in the exponential using different splitting algorithms. Splitting algorithms are usually applied to the order or the dimension of the one-way wave equation to reduce the complexity of the problem. In this paper, we obtain an approximate equation for 2-D Helmholtz operator inversion using a multi-way splitting operation. Analysis of the Gauss integral and the coefficients of the optimized partial fractions shows that dispersion may accumulate in splitting algorithms for steep-dip imaging. High-order symplectic Padé approximation may deal with this problem; however, approximating the square-root operator in the exponential with a splitting algorithm cannot solve the dispersion problem during one-way wave-field migration imaging. We try to implement an exact approximation through eigenfunction expansion of the matrix. The Fast Fourier Transform (FFT) method is selected because of its low computational cost. An 8th-order Laplace matrix splitting is performed to obtain an assemblage of small matrices using the FFT method. With the introduction of Lie group and symplectic methods into seismic wave-field extrapolation, accurate approximation of the matrix exponential based on Lie group and symplectic methods has become an active research field. To solve the matrix exponential approximation problem, the Second-kind Coordinates (SKC) method and the Generalized Polar Decompositions (GPD) method of Lie group theory are the methods of choice: the SKC method utilizes a generalized Strang-splitting algorithm, while the GPD method utilizes polar-type and symmetric polar-type splitting algorithms.
Compared with Padé approximation, these two methods require less computation while still preserving the Lie group structure. We consider the SKC and GPD methods promising and attractive for both research and practice.
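The trade-off behind splitting algorithms can be illustrated numerically: a symmetric (Strang-type) splitting of exp(t(A+B)) is cheap but carries an O(t³) local error that vanishes only when A and B commute, so halving the step shrinks the one-step error by roughly a factor of eight. The matrices below are arbitrary non-commuting examples, not wave-equation operators.

```python
import numpy as np

def expm_taylor(M, terms=40):
    """Dense matrix exponential by plain Taylor series (fine for small norms)."""
    out, term = np.eye(len(M)), np.eye(len(M))
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)) * 0.3
B = rng.standard_normal((4, 4)) * 0.3

def strang_error(t):
    """|| exp(t(A+B)) - exp(tA/2) exp(tB) exp(tA/2) || for one step of size t."""
    exact = expm_taylor(t * (A + B))
    split = expm_taylor(t * A / 2) @ expm_taylor(t * B) @ expm_taylor(t * A / 2)
    return np.linalg.norm(exact - split)

e_coarse, e_fine = strang_error(0.2), strang_error(0.1)   # halve the step
ratio = e_coarse / e_fine                                  # near 8 for O(t^3) error
```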
Abstract:
We present a thorough characterization of the access patterns in blogspace -- a fast-growing constituent of the content available through the Internet -- which comprises a rich interconnected web of blog postings and comments by an increasingly prominent user community that collectively defines what has become known as the blogosphere. Our characterization of over 35 million read, write, and administrative requests spanning a 28-day period is done from three different blogosphere perspectives. The server view characterizes the aggregate access patterns of all users to all blogs; the user view characterizes how individual users interact with blogosphere objects (blogs); the object view characterizes how individual blogs are accessed. Our findings support two important conclusions. First, we show that the nature of interactions between users and objects is fundamentally different in blogspace than that observed in traditional web content. Access to objects in blogspace can be conceived as part of an interaction between an author and their readership. As we show in our work, such interactions range from one-to-many "broadcast-type" and many-to-one "registration-type" communication between an author and their readers, to multi-way, iterative "parlor-type" dialogues among members of an interest group. This more interactive nature of the blogosphere leads to interesting traffic and communication patterns, which are different from those observed in traditional web content. Second, we identify and characterize novel features of the blogosphere workload, and we investigate the similarities and differences between typical web server workloads and blogosphere server workloads. Given the increasing share of blogspace traffic, understanding such differences is important for capacity planning and traffic engineering purposes, for example.
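The broadcast / registration / parlor taxonomy can be sketched crudely by counting, per blog, how many distinct users write to it versus read it. The log format (user, blog, action) and the thresholds below are hypothetical simplifications, not the paper's actual workload classifier.

```python
from collections import defaultdict

def classify_blogs(log):
    """Label each blog by the shape of its interaction pattern."""
    writers, readers = defaultdict(set), defaultdict(set)
    for user, blog, action in log:
        (writers if action == "write" else readers)[blog].add(user)
    styles = {}
    for blog in set(writers) | set(readers):
        w, r = len(writers[blog]), len(readers[blog])
        if w <= 1 and r > 1:
            styles[blog] = "broadcast"      # one author, many readers
        elif w > 1 and r <= 1:
            styles[blog] = "registration"   # many contributors, one consumer
        else:
            styles[blog] = "parlor"         # multi-way dialogue
    return styles

# toy request log: b1 is author-to-readers, b2 is a back-and-forth dialogue
log = [("a", "b1", "write"), ("u1", "b1", "read"), ("u2", "b1", "read"),
       ("a", "b2", "write"), ("u1", "b2", "write"),
       ("u1", "b2", "read"), ("u2", "b2", "read")]
styles = classify_blogs(log)
```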