918 results for Bulk service queue with vacations to the server


Relevance: 100.00%

Abstract:

The objective of this study was to verify the compliance of the supermarket sector with the Good Practice Program standards of the city of Santa Maria (RS), Brazil. Sixty-nine establishments were assessed using a checklist of good practices for the supermarket sector in Santa Maria, RS (Brazil), from April to July 2011. The data were collected by a food safety and quality professional using this checklist. The results showed that the overall adequacy of the establishments surveyed was 29.07%. The highest compliance percentage was found for storage at ambient temperature (64.13%). Low compliance percentages were found in several other sections and areas of the supermarkets, such as bakery and confectionery (14.93%), water supply (18.30%), food handling (21.01%), sausage and cold meats (deli meats) (36.38%), and documentation-related items (4.97%). None of the supermarkets evaluated had the necessary documentation for the implementation of good practices. The results of this study show the importance of effectively implementing a good practice program and quality systems by raising awareness among technicians and professionals of the importance of quality programs used in food companies, and the need for more thorough inspection by competent authorities to ensure food safety for consumers.
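As an illustration of how per-section and overall percentages like those above are obtained, here is a minimal sketch that scores a checklist; the section names and item counts in the usage example are hypothetical, not the actual Santa Maria instrument:

```python
def compliance_percentages(checklist):
    """Per-section and overall compliance from checklist results.

    `checklist` maps a section name to a list of booleans, one per
    checklist item (True = item in compliance)."""
    per_section = {
        section: 100.0 * sum(items) / len(items)
        for section, items in checklist.items()
    }
    all_items = [item for items in checklist.values() for item in items]
    overall = 100.0 * sum(all_items) / len(all_items)
    return per_section, overall
```

For example, `compliance_percentages({"storage_ambient": [True, True, False], "documentation": [False, False, False, True]})` returns roughly 66.7% and 25.0% for the two sections, and the overall figure pooled over all seven items.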

Relevance: 100.00%

Abstract:

The grades seven and eight physical education program of the Norfolk Board of Education was evaluated with respect to fitness-level improvement, an objective of the Ministry of Education for the province of Ontario. The Canada Fitness Award battery of fitness tests was used to measure fitness levels. It was established that in September the students were unfit, and in May they were fit. This indicated that the Norfolk physical education program was effective with respect to the criterion used for this research. In addition, it was discovered that fitness-level improvement was significantly related to certain variables: teacher qualifications, teaching experience, school, and participation in extracurricular physical activity. Considering the results of the research, it was recommended that the Norfolk Board of Education hire young, qualified physical education teachers; create the position of Physical Education Consultant; and strive to create equitable resources for physical education instruction, so that the school a student attends is no longer a determinant of fitness improvement.

Relevance: 100.00%

Abstract:

Compositional random vectors are fundamental tools in the Bayesian analysis of categorical data. Many of the issues that are discussed with reference to the statistical analysis of compositional data have a natural counterpart in the construction of a Bayesian statistical model for categorical data. This note builds on the idea of cross-fertilization of the two areas recommended by Aitchison (1986) in his seminal book on compositional data. Particular emphasis is put on the problem of what parameterization to use.
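One concrete answer to the parameterization question is Aitchison's additive log-ratio (alr) transform, which maps a D-part composition (positive entries summing to 1) to an unconstrained vector in R^(D-1). A minimal sketch, using the last component as the reference part:

```python
import math

def alr(composition):
    """Additive log-ratio transform (Aitchison 1986): map a D-part
    composition to R^(D-1), with the last component as reference."""
    ref = composition[-1]
    return [math.log(x / ref) for x in composition[:-1]]

def alr_inverse(y):
    """Map a point of R^(D-1) back onto the simplex."""
    expanded = [math.exp(v) for v in y] + [1.0]
    total = sum(expanded)
    return [v / total for v in expanded]
```

The transform is a bijection, so a Bayesian model can be specified on the unconstrained alr coordinates and mapped back to the simplex.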

Relevance: 100.00%

Abstract:

We propose a Nyström/product integration method for a class of second-kind integral equations on the real line which arise in problems of two-dimensional scalar and elastic wave scattering by unbounded surfaces. Stability and convergence of the method are established, with convergence rates dependent on the smoothness of components of the kernel. The method is applied to the problem of acoustic scattering by a sound-soft one-dimensional surface which is the graph of a function f, and superalgebraic convergence is established in the case when f is infinitely smooth. Numerical results are presented illustrating this behavior for the case when f is periodic (the diffraction grating case). The Nyström method for this problem is stable and convergent uniformly with respect to the period of the grating, in contrast to standard integral equation methods for diffraction gratings, which fail at a countable set of grating periods.
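The product-integration scheme on the real line is beyond a short sketch, but the underlying Nyström idea can be illustrated on a second-kind Fredholm equation over a finite interval with a smooth kernel and trapezoidal quadrature; this is an illustrative simplification, not the authors' scheme:

```python
import numpy as np

def nystrom_solve(kernel, f, a, b, n):
    """Solve u(x) - int_a^b k(x,t) u(t) dt = f(x) by the Nystrom method:
    replace the integral by n-point composite trapezoidal quadrature and
    collocate at the quadrature nodes, giving the linear system
    (I - K W) u = f. Returns the nodes and the solution at the nodes."""
    x = np.linspace(a, b, n)
    h = (b - a) / (n - 1)
    w = np.full(n, h)
    w[0] = w[-1] = h / 2                      # trapezoidal weights
    K = kernel(x[:, None], x[None, :])        # matrix k(x_i, t_j)
    A = np.eye(n) - K * w[None, :]            # (I - K W)
    u = np.linalg.solve(A, f(x))
    return x, u
```

For the separable kernel k(x, t) = x t on [0, 1], the choice f(x) = 2x/3 has the exact solution u(x) = x, which the discrete solve reproduces to quadrature accuracy.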

Relevance: 100.00%

Abstract:

The synoptic evolution and some meteorological impacts of the European winter storm Kyrill, which swept across Western, Central, and Eastern Europe between 17 and 19 January 2007, are investigated. The intensity and large storm damage associated with Kyrill are explained based on synoptic and mesoscale environmental storm features, as well as on comparisons to previous storms. Kyrill appeared on weather maps over the US state of Arkansas about four days before it hit Europe. It underwent an explosive intensification over the western North Atlantic Ocean while crossing a very intense zonal polar jet stream. A superposition of several favourable meteorological conditions west of the British Isles caused a further deepening of the storm when it started to affect Western Europe. Evidence is provided that a favourable alignment of three polar jet streaks and a dry air intrusion over the occlusion and cold fronts were causal factors in maintaining Kyrill's low pressure very far into Eastern Europe. Kyrill, like many other strong European winter storms, was embedded in a pre-existing, anomalously wide, north-south mean sea-level pressure (MSLP) gradient field. In addition to the range of gusts that might be expected from the synoptic-scale pressure field, mesoscale features associated with convective overturning at the cold front are suggested as the likely causes of the extremely damaging peak gusts observed at many lowland stations during the passage of Kyrill's cold front. Compared to other storms, Kyrill was by no means the most intense system in terms of core pressure and circulation anomaly. However, the system moved into a pre-existing strong MSLP gradient located over Central Europe which extended into Eastern Europe. This fact is considered determinant for the anomalously large area affected by Kyrill.
Additionally, considerations of windiness in climate change simulations using two state-of-the-art regional climate models driven by ECHAM5 indicate that not only Central, but also Eastern Central Europe may be affected by higher surface wind speeds at the end of the 21st century. These changes are partially associated with the increased pressure gradient over Europe which is identified in the ECHAM5 simulations. Thus, with respect to the area affected, as well as to the synoptic and mesoscale storm features, it is proposed that Kyrill may serve as an interesting study case to assess future storm impacts.

Relevance: 100.00%

Abstract:

Access to fluoridated water is a known protective factor against dental caries. In 1974, fluoridation of the public water supply became mandatory by law in Brazil, resulting in improved coverage, especially in the more developed regions of the country. Coverage increased across the country as a priority under the national oral health policy. This article systematizes information on the implementation and expansion of fluoridation in Sao Paulo State from 1956 to 2009, using secondary data from technical reports, official documents, and the Information System for Surveillance of Water Quality for Human Consumption (SISAGUA). In 2009, fluoridation covered 546 of 645 counties in Sao Paulo State (84.7%), reaching 85.1% of the total population and 93.5% of the population with access to the public water supply. The results indicate that fluoridation has been consolidated as part of State health policy. However, the challenge remains to implement and maintain fluoridation in the other 99 counties, benefiting 6.2 million inhabitants who are still excluded from this service.
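The reported coverage figures follow directly from the county counts; a minimal check (the function name is illustrative):

```python
def coverage_percent(covered, total, decimals=1):
    """Percentage of counties whose public water supply is fluoridated."""
    return round(100.0 * covered / total, decimals)

# Figures reported for Sao Paulo State in 2009:
counties_covered, counties_total = 546, 645
```

546 of 645 counties gives 84.7% coverage and leaves exactly the 99 counties cited as still unserved.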

Relevance: 100.00%

Abstract:

A measurement of charged-particle distributions sensitive to the properties of the underlying event is presented for an inclusive sample of events containing a Z-boson decaying to an electron or muon pair. The measurement is based on data collected using the ATLAS detector at the LHC in proton–proton collisions at a centre-of-mass energy of 7 TeV with an integrated luminosity of 4.6 fb−1. Distributions of the charged-particle multiplicity and of the charged-particle transverse momentum are measured in regions of azimuthal angle defined with respect to the Z-boson direction. The measured distributions are compared to similar distributions measured in jet events, and to the predictions of various Monte Carlo generators implementing different underlying-event models.
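Underlying-event analyses conventionally split the azimuthal plane relative to the leading object (here the Z-boson direction) into "toward" (|Δφ| < π/3), "away" (|Δφ| > 2π/3), and "transverse" regions. A sketch of that classification, assuming this standard convention rather than the paper's exact region definitions:

```python
import math

def ue_region(phi_particle, phi_z):
    """Classify a charged particle into the conventional underlying-event
    regions by its azimuthal separation from the Z-boson direction:
    |dphi| < pi/3 -> 'toward', |dphi| > 2*pi/3 -> 'away',
    otherwise 'transverse'. Angles in radians."""
    dphi = abs(phi_particle - phi_z) % (2 * math.pi)
    if dphi > math.pi:                 # fold into [0, pi]
        dphi = 2 * math.pi - dphi
    if dphi < math.pi / 3:
        return "toward"
    if dphi > 2 * math.pi / 3:
        return "away"
    return "transverse"
```

The transverse region is the most sensitive to underlying-event activity because it is least contaminated by the recoil of the hard process.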

Relevance: 100.00%

Abstract:

The rise of the "Internet of Things" (IoT) and its associated technologies has enabled applications in many domains, including the monitoring of forest ecosystems, disaster and emergency management, home automation, industrial automation, smart-city services, building energy efficiency, intrusion detection, and the monitoring of body signals, among many others. The drawback of an IoT network is that, once deployed, it is left unattended: it is subject, among other things, to changing weather conditions and exposed to natural disasters, software or hardware failures, and malicious third-party attacks, so such networks can be considered failure-prone. The main requirement on the nodes of an IoT network is that they must be able to keep operating despite errors in the system itself. The ability of the network to recover from unexpected internal and external failures is what is currently known as network "resilience". Therefore, when designing and deploying IoT applications or services, the network is expected to be fault-tolerant, self-configuring, self-adaptive, and self-optimizing with respect to new conditions that may arise during operation. This leads to the analysis of a fundamental problem in the study of IoT networks, the "connectivity" problem. A network is said to be connected if every pair of nodes in the network can find at least one communication path between them. However, the network can become disconnected for several reasons, such as battery depletion, the destruction of a node, etc. 
It is therefore necessary to manage the resilience of the network in order to maintain connectivity between its nodes, so that each IoT node can provide continuous services to other nodes, other networks, or other services and applications. In this context, the main objective of this doctoral thesis is the study of the IoT connectivity problem, and more specifically the development of models for the analysis and management of resilience, put into practice through wireless sensor networks (WSNs), with the aim of improving the fault tolerance of the nodes that make up the network. This challenge is addressed from two distinct angles. On the one hand, unlike other conventional device networks, the nodes in an IoT network are prone to losing connectivity, because they are deployed in isolated environments or under extreme conditions; on the other hand, the nodes are typically resource-constrained in terms of processing, storage, and battery, among others, which requires the design of their resilience management to be lightweight, distributed, and energy-efficient. In this regard, this thesis develops self-adaptive techniques that allow an IoT network, from the perspective of topology control, to be resilient to node failures. To this end, techniques based on fuzzy logic and on proportional-integral-derivative (PID) control are used to improve network connectivity, taking into account that energy consumption must be preserved as much as possible. Likewise, the control algorithm must be distributed, because centralized approaches are generally not feasible for large-scale deployments. 
This thesis involves several challenges concerning network connectivity, including: the creation and analysis of mathematical models describing the network, a proposal for a self-adaptive control system that responds to node failures, the optimization of the control-system parameters, validation through an implementation following a software-engineering approach, and finally evaluation in a real application. Addressing these challenges, this work justifies, through mathematical analysis, the relationship between the "node degree" (defined as the number of nodes in a node's neighborhood) and network connectivity, and proves the effectiveness of several types of controllers that adjust the transmission power of the network nodes in response to failures, taking energy consumption into account as part of the control objectives. The work also evaluates the approach and compares it with other representative algorithms, showing that it tolerates more random node failures and is more energy-efficient. Additionally, the use of bio-inspired algorithms has enabled the optimization of the control parameters of large dynamic networks. Regarding the implementation in a real system, the proposals of this thesis have been integrated into an OSGi ("Open Services Gateway Initiative") programming model in order to create a self-adaptive middleware that improves resilience management, especially the runtime reconfiguration of software components when a failure has occurred. In conclusion, the results of this doctoral thesis contribute to theoretical research and to the practical application of resilient topology control in large distributed networks. 
The designs and algorithms presented can be seen as novel trials of techniques for the coming era of the IoT. The main contributions of this thesis can be summarized as follows: (1) Properties related to network connectivity have been analyzed mathematically. For example, it is studied how the probability that the network is connected varies with the communication range of the nodes, and what is the minimum number of nodes that must be added to a disconnected system to reconnect it. (2) Fuzzy-logic control systems have been proposed to reach the desired node degree while maintaining full network connectivity. Different types of fuzzy-logic controllers have been evaluated through simulation, and the results compared with other representative algorithms. (3) The two-loop control system, a simpler and more applicable approach, has been investigated further, and its control parameters have been optimized using heuristic algorithms such as the Cross-Entropy method (CE), Particle Swarm Optimization (PSO), and Differential Evolution (DE). (4) Most of the designs presented here have been evaluated through simulation; in addition, part of the work has been implemented and validated in a real application combining self-adaptive software techniques, such as those of a Service-Oriented Architecture (SOA). ABSTRACT The advent of the Internet of Things (IoT) enables a tremendous number of applications, such as forest monitoring, disaster management, home automation, factory automation, smart city, etc. However, various kinds of unexpected disturbances may cause node failure in the IoT, for example battery depletion, software/hardware malfunction issues and malicious attacks.
So, it can be considered that the IoT is prone to failure. The ability of the network to recover from unexpected internal and external failures is known as "resilience" of the network. Resilience usually serves as an important non-functional requirement when designing IoT, which can further be broken down into "self-*" properties, such as self-adaptive, self-healing, self-configuring, self-optimization, etc. One of the consequences that node failure brings to the IoT is that some nodes may be disconnected from others, such that they are not capable of providing continuous services for other nodes, networks, and applications. In this sense, the main objective of this dissertation focuses on the IoT connectivity problem. A network is regarded as connected if any pair of different nodes can communicate with each other either directly or via a limited number of intermediate nodes. More specifically, this thesis focuses on the development of models for analysis and management of resilience, implemented through the Wireless Sensor Networks (WSNs), which is a challenging task. On the one hand, unlike other conventional network devices, nodes in the IoT are more likely to be disconnected from each other due to their deployment in a hostile or isolated environment. On the other hand, nodes are resource-constrained in terms of limited processing capability, storage and battery capacity, which requires that the design of the resilience management for IoT has to be lightweight, distributed and energy-efficient. In this context, the thesis presents self-adaptive techniques for IoT, with the aim of making the IoT resilient against node failures from the network topology control point of view. The fuzzy-logic and proportional-integral-derivative (PID) control techniques are leveraged to improve the network connectivity of the IoT in response to node failures, meanwhile taking into consideration that energy consumption must be preserved as much as possible. 
The control algorithm itself is designed to be distributed, because centralized approaches are usually not feasible in large-scale IoT deployments. The thesis involves various aspects concerning network connectivity, including: creation and analysis of mathematical models describing the network, proposing self-adaptive control systems in response to node failures, control system parameter optimization, implementation using the software engineering approach, and evaluation in a real application. This thesis also justifies the relations between the "node degree" (the number of neighbors of a node) and network connectivity through mathematical analysis, and proves the effectiveness of various types of controllers that can adjust the transmission power of the IoT nodes in response to node failures. The controllers also take into consideration the energy consumption as part of the control goals. The evaluation is performed and comparison is made with other representative algorithms. The simulation results show that the proposals in this thesis can tolerate more random node failures and save more energy when compared with those representative algorithms. Additionally, the simulations demonstrate that the use of the bio-inspired algorithms allows optimizing the parameters of the controller. With respect to the implementation in a real system, the programming model called OSGi (Open Service Gateway Initiative) is integrated with the proposals in order to create a self-adaptive middleware, especially reconfiguring the software components at runtime when failures occur. The outcomes of this thesis contribute to theoretical research and practical applications of resilient topology control for large and distributed networks. The presented controller designs and optimization algorithms can be viewed as novel trials of the control and optimization techniques for the coming era of the IoT.
The contributions of this thesis can be summarized as follows: (1) Mathematically, the fault-tolerant probability of a large-scale stochastic network is analyzed. It is studied how the probability of network connectivity depends on the communication range of the nodes, and what is the minimum number of neighbors to be added for network re-connection. (2) A fuzzy-logic control system is proposed, which obtains the desired node degree and in turn maintains the network connectivity when it is subject to node failures. There are different types of fuzzy-logic controllers evaluated by simulations, and the results demonstrate the improvement of fault-tolerant capability as compared to some other representative algorithms. (3) A simpler but more applicable approach, the two-loop control system is further investigated, and its control parameters are optimized by using some heuristic algorithms such as Cross Entropy (CE), Particle Swarm Optimization (PSO), and Differential Evolution (DE). (4) Most of the designs are evaluated by means of simulations, but part of the proposals are implemented and tested in a real-world application by combining the self-adaptive software technique and the control algorithms which are presented in this thesis.
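The degree-tracking idea at the heart of the topology control can be sketched as a discrete PID loop that raises or lowers a node's transmission power until the observed neighbor count reaches a target degree. The class name, gains, and clamping bounds below are illustrative assumptions, not the thesis's tuned controller:

```python
class DegreePID:
    """Discrete PID controller that nudges a node's transmission power
    so its observed node degree (neighbor count) tracks a target degree.
    Gains and power bounds are illustrative, not tuned values."""

    def __init__(self, target_degree, kp=0.5, ki=0.1, kd=0.05,
                 p_min=0.0, p_max=10.0):
        self.target = target_degree
        self.kp, self.ki, self.kd = kp, ki, kd
        self.p_min, self.p_max = p_min, p_max
        self.integral = 0.0
        self.prev_error = None

    def update(self, power, observed_degree):
        """Return the new transmission power after one control step."""
        error = self.target - observed_degree
        self.integral += error
        derivative = 0.0 if self.prev_error is None else error - self.prev_error
        self.prev_error = error
        power += self.kp * error + self.ki * self.integral + self.kd * derivative
        return min(self.p_max, max(self.p_min, power))   # keep power in bounds
```

A node with too few neighbors sees a positive error and increases its power (extending its communication range), while a node exceeding the target degree backs off, saving energy; running the loop on every node keeps the scheme distributed.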

Relevance: 100.00%

Abstract:

Included at end is another report with title: Special committee for promoting an enquiry by a Royal Commission or Select Committee of the House of Lords, with regard to the financial and general management and common organisation of medical charities in the metropolis. Interim report, June 1890. London : Charity Organisation Society, 1890. (15 p.).

Relevance: 100.00%

Abstract:

The European Union institutions represent a complex setting and a specific case of institutional translation. The European Central Bank (ECB) is a particular context as the documents translated belong to the field of economics and, thus, contain many specialised terms and neologisms that pose challenges to translators. This study aims to investigate the translation practices at the ECB, and to analyse their effects on the translated texts. In order to illustrate the way texts are translated at the ECB, the thesis will focus on metaphorical expressions and the conceptual metaphors by which they are sanctioned. Metaphor is often associated with literature and less with specialised texts. However, according to Lakoff and Johnson’s (1980) conceptual metaphor theory, our conceptual system is fundamentally metaphorical in nature and metaphors are pervasive elements of thought and speech. The corpus compiled comprises economic documents translated at the ECB, mainly from English into Romanian. Using corpus analysis, the most salient metaphorical expressions were identified in the source and target texts and explained with reference to the main conceptual metaphors. Translation strategies are discussed on the basis of a comparison of the source and target texts. The text-based analysis is complemented by questionnaires distributed to translators, which give insights into the institution’s translation practices. As translation is an institutional process, translators have to follow certain guidelines and practices; these are discussed with reference to translators’ agency. A gap was identified in the field of institutional translation. The translation process in the EU institutions has been insufficiently explored, especially regarding the new languages of the European Union. 
By combining the analysis of the institutional practices, the texts produced in the institution, and the translators' work (through the questionnaires distributed to translators), this thesis intends to make a contribution to institutional translation and metaphor translation, particularly regarding a new EU language, Romanian.
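Corpus analysis of metaphor retention can be sketched as a simple count over aligned source-target segments; the marker lists below are hypothetical stand-ins for a curated metaphor lexicon, and this is only an illustration of the workflow, not the thesis's actual method:

```python
def metaphor_retention(pairs, src_markers, tgt_markers):
    """For aligned (source, target) sentence pairs, count how often a
    source sentence containing a candidate metaphorical marker is matched
    by a target sentence containing a corresponding marker.

    Returns (flagged, retained): sentences flagged in the source, and
    how many of those kept a marker in the target."""
    flagged = retained = 0
    for src, tgt in pairs:
        if any(m in src.lower() for m in src_markers):
            flagged += 1
            if any(m in tgt.lower() for m in tgt_markers):
                retained += 1
    return flagged, retained
```

In practice the marker matching would be replaced by manual or procedure-based metaphor identification, but the retained/flagged ratio is the quantity a translation-strategy comparison rests on.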

Relevance: 100.00%

Abstract:

We study a climatologically important interaction of two of the main components of the geophysical system by adding an energy balance model for the averaged atmospheric temperature as a dynamic boundary condition to a diagnostic ocean model having an additional spatial dimension. This work gives deeper insight than previous papers in the literature, mainly with respect to the pioneering 1990 model by Watts and Morantine. We take into consideration the latent heat for the two-phase ocean as well as a possible delayed term. Non-uniqueness for the initial boundary value problem, uniqueness under a non-degeneracy condition, and the existence of multiple stationary solutions are proved here. These multiplicity results suggest that an S-shaped bifurcation diagram should be expected to occur in this class of models generalizing previous energy balance models. The numerical method applied to the model is based on a finite volume scheme with nonlinear weighted essentially non-oscillatory (WENO) reconstruction and Runge–Kutta total variation diminishing (TVD) time integration.
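The time integrator named at the end of the abstract can be illustrated with the second-order strong-stability-preserving (TVD) Runge-Kutta scheme of Shu and Osher, which is commonly paired with WENO-type reconstructions; this is a generic sketch of that scheme, not necessarily the paper's exact variant:

```python
def ssp_rk2_step(u, dt, rhs):
    """One step of the second-order TVD (strong-stability-preserving)
    Runge-Kutta scheme of Shu & Osher:
        u1    = u + dt * L(u)
        u_new = 0.5 * u + 0.5 * (u1 + dt * L(u1))
    `u` is a list of cell averages; `rhs` evaluates the semi-discrete
    operator L(u), e.g. a WENO-reconstructed finite-volume flux balance."""
    u1 = [ui + dt * li for ui, li in zip(u, rhs(u))]
    l1 = rhs(u1)
    return [0.5 * ui + 0.5 * (v + dt * li) for ui, v, li in zip(u, u1, l1)]
```

Because the update is a convex combination of forward-Euler steps, it inherits the total-variation-diminishing property of the Euler step, which is what prevents spurious oscillations near the model's sharp transitions.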

Relevance: 100.00%

Abstract:

Background: Little is known about the metabolic response and the nutritional requirements of infected newborns. Moreover, the nutritional needs, and particularly the energy metabolism, of newborns with sepsis are a controversial matter. In this investigation we aimed to evaluate the resting energy expenditure (REE) of newborns with bacterial sepsis during the acute and recovery phases. Methods: We studied nineteen neonates (27.3 +/- 17.2 days old) with bacterial sepsis during the acute and recovery phases of their illness. REE was determined by indirect calorimetry, with VO(2) and VCO(2) measured by gas chromatography. Results: REE significantly increased from 49.4 +/- 13.1 kcal/kg/day during the acute phase to 68.3 +/- 10.9 kcal/kg/day during the recovery phase of sepsis (P < 0.01). Similarly, VO(2) (7.4 +/- 1.9 vs 10 +/- 1.5 ml/kg/min) and VCO(2) (5.1 +/- 1.7 vs 7.4 +/- 1.5 ml/kg/min) also increased during the course of the disease (P < 0.01). Conclusion: REE was increased during recovery compared to the acute phase of sepsis. The REE of septic newborns should be calculated on an individualized basis, bearing in mind their metabolic capabilities.
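The abstract does not state the calorimetry formula; the abbreviated Weir equation is the standard conversion from gas exchange to energy expenditure, and, assuming it was used, it approximately reproduces the reported group means:

```python
def weir_ree(vo2_ml_kg_min, vco2_ml_kg_min):
    """Resting energy expenditure (kcal/kg/day) via the abbreviated Weir
    equation, REE = (3.94*VO2 + 1.11*VCO2) * 1440, with gas exchange
    given in ml/kg/min and converted to L/kg/min."""
    vo2 = vo2_ml_kg_min / 1000.0
    vco2 = vco2_ml_kg_min / 1000.0
    return (3.94 * vo2 + 1.11 * vco2) * 1440.0
```

With the acute-phase means (VO2 7.4, VCO2 5.1 ml/kg/min) this gives about 50 kcal/kg/day, close to the reported 49.4, and with the recovery-phase means (10 and 7.4) about 69 kcal/kg/day, close to the reported 68.3.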

Relevance: 100.00%

Abstract:

The Cluster Variation Method (CVM), introduced over 50 years ago by Prof. Dr. Ryoichi Kikuchi, is applied to the thermodynamic modeling of the BCC Cr-Fe system in the irregular tetrahedron approximation, using experimental thermochemical data as initial input for assessing the model parameters. The results are checked against independent data on the low-temperature miscibility gap, using increasingly accurate thermodynamic models, first by the inclusion of the magnetic degrees of freedom of iron and then also by the inclusion of the magnetic degrees of freedom of chromium. It is shown that a reasonably accurate description of the phase diagram on the iron-rich side (i.e. the miscibility gap borders and the Curie line) is obtained, but only at the expense of the agreement with the above-mentioned thermochemical data. Reasons for these inconsistencies are discussed, especially with regard to the need to introduce vibrational degrees of freedom into the CVM model. (C) 2008 Elsevier Ltd. All rights reserved.
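A full tetrahedron-approximation CVM calculation is too long to sketch here, but the simplest member of the same hierarchy, the point (Bragg-Williams) approximation, already produces the miscibility-gap physics being fitted; the interaction parameter below is illustrative, not a fitted Cr-Fe value:

```python
import math

def mixing_free_energy(x, omega, t, r=8.314):
    """Regular-solution (Bragg-Williams / point-approximation) molar
    Gibbs energy of mixing for a binary A-B alloy:
        G_mix = omega*x*(1-x) + R*T*(x*ln x + (1-x)*ln(1-x))
    x: mole fraction of B, omega: interaction parameter (J/mol),
    t: temperature (K)."""
    return (omega * x * (1 - x)
            + r * t * (x * math.log(x) + (1 - x) * math.log(1 - x)))

def critical_temperature(omega, r=8.314):
    """Top of the symmetric miscibility gap, from d2G/dx2 = 0 at
    x = 1/2:  T_c = omega / (2R)."""
    return omega / (2 * r)
```

Below T_c the free-energy curve develops a double-well shape, so a 50/50 alloy lowers its energy by separating into an A-rich and a B-rich phase; the CVM refines exactly this picture by treating short-range order within tetrahedron clusters.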

Relevance: 100.00%

Abstract:

For the improvement of genetic material suitable for on-farm use under low-input conditions, participatory and formal plant breeding strategies are frequently presented as competing options. A common frame of reference to phrase mechanisms and purposes related to breeding strategies will facilitate clearer descriptions of similarities and differences between participatory plant breeding and formal plant breeding. In this paper an attempt is made to develop such a common framework by means of a statistically inspired language that acknowledges the importance of both on-farm trials and research centre trials as sources of information for on-farm genetic improvement. Key concepts are the genetic correlation between environments, and the heterogeneity of phenotypic and genetic variance over environments. Classic selection response theory is taken as the starting point for the comparison of selection trials (on-farm and research centre) with respect to the expected genetic improvement in a target environment (low-input farms). The variance-covariance parameters that form the input for selection response comparisons traditionally come from a mixed model fit to multi-environment trial data. In this paper we propose a recently developed class of mixed models, namely multiplicative mixed models, also called factor-analytic models, for modelling genetic variances and covariances (correlations). Multiplicative mixed models allow genetic variances and covariances to depend on quantitative descriptors of the environment, and confer a high flexibility in the choice of variance-covariance structure, without requiring the estimation of a prohibitively high number of parameters. As a result, detailed considerations regarding selection response comparisons are facilitated. The statistical machinery involved is illustrated on an example data set consisting of barley trials from the International Center for Agricultural Research in the Dry Areas (ICARDA).
Analysis of the example data showed that participatory plant breeding and formal plant breeding are better interpreted as providing complementary rather than competing information.
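The selection-response comparison underlying this framework can be made concrete with Falconer's classic formulas for direct and correlated response; the example values in the usage note are illustrative, not estimates from the ICARDA data:

```python
import math

def correlated_response(i, h2_trial, r_g, sg_target):
    """Expected correlated response in the target environment when
    selection is practised in a trial environment (Falconer):
        CR = i * h_trial * r_G * sigma_G(target)
    i: selection intensity, h2_trial: heritability in the trial
    environment, r_g: genetic correlation between environments,
    sg_target: genetic standard deviation in the target environment."""
    return i * math.sqrt(h2_trial) * r_g * sg_target

def indirect_vs_direct(h2_trial, h2_target, r_g):
    """Ratio of indirect (research-centre) to direct (on-farm) selection
    response for the same selection intensity: r_G * h_trial / h_target.
    Values below 1 favour on-farm selection, above 1 the trial site."""
    return r_g * math.sqrt(h2_trial) / math.sqrt(h2_target)
```

For instance, with heritability 0.64 on the research centre, 0.25 on low-input farms, and a genetic correlation of 0.5 between the two, the ratio is 0.8: the station's higher heritability does not fully compensate for the modest genetic correlation, which is exactly the kind of trade-off the multiplicative mixed model is used to quantify.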