783 results for Reliability allocation
Abstract:
Next-generation electronic devices have to guarantee high performance while being less power-consuming and highly reliable, across application domains ranging from entertainment to business. In this context, multicore platforms have proven to be the most efficient design choice, but new challenges have to be faced. The ever-increasing miniaturization of components produces unexpected variations in technological parameters and wear-out, characterized by soft and hard errors. Even though hardware techniques, which lend themselves to being applied at design time, have been studied with the objective of mitigating these effects, they are not sufficient; thus adaptive software techniques are necessary. In this thesis we focus on multicore task allocation strategies that minimize energy consumption while meeting performance constraints. We first devise a technique based on an Integer Linear Programming formulation that provides the optimal solution but cannot be applied online, since the algorithm it requires is too time-demanding; we then propose a sub-optimal two-step technique that can be applied online. We demonstrate the effectiveness of the latter solution through an exhaustive comparison against the optimal solution, state-of-the-art policies, and variability-agnostic task allocations, by running multimedia applications on the virtual prototype of a next-generation industrial multicore platform. We also address the problem of performance and lifetime degradation. We first focus on embedded multicore platforms and propose an idleness distribution policy that increases the expected lifetime of the cores by duty-cycling their activity; we then investigate the use of micro thermoelectric coolers in general-purpose multicore processors to control core temperature at runtime, with the objective of meeting lifetime constraints without performance loss.
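As a rough illustration of the kind of formulation this thesis describes, the sketch below encodes task-to-core allocation as an ILP that minimizes total energy under a per-core load bound. The task set, the energy and timing tables, and the deadline are hypothetical placeholders; the thesis's actual variability-aware model is considerably richer.

```python
# Minimal ILP sketch: assign each task to one core, minimizing total energy
# subject to a per-core load (performance) bound. Data is hypothetical.
from pulp import LpProblem, LpMinimize, LpVariable, LpBinary, lpSum

tasks = ["t0", "t1", "t2"]
cores = ["c0", "c1"]
energy = {("t0", "c0"): 3.0, ("t0", "c1"): 2.0,   # energy[task, core]
          ("t1", "c0"): 1.5, ("t1", "c1"): 2.5,
          ("t2", "c0"): 2.0, ("t2", "c1"): 1.0}
time_  = {("t0", "c0"): 2.0, ("t0", "c1"): 3.0,   # exec time[task, core]
          ("t1", "c0"): 1.0, ("t1", "c1"): 1.5,
          ("t2", "c0"): 2.5, ("t2", "c1"): 1.0}
deadline = 4.0                                    # per-core load bound

prob = LpProblem("task_allocation", LpMinimize)
x = LpVariable.dicts("x", [(t, c) for t in tasks for c in cores], cat=LpBinary)

prob += lpSum(energy[t, c] * x[t, c] for t in tasks for c in cores)
for t in tasks:                                   # each task on exactly one core
    prob += lpSum(x[t, c] for c in cores) == 1
for c in cores:                                   # per-core performance bound
    prob += lpSum(time_[t, c] * x[t, c] for t in tasks) <= deadline

prob.solve()
for t in tasks:
    for c in cores:
        if x[t, c].value() == 1:
            print(f"{t} -> {c}")
```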
Abstract:
A patient classification system was developed integrating a patient acuity instrument with a computerized nursing distribution method based on a linear programming model. The system was designed for real-time measurement of patient acuity (workload) and allocation of nursing personnel to optimize the utilization of resources. The acuity instrument was a prototype tool with eight categories of patients defined by patient severity and nursing intensity parameters. From this tool, the demand for nursing care was defined in patient points, with one point equal to one hour of RN time. Validity and reliability of the instrument were determined as follows: (1) content validity by a panel of expert nurses; (2) predictive validity through a paired t-test analysis of preshift and postshift categorization of patients; (3) initial reliability by a one-month pilot of the instrument in a practice setting; and (4) interrater reliability by the Kappa statistic. The nursing distribution system was a linear programming model using a branch-and-bound technique for obtaining integer solutions. The objective function was to minimize the total number of nursing personnel used by optimally assigning the staff to meet the acuity needs of the units. A penalty weight was used as a coefficient of the objective function variables to define priorities for allocation of staff. The demand constraints were requirements to meet the total acuity points needed for each unit and to have a minimum number of RNs on each unit. Supply constraints were: (1) the total availability of each type of staff and the value of that staff member, determined relative to that staff type's ability to perform the job function of an RN (e.g., value for eight hours: RN = 8 points, LVN = 6 points); and (2) the number of personnel available for floating between units. The capability of the model to assign staff quantitatively and qualitatively equal to the manual method was established by a thirty-day comparison. Sensitivity testing demonstrated appropriate adjustment of the optimal solution to changes in penalty coefficients in the objective function and to acuity totals in the demand constraints. Further investigation of the model documented correct adjustment of assignments in response to staff value changes, and cost minimization by the addition of a dollar coefficient to the objective function.
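The distribution model described above can be stated compactly as an integer program. The notation below is illustrative, not the author's: x_us is the number of staff of type s assigned to unit u, v_s the point value of staff type s (e.g., RN = 8, LVN = 6), a_u the acuity points demanded by unit u, r_u the minimum RN count, n_s the available staff of type s, and p_us the penalty weights.

```latex
\begin{align*}
\min\; & \sum_{u}\sum_{s} p_{us}\,x_{us} && \text{(penalty-weighted staffing)}\\
\text{s.t.}\; & \sum_{s} v_{s}\,x_{us} \ge a_{u} && \forall u \quad \text{(meet unit acuity points)}\\
& x_{u,\mathrm{RN}} \ge r_{u} && \forall u \quad \text{(minimum RNs per unit)}\\
& \sum_{u} x_{us} \le n_{s} && \forall s \quad \text{(staff availability)}\\
& x_{us} \in \mathbb{Z}_{\ge 0},
\end{align*}
```

solved by branch and bound to obtain integer assignments, as in the study.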
Abstract:
Due to the huge increase in digital data volumes in recent years, a new parallel computing paradigm has arisen to process big data efficiently. Many of these systems, called data-intensive computing systems, follow Google's MapReduce programming model. The main advantage of these systems is the idea of sending the computation to where the data resides, aiming to provide scalability and efficiency. In failure-free scenarios, these frameworks usually achieve good results; however, most of the scenarios in which they are deployed are characterized by failures. Consequently, these frameworks incorporate fault tolerance and dependability techniques as built-in features. On the other hand, dependability improvements are known to imply additional resource costs. This is reasonable, and providers offering these infrastructures are aware of it. Nevertheless, not all approaches provide the same trade-off between fault-tolerance capabilities (or, more generally, reliability capabilities) and cost. This thesis addresses the coexistence of reliability and resource efficiency in MapReduce-based systems, through methodologies that introduce minimal cost while guaranteeing an appropriate level of reliability. To achieve this, we have proposed: (i) a formalization of a failure detector abstraction; (ii) an alternative solution to the single points of failure of these frameworks; and (iii) a novel feedback-based resource allocation system at the container level. These generic contributions have been instantiated for the Hadoop YARN architecture, which is today the reference framework in the data-intensive computing systems community. The thesis demonstrates how all our approaches outperform Hadoop YARN in terms of both reliability and resource efficiency.
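As a flavor of the failure-detector abstraction mentioned in contribution (i), the sketch below implements a generic heartbeat-based detector; it only illustrates the general idea, not the formalization developed in the thesis, and the class name and timeout parameter are invented for the example.

```python
# Generic heartbeat-based failure detector sketch (illustrative only; the
# thesis formalizes its own abstraction, which this does not reproduce).
import time

class HeartbeatFailureDetector:
    def __init__(self, timeout_s: float = 5.0):
        self.timeout_s = timeout_s
        self.last_seen: dict[str, float] = {}  # node id -> last heartbeat time

    def heartbeat(self, node: str) -> None:
        """Record a heartbeat received from `node`."""
        self.last_seen[node] = time.monotonic()

    def suspected(self) -> set[str]:
        """Nodes whose last heartbeat is older than the timeout are suspected."""
        now = time.monotonic()
        return {n for n, t in self.last_seen.items()
                if now - t > self.timeout_s}

# Usage: a monitor would call heartbeat() on receipt of each message and
# periodically act on suspected() (e.g., reschedule work elsewhere).
fd = HeartbeatFailureDetector(timeout_s=2.0)
fd.heartbeat("worker-1")
print(fd.suspected())  # empty until the timeout elapses
```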
Abstract:
In this article, the results achieved by applying an electromagnetism-inspired (EM) metaheuristic to the uncapacitated multiple allocation hub location problem (UMAHLP) are discussed. An appropriate objective function that natively conforms to the problem, a 1-swap local search and a scaling technique lead to good overall performance. Computational tests demonstrate the reliability of this method, since the EM-inspired metaheuristic reaches all but one of the optimal/best-known solutions for the UMAHLP in reasonable time.
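The 1-swap local search mentioned above admits a compact sketch: close one open hub, open one closed node, and keep the exchange if the objective improves. The `cost` callable below is a placeholder for the UMAHLP objective (cheapest allocation of all flows through the open hubs plus fixed hub costs); the EM attraction-repulsion moves and the scaling technique from the article are not shown.

```python
# First-improvement 1-swap local search over hub sets. `cost` is a
# placeholder for the UMAHLP objective evaluated on an open-hub set.
def one_swap_local_search(hubs, nodes, cost):
    best, best_cost = set(hubs), cost(hubs)
    improved = True
    while improved:
        improved = False
        for h in sorted(best):              # try closing hub h ...
            for n in sorted(nodes - best):  # ... and opening non-hub n
                cand = (best - {h}) | {n}
                cand_cost = cost(cand)
                if cand_cost < best_cost:   # keep the first improving swap
                    best, best_cost = cand, cand_cost
                    improved = True
                    break
            if improved:
                break
    return best

# Usage: cost(hub_set) would evaluate the cheapest routing of every
# origin-destination flow through the open hubs (plus fixed hub costs).
```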
Abstract:
In deregulated power markets it is necessary to have an appropriate transmission pricing methodology that also takes congestion and reliability into account, in order to ensure economically viable, equitable and congestion-free power transfer capability with high reliability and security. This thesis presents the results of research on the development of a Decision Making Framework (DMF) of concepts, data-analytic and modelling methods for reliability-benefit-reflective optimal evaluation of transmission cost for composite power systems, using probabilistic methods. The methodology within the DMF uses a full AC Newton-Raphson load flow and a Monte Carlo approach to determine reliability indices, which are then used in the proposed Meta-Analytical Probabilistic Approach (MAPA) for the evaluation and calculation of the Reliability-benefit-Reflective Optimal Transmission Cost (ROTC) of a transmission system. The DMF includes methods for allocating transmission-line embedded cost among transmission transactions, accounting for line capacity use, as well as congestion costing that can be used for pricing, applying both the Power Transfer Distribution Factor (PTDF) method and Bialek's method. The MAPA uses bus data, generator data, line data, reliability data and Customer Damage Function (CDF) data for congestion, transmission and reliability costing studies; the proposed PTDF application and other established methods are compared, analysed and selected according to area/state requirements, and then integrated to develop the ROTC. Case studies involving a standard 7-bus system, the IEEE 30-bus system and a 146-bus Indian utility test system are conducted and reported in the relevant sections of the dissertation. There is close correlation between the results obtained through the proposed PTDF application and those of Bialek's method and different MW-Mile methods. The novel contributions of this research are: first, the application of the PTDF method developed for the determination of transmission and congestion costing, compared against other proven methods, with the viability of the developed method explained in the methodology, discussion and conclusion chapters; second, the development of a comprehensive DMF, in which all the costing approaches are integrated to achieve the ROTC, helping decision makers analyse and select a costing approach according to their requirements; and third, the composite methodology for calculating the ROTC, implemented as suites of algorithms and MATLAB programs for each part of the DMF, described further in the methodology section. The dissertation concludes with suggestions for future work.
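For concreteness, the sketch below derives DC-model PTDFs from line reactances with numpy. This is only the textbook lossless approximation, whereas the thesis applies PTDFs within a full AC Newton-Raphson and Monte Carlo framework; the function and the example network are illustrative assumptions.

```python
# DC-model PTDF sketch (lossless, per-unit): sensitivity of each branch flow
# to a bus injection withdrawn at the slack bus.
import numpy as np

def dc_ptdf(n_bus, lines, slack=0):
    """lines: list of (from_bus, to_bus, reactance). Returns an L x N matrix
    of branch-flow sensitivities to bus injections (slack column = 0)."""
    L = len(lines)
    A = np.zeros((L, n_bus))            # branch-bus incidence matrix
    Bd = np.zeros((L, L))               # diagonal of line susceptances 1/x
    for k, (i, j, x) in enumerate(lines):
        A[k, i], A[k, j] = 1.0, -1.0
        Bd[k, k] = 1.0 / x
    B = A.T @ Bd @ A                    # bus susceptance matrix
    keep = [b for b in range(n_bus) if b != slack]
    Binv = np.zeros((n_bus, n_bus))     # invert B with the slack row/col removed
    Binv[np.ix_(keep, keep)] = np.linalg.inv(B[np.ix_(keep, keep)])
    return Bd @ A @ Binv                # PTDF[l, b] = dF_l / dP_b

# 3-bus ring example: injection at bus 1, withdrawal at slack bus 0.
ptdf = dc_ptdf(3, [(0, 1, 0.1), (1, 2, 0.1), (0, 2, 0.1)])
print(ptdf[:, 1])                       # flow shift on each line per unit injection
```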
Abstract:
The Mara River Basin (MRB) is endowed with pristine biodiversity, socio-cultural heritage and natural resources. The purpose of my study is to develop and apply an integrated water resource allocation framework for the MRB based on hydrological processes, water demand and economic factors. The basin was partitioned into twelve sub-basins, and the rainfall-runoff processes were modeled using the Soil and Water Assessment Tool (SWAT), with satisfactory Nash-Sutcliffe efficiencies of 0.68 for calibration and 0.43 for validation at the Mara Mines station. The impact and uncertainty of climate change on the hydrology of the MRB were assessed using SWAT and three scenarios of statistically downscaled outputs from twenty Global Circulation Models. Results predicted the wet season getting wetter and the dry season getting drier, with a general increasing trend of annual rainfall through 2050. Three blocks of water demand (environmental, normal and flood) were estimated from consumptive water use by humans, wildlife, livestock, tourism, irrigation and industry. Water demand projections suggest human consumption is expected to surpass irrigation as the highest-demand sector by 2030. Monthly water volumes were estimated in three blocks at the current minimum reliability: reserve (>95%), normal (80-95%) and flood (40%, for more than 5 months in a year). The assessment of water price and marginal productivity showed that current water use hardly responds to a change in the price or productivity of water. Finally, a water allocation model was developed and applied to investigate the optimum monthly allocation among sectors and sub-basins by maximizing the use value and hydrological reliability of water. Model results demonstrated that the status of the reserve and normal volumes can be improved to 'low' or 'moderate' by updating the existing reliability to meet prevailing demand. Flow volumes and rates for four scenarios of reliability are presented. The results show that the water allocation framework can be used as a comprehensive tool in the management of the MRB, and could possibly be extended to similar watersheds.
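The calibration metric quoted above (0.68 / 0.43) is the Nash-Sutcliffe efficiency, which can be computed as below; the sample flows are hypothetical.

```python
# Nash-Sutcliffe efficiency: NSE = 1 - SSE / variance of observations about
# their mean. NSE = 1 is a perfect fit; NSE <= 0 is no better than the mean.
import numpy as np

def nse(observed, simulated):
    obs = np.asarray(observed, dtype=float)
    sim = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Hypothetical monthly flows at a gauging station (m^3/s):
print(nse([10.0, 14.0, 22.0, 9.0], [11.0, 13.5, 20.0, 10.0]))
```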
Abstract:
To assess the construct validity and reliability of the Pediatric Patient Classification Instrument. A correlation study developed at a teaching hospital. The classification involved 227 patients, using the pediatric patient classification instrument. Construct validity was assessed through factor analysis and reliability through internal consistency. The Exploratory Factor Analysis identified three constructs explaining 67.5% of the variance and, in the reliability assessment, the following Cronbach's alpha coefficients were found: 0.92 for the instrument as a whole; 0.88 for the Patient domain; 0.81 for the Family domain; and 0.44 for the Therapeutic procedures domain. The instrument evidenced construct validity and reliability, and these analyses indicate its feasibility. The validation of the Pediatric Patient Classification Instrument still represents a challenge, given its relevance for a closer look at pediatric nursing care and management. Further research should be considered to explore its dimensionality and content validity.
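The internal-consistency coefficients reported above are Cronbach's alpha, computable directly from the subjects-by-items score matrix; the ratings in the example are hypothetical.

```python
# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of total).
import numpy as np

def cronbach_alpha(items):
    """items: subjects x items matrix of scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of total scores
    return k / (k - 1) * (1.0 - item_var / total_var)

# Hypothetical ratings of 4 patients on 3 instrument items:
print(cronbach_alpha([[3, 4, 3], [2, 2, 3], [4, 4, 4], [1, 2, 1]]))
```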
Abstract:
Secondary caries has been reported as the main reason for restoration replacement. The aim of this in vitro study was to evaluate the performance of different methods - visual inspection, laser fluorescence (DIAGNOdent), radiography and tactile examination - for secondary caries detection in primary molars restored with amalgam. Fifty-four primary molars were photographed and 73 suspect sites adjacent to amalgam restorations were selected. Two examiners independently evaluated these sites using all methods. Agreement between examiners was assessed by the Kappa test. To validate the methods, a caries-detector dye was used after restoration removal. The best cut-off points for the sample were found by Receiver Operating Characteristic (ROC) analysis, and the area under the ROC curve (Az), sensitivity, specificity and accuracy of the methods were calculated at the enamel (D2) and dentine (D3) thresholds. These parameters were found for each method and then compared by the McNemar test. Tactile examination and visual inspection presented the highest inter-examiner agreement for the D2 and D3 thresholds, respectively. Visual inspection also showed better performance than the other methods at both thresholds (Az = 0.861 and Az = 0.841, respectively). In conclusion, visual inspection presented the best performance for detecting enamel and dentin secondary caries in primary teeth restored with amalgam.
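The inter-examiner agreement statistic used above is Cohen's kappa, the chance-corrected proportion of agreement; a minimal sketch with hypothetical examiner scores:

```python
# Cohen's kappa: (p_o - p_e) / (1 - p_e), where p_o is observed agreement
# and p_e the agreement expected by chance from each rater's marginals.
import numpy as np

def cohen_kappa(r1, r2):
    r1, r2 = np.asarray(r1), np.asarray(r2)
    cats = np.union1d(r1, r2)
    p_o = np.mean(r1 == r2)                        # observed agreement
    p_e = sum(np.mean(r1 == c) * np.mean(r2 == c)  # chance agreement
              for c in cats)
    return (p_o - p_e) / (1.0 - p_e)

# Hypothetical site scores (0 = sound, 1 = carious) from two examiners:
print(cohen_kappa([0, 1, 1, 0, 1, 0], [0, 1, 0, 0, 1, 0]))
```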
Abstract:
OBJECTIVE: This in situ study evaluated the discriminatory power and reliability of methods of dental plaque quantification, and the relationship between visual indices (VI) and a fluorescence camera (FC) for detecting plaque. MATERIAL AND METHODS: Six volunteers used palatal appliances with six bovine enamel blocks presenting different stages of plaque accumulation. The presence of plaque with and without disclosing was assessed using VI. Images were obtained with the FC and a digital camera under both conditions, and the area covered by plaque was assessed. Examinations were done by two independent examiners. Data were analyzed by Kruskal-Wallis and Kappa tests to compare the different sample conditions and to assess inter-examiner reproducibility. RESULTS: Some methods presented adequate reproducibility. The Turesky index and the assessment of the area covered by disclosed plaque in the FC images presented the highest discriminatory power. CONCLUSION: The Turesky index and FC images with disclosing present good reliability and discriminatory power in quantifying dental plaque.
Abstract:
OBJECTIVE: The aim of this study was to translate the Structured Clinical Interview for Mood Spectrum into Brazilian Portuguese, measure its reliability and validity, and define cutoff scores for bipolar disorders. METHOD: The questionnaire was translated into Brazilian Portuguese and back-translated into English. The sample consisted of 47 subjects with bipolar disorder, 47 with major depressive disorder, 18 with schizophrenia and 22 controls. Inter-rater reliability was tested in 20 subjects with bipolar disorder or major depressive disorder. Internal consistency was measured using the Kuder-Richardson formula. Forward stepwise discriminant analysis was performed. Scores were compared between groups; manic (M), depressive (D) and total (T) threshold scores were calculated through receiver operating characteristic (ROC) curves. RESULTS: Kuder-Richardson coefficients were between 0.86 and 0.94. The intraclass correlation coefficient was 0.96 (95% CI 0.93-0.97). Subjects with bipolar disorder had higher M and T scores, and similar D scores, compared to those with major depressive disorder (ANOVA, p < 0.001). The sub-domains that best discriminated unipolar and bipolar subjects were manic energy and manic mood. M had the best area under the curve (0.909), and values of M equal to or greater than 30 yielded 91.5% sensitivity and 74.5% specificity. CONCLUSION: The Structured Clinical Interview for Mood Spectrum has good reliability and validity. A cutoff score of 30 or higher in the mania sub-domain best differentiates subjects with bipolar disorder from those with unipolar depression.
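The cutoff-selection step described above can be sketched as a scan over candidate thresholds on the ROC curve. The criterion below is Youden's J (sensitivity + specificity - 1), a common choice, though the study does not state which criterion it used; the scores and labels are hypothetical.

```python
# Pick the score cutoff maximizing Youden's J = sensitivity + specificity - 1.
import numpy as np

def best_cutoff(scores, labels):
    """labels: 1 = bipolar, 0 = unipolar. Returns (cutoff, sens, spec)."""
    scores, labels = np.asarray(scores), np.asarray(labels)
    best = None
    for c in np.unique(scores):
        pred = scores >= c                   # classify as bipolar above cutoff
        sens = np.mean(pred[labels == 1])    # true positives caught
        spec = np.mean(~pred[labels == 0])   # true negatives caught
        j = sens + spec - 1.0
        if best is None or j > best[0]:
            best = (j, c, sens, spec)
    return best[1:]

# Hypothetical mania scores and diagnoses:
print(best_cutoff([12, 25, 31, 40, 18, 35, 29, 44],
                  [0,  0,  1,  1,  0,  1,  0,  1]))
```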
Abstract:
The aim of this study was to determine the reproducibility, reliability and validity of measurements in digital models compared to plaster models. Fifteen pairs of plaster models were obtained from orthodontic patients with permanent dentition before treatment. These were digitized to be evaluated with the program Cécile3 v2.554.2 beta. Two examiners measured the mesiodistal width of all teeth present, the intercanine, interpremolar and intermolar distances, and overjet and overbite, three times each. The plaster models were measured using a digital vernier caliper. Student's t-test for paired samples and the intraclass correlation coefficient (ICC) were used for statistical analysis. The ICCs of the digital models were 0.84 ± 0.15 (intra-examiner) and 0.80 ± 0.19 (inter-examiner). The mean differences of the digital models were 0.23 ± 0.14 and 0.24 ± 0.11 for each examiner, respectively. When the two types of measurements were compared, the values obtained from the digital models were lower than those obtained from the plaster models (p < 0.05), although the differences were considered clinically insignificant (differences < 0.1 mm). Cécile digital models are a clinically acceptable alternative for use in orthodontics.
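The ICC reported above has several variants; the sketch below computes the one-way single-rater form, ICC(1,1), from a targets-by-raters matrix. The study's exact ICC model is not specified, and the measurements in the example are hypothetical.

```python
# One-way intraclass correlation, ICC(1,1):
# (MSB - MSW) / (MSB + (k-1) * MSW) over n targets rated by k raters.
import numpy as np

def icc_oneway(ratings):
    """ratings: targets x raters matrix."""
    r = np.asarray(ratings, dtype=float)
    n, k = r.shape
    grand = r.mean()
    msb = k * ((r.mean(axis=1) - grand) ** 2).sum() / (n - 1)       # between targets
    msw = ((r - r.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))  # within
    return (msb - msw) / (msb + (k - 1) * msw)

# Hypothetical mesiodistal widths (mm) measured by two examiners:
print(icc_oneway([[8.1, 8.2], [7.4, 7.5], [9.0, 8.9], [6.8, 6.8]]))
```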
Abstract:
The aim of this study was to translate, validate and verify the reliability of the Body Area Scale (BAS). Participants were 386 teenagers enrolled in a private school. The instrument was translated into Portuguese and evaluated through internal consistency and construct validation analysis. Reproducibility was evaluated using the Wilcoxon test and the intraclass correlation coefficient. The BAS demonstrated good internal consistency (0.90 and 0.88) and was able to discriminate between boys and girls according to nutritional status (p = 0.020 and p = 0.026, respectively). BAS scores correlated with adolescents' BMI (r = 0.14, p = 0.055; r = 0.23, p = 0.001) and WC (r = 0.13, p = 0.083; r = 0.22, p = 0.002). Reliability was confirmed by the intraclass correlation coefficient (0.35, p < 0.001; 0.60, p < 0.001) for boys and girls, respectively. The instrument performed well in terms of comprehension and time of completion. The BAS was successfully translated into Portuguese and presented good validity when applied to adolescents.
Abstract:
OBJECTIVES: to produce evidence of the validity and reliability of the Body Shape Questionnaire (BSQ), a tool for measuring an individual's attitude towards his or her body image. METHODS: the study covered 386 young people of both sexes aged between 10 and 18 from a private school and used self-applied questionnaires and anthropometric evaluation. It evaluated internal consistency, discriminant validity (differences between means according to nutritional status: underweight, eutrophic, overweight and obese), and concurrent validity by way of Spearman's correlation coefficient between the scale and the Body Mass Index (BMI), the waist-hip ratio (WHR) and the waist circumference (WC). Reliability was tested using Wilcoxon's test, the intraclass correlation coefficient and Bland-Altman plots. RESULTS: the BSQ displayed good internal consistency (α = 0.96) and was capable of discriminating among the total population, boys and girls, according to nutritional status (p < 0.001). It correlated with BMI (r = 0.41; p < 0.001), WHR (r = -0.10; p = 0.043) and WC (r = 0.24; p < 0.001), and its reliability was confirmed by intraclass correlation (r = 0.91; p < 0.001) for the total population. The questionnaire was easy to understand and could be completed quickly. CONCLUSIONS: the BSQ presented good results, providing evidence of its validity and reliability. It is therefore recommended for the evaluation of body image attitudes among adolescents.
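The Bland-Altman analysis mentioned above reduces to the mean test-retest difference (bias) and its 1.96 SD limits of agreement; a minimal sketch with hypothetical BSQ scores:

```python
# Bland-Altman limits of agreement: bias +/- 1.96 * SD of the differences.
import numpy as np

def bland_altman_limits(x1, x2):
    d = np.asarray(x1, dtype=float) - np.asarray(x2, dtype=float)
    bias = d.mean()                 # systematic test-retest difference
    sd = d.std(ddof=1)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical BSQ total scores at test and retest:
print(bland_altman_limits([88, 102, 75, 120, 95], [90, 100, 78, 118, 97]))
```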
Abstract:
OBJECTIVE: To assess the validity of self-reported weight, height and Body Mass Index (BMI) and their reliability for diagnosing the nutritional status of adolescents in Piracicaba. METHODS: The study included 360 adolescents of both sexes from public schools in Piracicaba, aged 10 to 15 years. The adolescents self-reported their weight and height, and these values were then obtained by direct measurement by the interviewers. The validity of the reported BMI was calculated using sensitivity, specificity and positive predictive value (PPV). Agreement between the BMI categories obtained from reported and measured values was assessed using the weighted kappa coefficient, Lin's concordance correlation coefficient, and Bland-Altman and Lin plots. RESULTS: Both boys and girls underestimated their weight (-1.0 for girls and boys) and height (girls -1.2 and boys -0.8) (p < 0.001). Measured and reported BMI values showed moderate agreement. The sensitivity of the reported BMI for classifying obese individuals was higher for boys (87.5%), while specificity was higher for girls (92.7%). The PPV was high only for the classification of eutrophy. CONCLUSIONS: Self-reported weight and height of adolescents are not valid measurements and therefore should not be used in place of measured values. Furthermore, 10% of obese boys and 40% of obese girls could remain unidentified using self-reported measurements, confirming their low validity.
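The validity indices used above follow directly from the 2x2 table of self-reported versus measured obesity classification; the counts in the example are hypothetical.

```python
# Sensitivity, specificity and PPV from a 2x2 classification table.
def validity_indices(tp: int, fp: int, fn: int, tn: int):
    sensitivity = tp / (tp + fn)  # obese correctly identified by self-report
    specificity = tn / (tn + fp)  # non-obese correctly identified
    ppv = tp / (tp + fp)          # positive predictive value
    return sensitivity, specificity, ppv

print(validity_indices(tp=14, fp=6, fn=2, tn=338))
```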
Abstract:
In some circumstances ice floes may be modeled as beams. In general this modeling assumes constant thickness, which contradicts field observations: the action of currents, wind and the sequence of contacts causes the thickness to vary. Here this effect is taken into consideration in modeling the behavior of ice hitting the inclined walls of offshore platforms. For this purpose, the boundary value problem is first formulated. The set of equations so obtained is then transformed into a system of equations, which is solved numerically. To this end, an implicit solution is developed using a shooting method with the accompanying Jacobian. In-plane coupling and the dependency of the boundary terms on deformation make the problem non-linear and its treatment particular. Deformation and internal resultants are then computed for harmonic forms of the beam profile. Ways of giving some additional generality to the problem are discussed.
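The solution strategy described above can be illustrated with a minimal shooting method on a toy two-point boundary value problem. The ODE below is a stand-in, not the paper's varying-thickness ice-beam model, and a scalar root-finder replaces the paper's Newton iteration with Jacobian, since only one initial slope is unknown here.

```python
# Minimal shooting-method sketch: guess the unknown initial slope, integrate
# the ODE as an initial value problem, and adjust the guess until the far
# boundary condition is met.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

def endpoint_value(slope0):
    """Integrate y'' = -y + 1 from x = 0 with y(0) = 0, y'(0) = slope0,
    and return y at the far boundary x = 1."""
    sol = solve_ivp(lambda x, y: [y[1], -y[0] + 1.0],
                    (0.0, 1.0), [0.0, slope0])
    return sol.y[0, -1]

# Shooting: choose the initial slope so that y(1) = 0.
slope = brentq(endpoint_value, -10.0, 10.0)
print("initial slope:", slope)   # analytically (cos 1 - 1) / sin 1 = -0.5463
```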