974 results for Linear decision rules


Relevance: 80.00%

Abstract:

The central theme of this work is production planning, scheduling, and control in industry, supported by a Finite Capacity Scheduling (FCS) software tool. In Brazil, this category of software is generically known as finite capacity or detailed production scheduling systems. In line with worldwide trends and the advantage of lower hardware investment, the chosen system runs on microcomputers. The first part of the work treats the subject in general terms, broadly characterizing the production scheduling problem, the difficulties in carrying it out, the existing solutions, and their limitations. The second part discusses, in detail, the traditional methods of material and capacity planning. The literature review closes with a presentation of FCS systems and their classification. The third part covers the description, trials, and evaluation of the schedules generated by a deterministic finite capacity scheduling package based on computer simulation logic with decision rules. Although the evaluation is limited to the software used, the analysis also seeks to identify the fundamental differences between finite capacity schedules and conventional ones, represented by systems of the MRPII (Manufacturing Resources Planning) category. The logic of MRPII and finite capacity systems is discussed in the literature review, while a specific chapter is devoted to the software used in this work, covering its description, fundamentals, software house, required hardware, and other relevant information. The trials are designed to analyze the FCS system as a production planning and scheduling tool. A fraction of a production process is modeled in the system, which generates production plans that are then compared with the usual schedule and with the actual behavior of the resources involved. The trials are carried out at one of the plants of a large transnational company in the tire industry. Finally, general conclusions, recommendations for applying the studied system, and suggestions for future research on the subject are presented.
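
The simulation-with-decision-rules logic mentioned in this abstract can be pictured with a minimal sketch, assuming a toy job set and a shortest-processing-time dispatching rule; the data and the rule are hypothetical and do not reproduce the commercial FCS package evaluated in the thesis.

```python
from collections import namedtuple

# Illustrative only: a toy finite-capacity scheduler that builds a schedule by
# walking each machine's queue and applying a dispatching decision rule
# (here, shortest processing time). Jobs, machines, and times are made up.
Job = namedtuple("Job", "name machine proc_time due_date")

def simulate_schedule(jobs, rule=lambda j: j.proc_time):
    """Dispatch jobs machine by machine using the given priority rule."""
    queues = {}
    for job in jobs:
        queues.setdefault(job.machine, []).append(job)
    schedule = []
    for machine, queue in queues.items():
        t = 0.0                                   # each machine starts idle
        for job in sorted(queue, key=rule):       # decision rule applied here
            schedule.append((job.name, machine, t, t + job.proc_time))
            t += job.proc_time                    # finite capacity: one job at a time
    return schedule

jobs = [Job("A", "press", 4, 10), Job("B", "press", 2, 6), Job("C", "cure", 5, 12)]
for name, machine, start, end in simulate_schedule(jobs):
    print(f"{name} on {machine}: {start}-{end}")
```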

Relevance: 80.00%

Abstract:

The starting point for this study was the contrast between the presumed centrality of decision making, defended in much of the management literature, and the successful digital inclusion experience implemented in the small town of Piraí-RJ. The originality of this program and its apparently chaotic way of unfolding were the concerns that drove the researcher back to the theories on decision-making processes and on the formulation and implementation of public policies. This work seeks to contribute to the debate on decisions and decision-making processes through two research questions: how central are decision-making processes to the results achieved in experiences considered innovative in municipalities with small populations (under 30,000 inhabitants)? And how can these processes be characterized in relation to the academic literature on the subject? The focus of the work is the public sector, in which the theme of decision making strongly intersects with the formulation and implementation of public policies. The analysis concentrates on the flows of action in innovative initiatives carried out in small municipalities, which account for more than 80% of all municipalities in Brazil. The limitations of the research relate to the area studied, the type of municipality selected, and the projects and programs analyzed. The theoretical references used in this work are the following: Rational Choice, Bounded Rationality, Public Policy Agenda, Incrementalism, Garbage Can, Sensemaking and, finally, Groping Along. To articulate these theories with the cases studied, the underlying logic of each theory was related to the process reported by the participants of the selected experiences. The study was developed in three phases. In the first, after the initial questions that emerged from the digital inclusion experience in Piraí, a review of the literature on decision-making processes was carried out. Next, documents and accounts by managers and technical staff of 34 different innovative experiences awarded by the Public Management and Citizenship Program between 1996 and 2005 were analyzed, seeking to capture the flows of action that permeated the projects and programs. In the third phase, in-depth research was carried out on four case studies in the states of Bahia, Rio de Janeiro, and São Paulo, using oral history techniques and document analysis. The cases studied were: the Night Daycare Center in Laranjal Paulista-SP, Local Development and Digital Inclusion in Piraí-RJ, Local Agro-environmental Development in Almadina-BA and, finally, Wild Fern Management in Ilha Comprida-SP. The results contrast with the academic literature whose underlying logic is predominantly linear and which defends the centrality and importance of decisions in obtaining maximizing results; they further suggest that it is the propositions of less linear decision-making processes, and the accounts of public policy formulation that seek to capture managers' everyday practices - permeated by the imperfections and inelegance of day-to-day life - that best contribute to their understanding.

Relevance: 80.00%

Abstract:

We extend the macroeconomic literature on Ss-type rules by introducing infrequent information in a kinked adjustment cost model. We first show that optimal individual decision rules are both state- and time-dependent. We then develop an aggregation framework to study the macroeconomic implications of such optimal individual decision rules. In our model, a vast number of agents act together, and more so when uncertainty is large. The average effect of an aggregate shock is inversely related to its size and to aggregate uncertainty. These results are in contrast with those obtained with full-information adjustment cost models.
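
The flavour of a state- and time-dependent Ss-type rule under infrequent information can be conveyed with a minimal sketch, assuming a one-sided band, a drifting state variable, and random review dates; all parameters are hypothetical and this is not the paper's model.

```python
import random

# A minimal sketch (not the paper's model): an agent resets a state variable x
# to a target S whenever it is found below a lower trigger s, but the state is
# only observed at random review dates. Decisions therefore depend on the state
# (the band) and on time (when information arrives). All numbers are made up.
def simulate_ss_with_infrequent_info(T=200, s=-1.0, S=0.0, drift=-0.05,
                                     sigma=0.1, review_prob=0.2, seed=1):
    random.seed(seed)
    x, adjustment_dates = S, []
    for t in range(T):
        x += drift + random.gauss(0.0, sigma)   # state drifts between reviews
        if random.random() < review_prob:       # information arrives (time-dependent)
            if x < s:                           # band check (state-dependent)
                adjustment_dates.append(t)
                x = S                           # lump-sum adjustment back to target
    return adjustment_dates

print("adjustment dates:", simulate_ss_with_infrequent_info())
```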

Relevance: 80.00%

Abstract:

This work aims to study the decision-making of individuals of different nationalities who work in organizational project management, in their lives outside the professional sphere. Given that existing project management methodologies call for a rational, logical, and objective decision-making process, this study explores the extent to which organizational subjects carry this same linear decision-making process, originating in the professional world, over into their everyday lives. Over the years, academic studies have discussed this notion of rational, linear, and logical decision-making and have refuted it with new perspectives on human cognitive judgment. Therefore, besides presenting the field of project management and its concepts, this work also covers the various theoretical developments on decision-making over time. Considering the subjective character of the decision theories presented, and the cognitive limitations that are often at play, this study then explores the different judgment heuristics (simplifying strategies, mental shortcuts) and their respective cognitive biases. The three main heuristics, presented by Tversky and Kahneman in their 1974 academic work and also the focus of this study, are, respectively: representativeness, availability, and anchoring and adjustment. A quantitative survey was carried out with organizational subjects who work in project management, or who have had some experience with a project at the companies where they work. This study is not limited to Brazil, extending also to other countries with the same target research population. The survey results revealed that professionals working in project management are subject to cognitive biases outside the organizational sphere, with Brazilians being the least prone to these biases compared with the other nationalities studied. It was also found that length of professional experience does not contribute significantly to more rational and logical decision-making in personal everyday life.

Relevance: 80.00%

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance: 80.00%

Abstract:

In this study, an automatic method is presented to classify images using fractal descriptors, such as multiscale fractal dimension and lacunarity, as decision rules. The proposed methodology was divided into three steps: quantification of the regions of interest with fractal dimension and lacunarity, both under a multiscale approach; definition of reference patterns, which are the limits of each studied group; and classification of each group, considering the combination of the reference patterns with signal maximization (an approach commonly considered in paraconsistent logic). The proposed method was used to classify histological prostate images, aiming at the diagnosis of prostate cancer. The accuracy levels were significant, surpassing those obtained with Support Vector Machine (SVM) and Best-first Decision Tree (BFTree) classifiers. The proposed approach makes it possible to recognize and classify patterns, offering the advantage of giving comprehensible results to specialists.
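
A minimal sketch of the kind of fractal descriptors named above: box-counting fractal dimension and a simple gliding-box lacunarity for a binary image, combined in a toy threshold-based decision rule. The thresholds, the random test image, and the two-descriptor rule are hypothetical; the paper's multiscale and paraconsistent combination is not reproduced here.

```python
import numpy as np

def box_counting_dimension(img, sizes=(2, 4, 8, 16)):
    """Estimate the fractal dimension of a binary image by box counting."""
    counts = []
    for s in sizes:
        h, w = img.shape[0] // s * s, img.shape[1] // s * s
        blocks = img[:h, :w].reshape(h // s, s, w // s, s)
        counts.append((blocks.sum(axis=(1, 3)) > 0).sum())
    # slope of log(count) vs log(1/size) estimates the dimension
    return np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)[0]

def lacunarity(img, box=4):
    """Simple box-mass lacunarity: variance/mean^2 of box masses, plus one."""
    h, w = img.shape[0] // box * box, img.shape[1] // box * box
    masses = img[:h, :w].reshape(h // box, box, w // box, box).sum(axis=(1, 3)).ravel()
    return masses.var() / masses.mean() ** 2 + 1.0

def classify(img, dim_limit=1.6, lac_limit=1.2):
    """Toy decision rule: 'suspicious' if both descriptors exceed reference limits."""
    return "suspicious" if (box_counting_dimension(img) > dim_limit
                            and lacunarity(img) > lac_limit) else "normal"

rng = np.random.default_rng(0)
sample = (rng.random((64, 64)) > 0.5).astype(int)   # stand-in for a segmented image
print(classify(sample))
```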

Relevance: 80.00%

Abstract:

Computer aided design of Monolithic Microwave Integrated Circuits (MMICs) depends critically on active device models that are accurate, computationally efficient, and easily extracted from measurements or device simulators. Empirical models of active electron devices, which are based on actual device measurements, do not provide a detailed description of the electron device physics; however, they are numerically efficient and quite accurate. These characteristics make them very suitable for MMIC design in the framework of commercially available CAD tools. In the empirical model formulation it is very important to separate linear memory effects (parasitic effects) from the nonlinear effects (intrinsic effects). Thus an empirical active device model is generally described by an extrinsic linear part which accounts for the parasitic passive structures connecting the nonlinear intrinsic electron device to the external world. An important task circuit designers face is evaluating the ultimate potential of a device for specific applications: once the technology has been selected, the designer must choose the best device for the particular application and the best device for each of the blocks composing the overall MMIC. Thus, in order to accurately reproduce the behaviour of devices of different sizes, good scalability properties of the model are required. Another important aspect of empirical modelling of electron devices is the mathematical (or equivalent circuit) description of the nonlinearities inherently associated with the intrinsic device. Once the model has been defined, the proper measurements for the characterization of the device are performed in order to identify the model. Hence, the correct measurement of the device nonlinear characteristics (in the device characterization phase) and their reconstruction (in the identification or even simulation phase) are two of the most important aspects of empirical modelling. This thesis presents an original contribution to nonlinear electron device empirical modelling, treating the issues of model scalability and reconstruction of the device nonlinear characteristics. The scalability of an empirical model strictly depends on the scalability of the linear extrinsic parasitic network, which should maintain the link between technological process parameters and the corresponding device electrical response. Since lumped parasitic networks, together with simple linear scaling rules, cannot provide accurate scalable models, the literature offers either complicated technology-dependent scaling rules or computationally inefficient distributed models. This thesis shows how the above-mentioned problems can be avoided through the use of commercially available electromagnetic (EM) simulators. They enable the actual device geometry and material stratification, as well as losses in the dielectrics and electrodes, to be taken into account for any given device structure and size, providing an accurate description of the parasitic effects which occur in the device passive structure. It is shown how the electron device behaviour can be described as an equivalent two-port intrinsic nonlinear block connected to a linear distributed four-port passive parasitic network, which is identified by means of the EM simulation of the device layout, allowing for better frequency extrapolation and scalability properties than conventional empirical models.
Concerning the reconstruction of the nonlinear electron device characteristics, a data approximation algorithm has been developed for use in the framework of empirical table look-up nonlinear models. The approach is based on the strong analogy between time-domain signal reconstruction from a set of samples and the continuous approximation of device nonlinear characteristics on the basis of a finite grid of measurements. According to this criterion, nonlinear empirical device modelling can be carried out by using, in the sampled voltage domain, typical methods of time-domain sampling theory.
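
The sampling-theory analogy can be illustrated with a minimal sketch: a nonlinear I-V characteristic known only on a uniform voltage grid is reconstructed at arbitrary bias points with Whittaker-Shannon (sinc) interpolation, exactly as a band-limited time-domain signal would be rebuilt from its samples. The diode-like characteristic and the grid spacing are invented for illustration; this is not the thesis's measured data or its specific approximation algorithm.

```python
import numpy as np

def sinc_reconstruct(v_grid, i_samples, v_query):
    """Reconstruct a characteristic sampled on a uniform voltage grid."""
    dv = v_grid[1] - v_grid[0]                      # uniform "sampling period" in volts
    # one sinc kernel per sample, evaluated at every query voltage
    kernels = np.sinc((v_query[:, None] - v_grid[None, :]) / dv)
    return kernels @ i_samples

v_grid = np.linspace(-1.0, 1.0, 21)                 # hypothetical measured bias grid
i_samples = 1e-3 * (np.exp(5 * v_grid) - 1)         # toy nonlinear characteristic
v_query = np.linspace(-0.9, 0.9, 7)                 # bias points requested by a simulator
print(sinc_reconstruct(v_grid, i_samples, v_query))
```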

Relevance: 80.00%

Abstract:

In this work we propose a new approach for preliminary epidemiological studies on Standardized Mortality Ratios (SMR) collected over many spatial regions. A preliminary study on SMRs aims to formulate hypotheses to be investigated via individual epidemiological studies that avoid the bias carried by aggregated analyses. Starting from the collected disease counts and the expected disease counts calculated from reference population disease rates, in each area an SMR is derived as the MLE under the Poisson assumption on each observation. Such estimators have high standard errors in small areas, i.e. where the expected count is low either because of the small population underlying the area or the rarity of the disease under study. Disease mapping models and other techniques for screening disease rates across the map, aiming to detect anomalies and possible high-risk areas, have been proposed in the literature under both the classical and the Bayesian paradigm. Our proposal approaches this issue with a decision-oriented method that focuses on multiple testing control, without leaving the preliminary-study perspective that an analysis of SMR indicators is required to keep. We implement control of the FDR, a quantity widely used to address multiple comparison problems in the field of microarray data analysis but not usually employed in disease mapping. Controlling the FDR means providing an estimate of the FDR for a set of rejected null hypotheses. The small-areas setting raises difficulties in applying traditional methods for FDR estimation, which are usually based only on knowledge of the p-values (Benjamini and Hochberg, 1995; Storey, 2003). Tests evaluated by a traditional p-value provide weak power in small areas, where the expected number of disease cases is small. Moreover, the tests cannot be assumed independent when spatial correlation between SMRs is expected, nor are they identically distributed when the population underlying the map is heterogeneous. The Bayesian paradigm offers a way to overcome the inappropriateness of p-value based methods. Another peculiarity of the present work is to propose a hierarchical fully Bayesian model for FDR estimation when testing many null hypotheses of absence of risk. We use concepts from Bayesian disease mapping, referring in particular to the Besag, York and Mollié model (1991), often used in practice for its flexible prior assumption on the distribution of risks across regions. The borrowing of strength between prior and likelihood, typical of a hierarchical Bayesian model, has the advantage of evaluating a single test (i.e. a test in a single area) by means of all observations in the map under study, rather than just by means of the single observation. This improves the power of the test in small areas and addresses more appropriately the spatial correlation issue, which suggests that relative risks are closer in spatially contiguous regions. The proposed model estimates the FDR by means of the MCMC-estimated posterior probabilities of the null hypothesis (absence of risk) in each area. An estimate of the expected FDR conditional on the data (the estimated FDR) can be calculated for any set of areas declared high-risk (where the null hypothesis is rejected) by averaging the corresponding posterior null probabilities. The estimated FDR can then be used as an easy decision rule for selecting high-risk areas, i.e. selecting as many areas as possible such that the estimated FDR does not exceed a prefixed value; we call these estimated-FDR-based decision (or selection) rules.
The sensitivity and specificity of such a rule depend on the accuracy of the FDR estimate: over-estimation of the FDR causes a loss of power, while under-estimation produces a loss of specificity. Moreover, our model has the interesting feature of still being able to provide estimates of the relative risk values, as in the Besag, York and Mollié model (1991). A simulation study was set up to evaluate the model's performance in terms of FDR estimation accuracy, sensitivity and specificity of the decision rule, and goodness of the relative risk estimates. We chose a real map from which we generated several spatial scenarios whose disease counts vary according to the degree of spatial correlation, the size of the areas, the number of areas where the null hypothesis is true, and the risk level in the remaining areas. In summarizing the simulation results we always consider FDR estimation in sets constituted by all areas whose posterior null probability is lower than a threshold t. We show graphs of the estimated FDR and the true FDR (known by simulation) plotted against the threshold t to assess the FDR estimation. By varying the threshold we can learn which FDR values can be accurately estimated by a practitioner willing to apply the model (by the closeness between the estimated and the true FDR). By plotting the calculated sensitivity and specificity (both known by simulation) against the estimated FDR, we can check the sensitivity and specificity of the corresponding estimated-FDR-based decision rules. To investigate the over-smoothing of the relative risk estimates, we compare box-plots of such estimates in high-risk areas (known by simulation), obtained both from our model and from the classic Besag, York and Mollié model. All the summary tools are worked out for all simulated scenarios (54 scenarios in total). The results show that the FDR is well estimated (in the worst case we get an over-estimation, hence a conservative FDR control) in scenarios with small areas, low risk levels, and spatially correlated risks, which are our primary aim. In such scenarios we obtain good estimates of the FDR for all values less than or equal to 0.10. The sensitivity of the estimated-FDR-based decision rules is generally low, but specificity is high; in these scenarios selection rules based on an estimated FDR of 0.05 or 0.10 can be suggested. In cases where the number of true alternative hypotheses (true high-risk areas) is small, FDR values up to 0.15 are also well estimated, and decision rules based on an estimated FDR of 0.15 gain power while maintaining a high specificity. On the other hand, in scenarios with non-small areas and non-small risk levels the FDR is under-estimated except for very small values (much lower than 0.05), resulting in a loss of specificity for a decision rule based on an estimated FDR of 0.05. In such scenarios, rules based on an estimated FDR of 0.05 or, even worse, 0.10 cannot be suggested because the true FDR is actually much higher. As regards relative risk estimation, our model achieves almost the same results as the classic Besag, York and Mollié model. For this reason, our model is interesting for its ability to perform both the estimation of relative risk values and FDR control, except in scenarios with non-small areas and large risk levels. A case study is finally presented to show how the method can be used in epidemiology.
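
The estimated-FDR-based selection rule described above lends itself to a short sketch: given posterior probabilities that each area carries no excess risk, the estimated FDR of a selected set is the average of those probabilities, and the rule keeps as many areas as possible while that average stays below a prefixed level. The probabilities below are made up, not the output of the hierarchical Besag-York-Mollié-type model of the thesis.

```python
import numpy as np

def select_high_risk_areas(post_null_prob, fdr_target=0.05):
    """Select the largest set of areas whose average posterior null probability
    (the estimated FDR) does not exceed fdr_target."""
    order = np.argsort(post_null_prob)                 # most "significant" areas first
    running_fdr = np.cumsum(post_null_prob[order]) / np.arange(1, len(order) + 1)
    n_selected = int(np.sum(running_fdr <= fdr_target))
    selected = order[:n_selected]
    est_fdr = running_fdr[n_selected - 1] if n_selected else 0.0
    return selected, est_fdr

# hypothetical posterior probabilities of "absence of risk" for seven areas
post_null_prob = np.array([0.01, 0.02, 0.40, 0.03, 0.70, 0.08, 0.90])
areas, est_fdr = select_high_risk_areas(post_null_prob, fdr_target=0.05)
print("selected areas:", areas, "estimated FDR:", round(float(est_fdr), 3))
```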

Relevance: 80.00%

Abstract:

Minor brain injury is a frequent condition. Validated clinical decision rules can help in deciding whether a computed tomogram (CT) of the head is required. We hypothesized that institutional guidelines are not frequently used, and that psychological factors are a common reason for ordering an unnecessary CT.

Relevance: 80.00%

Abstract:

Beef cow herd owners can benefit from incorporating price signals into their heifer retention decisions. Whereas a perfect forecast of calf prices over the productive life of the heifer added to the herd would be ideal, such information is not available. However, simple decision rules that incorporate current or recent prices and the knowledge that the cattle cycle will likely repeat itself can help producers improve their investment decisions. A dollar cost averaging strategy that retains the same dollar value of heifers each year and a rolling average value strategy that retains a 10-year average value of heifers outperformed strategies that sought to maintain a constant herd size or a constant cash flow.
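
A minimal sketch of two of the retention rules named above, assuming a made-up series of heifer prices (dollars per head) and a made-up budget; these are not the paper's cattle-cycle data. Dollar cost averaging keeps the retention budget fixed, so fewer heifers are kept when prices are high and more when prices are low, whereas a constant-herd-size rule keeps the head count fixed regardless of price.

```python
# Hypothetical annual heifer prices, for illustration only.
prices = [900, 1100, 1400, 1200, 800, 700, 950]

def dollar_cost_averaging(prices, annual_budget=50_000):
    """Retain the same dollar value of heifers each year."""
    return [annual_budget / p for p in prices]

def constant_herd_size(prices, head_per_year=50):
    """Retain the same number of heifers each year, whatever the price."""
    return [float(head_per_year) for _ in prices]

for year, (p, dca, fixed) in enumerate(zip(prices,
                                           dollar_cost_averaging(prices),
                                           constant_herd_size(prices))):
    print(f"year {year}: price ${p}, DCA retains {dca:.1f} head, "
          f"fixed-size retains {fixed:.0f} head")
```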

Relevance: 80.00%

Abstract:

BACKGROUND HIV-1 RNA viral load (VL) testing is recommended to monitor antiretroviral therapy (ART) but not available in many resource-limited settings. We developed and validated CD4-based risk charts to guide targeted VL testing. METHODS We modeled the probability of virologic failure up to 5 years of ART based on current and baseline CD4 counts, developed decision rules for targeted VL testing of 10%, 20% or 40% of patients in seven cohorts of patients starting ART in South Africa, and plotted cut-offs for VL testing on colour-coded risk charts. We assessed the accuracy of risk chart-guided VL testing to detect virologic failure in validation cohorts from South Africa, Zambia and the Asia-Pacific. FINDINGS 31,450 adult patients were included in the derivation and 25,294 patients in the validation cohorts. Positive predictive values increased with the percentage of patients tested: from 79% (10% tested) to 98% (40% tested) in the South African, from 64% to 93% in the Zambian and from 73% to 96% in the Asia-Pacific cohorts. Corresponding increases in sensitivity were from 35% to 68% in South Africa, from 55% to 82% in Zambia and from 37% to 71% in Asia-Pacific. The area under the receiver-operating curve increased from 0.75 to 0.91 in South Africa, from 0.76 to 0.91 in Zambia and from 0.77 to 0.92 in Asia Pacific. INTERPRETATION CD4-based risk charts with optimal cut-offs for targeted VL testing may be useful to monitor ART in settings where VL capacity is limited.
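
The risk-chart idea can be pictured with a minimal sketch: each cell of the chart (baseline CD4 band by current CD4 band) holds a modelled probability of virologic failure, and patients whose cell exceeds a cut-off chosen to test a given share of the cohort are referred for VL testing. The probabilities, CD4 bands, and cut-off below are invented for illustration; they are not the published risk charts.

```python
# Hypothetical chart: (baseline CD4 band, current CD4 band) -> P(virologic failure)
RISK_CHART = {
    ("<100", "<200"): 0.45, ("<100", "200-349"): 0.20, ("<100", ">=350"): 0.08,
    (">=100", "<200"): 0.30, (">=100", "200-349"): 0.12, (">=100", ">=350"): 0.04,
}

def current_band(cd4):
    """Map a current CD4 count to one of the chart's bands."""
    if cd4 < 200:
        return "<200"
    return "200-349" if cd4 < 350 else ">=350"

def needs_vl_test(baseline_cd4, current_cd4, cutoff=0.15):
    """Decision rule: refer for targeted VL testing if the chart cell exceeds the cut-off."""
    base_band = "<100" if baseline_cd4 < 100 else ">=100"
    return RISK_CHART[(base_band, current_band(current_cd4))] >= cutoff

print(needs_vl_test(baseline_cd4=80, current_cd4=250))   # True  -> targeted VL test
print(needs_vl_test(baseline_cd4=250, current_cd4=420))  # False -> no VL test
```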

Relevance: 80.00%

Abstract:

BACKGROUND HIV-1 RNA viral load (VL) testing is recommended to monitor antiretroviral therapy (ART) but not available in many resource-limited settings. We developed and validated CD4-based risk charts to guide targeted VL testing. METHODS We modeled the probability of virologic failure up to 5 years of ART based on current and baseline CD4 counts, developed decision rules for targeted VL testing of 10%, 20%, or 40% of patients in 7 cohorts of patients starting ART in South Africa, and plotted cutoffs for VL testing on colour-coded risk charts. We assessed the accuracy of risk chart-guided VL testing to detect virologic failure in validation cohorts from South Africa, Zambia, and the Asia-Pacific. RESULTS In total, 31,450 adult patients were included in the derivation and 25,294 patients in the validation cohorts. Positive predictive values increased with the percentage of patients tested: from 79% (10% tested) to 98% (40% tested) in the South African cohort, from 64% to 93% in the Zambian cohort, and from 73% to 96% in the Asia-Pacific cohort. Corresponding increases in sensitivity were from 35% to 68% in South Africa, from 55% to 82% in Zambia, and from 37% to 71% in Asia-Pacific. The area under the receiver operating curve increased from 0.75 to 0.91 in South Africa, from 0.76 to 0.91 in Zambia, and from 0.77 to 0.92 in Asia-Pacific. CONCLUSIONS CD4-based risk charts with optimal cutoffs for targeted VL testing may be useful to monitor ART in settings where VL capacity is limited.

Relevance: 80.00%

Abstract:

PURPOSE OF REVIEW Fever and neutropenia is the most common complication in the treatment of childhood cancer. This review summarizes recent publications that focus on improving the management of this condition as well as those that seek to optimize translational research efforts. RECENT FINDINGS A number of clinical decision rules are available to assist in the identification of low-risk fever and neutropenia; however, few have undergone external validation and formal impact analysis. Emerging evidence suggests that acute fever and neutropenia management strategies should include time-to-antibiotic recommendations, and quality improvement initiatives have focused on eliminating barriers to early antibiotic administration. Despite reported increases in antimicrobial resistance, few studies have focused on the prediction, prevention, and optimal treatment of these infections, and the effect on risk stratification remains unknown. A consensus guideline for paediatric fever and neutropenia research is now available and may help reduce some of the heterogeneity between studies that has previously limited the translation of evidence into clinical practice. SUMMARY Risk stratification is recommended for children with cancer and fever and neutropenia. Further research is required to quantify the overall impact of this approach and to refine exactly which children will benefit from early antibiotic administration, as well as from modifications to empiric regimens to cover antibiotic-resistant organisms.

Relevance: 80.00%

Abstract:

When conducting a randomized comparative clinical trial, ethical, scientific, or economic considerations often motivate the use of interim decision rules after successive groups of patients have been treated. These decisions may pertain to the comparative efficacy or safety of the treatments under study, cost considerations, the desire to accelerate the drug evaluation process, or the likelihood of therapeutic benefit for future patients. At the time of each interim decision, an important question is whether patient enrollment should continue or be terminated, either because of a high probability that one treatment is superior to the other, or because of a low probability that the experimental treatment will ultimately prove to be superior. The use of frequentist group sequential decision rules has become routine in the conduct of phase III clinical trials. In this dissertation, we present a new Bayesian decision-theoretic approach to the problem of designing a randomized group sequential clinical trial, focusing on two-arm trials with time-to-failure outcomes. Forward simulation is used to obtain optimal decision boundaries for each of a set of possible models. At each interim analysis, we use Bayesian model selection to adaptively choose the model having the largest posterior probability of being correct, and we then make the interim decision based on the boundaries that are optimal under the chosen model. We provide a simulation study to compare this method, which we call Bayesian Doubly Optimal Group Sequential (BDOGS), to corresponding frequentist designs using either O'Brien-Fleming (OF) or Pocock boundaries, as obtained from EaSt 2000. Our simulation results show that, over a wide variety of cases, BDOGS either performs at least as well as both OF and Pocock, or on average provides a much smaller trial.
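
A rough caricature of the interim logic described above, not the BDOGS algorithm itself: the boundaries that would come from forward simulation are hard-coded placeholders, and the posterior model probabilities are supplied directly rather than computed from trial data.

```python
# Placeholder boundaries per candidate model: (futility bound, superiority bound).
# In the actual method these would be obtained by forward simulation.
MODEL_BOUNDARIES = {
    "exponential": (-0.5, 2.8),
    "weibull":     (-0.3, 2.5),
}

def interim_decision(z_statistic, posterior_model_probs):
    """Pick the most probable model, then apply that model's stopping boundaries."""
    model = max(posterior_model_probs, key=posterior_model_probs.get)
    futility, superiority = MODEL_BOUNDARIES[model]
    if z_statistic >= superiority:
        return model, "stop: experimental arm superior"
    if z_statistic <= futility:
        return model, "stop: futility"
    return model, "continue enrollment"

print(interim_decision(z_statistic=2.6,
                       posterior_model_probs={"exponential": 0.35, "weibull": 0.65}))
```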

Relevance: 80.00%

Abstract:

This article investigates automatic techniques for finding an optimal feature model for a transition-based dependency parser. We present a comparative study of search algorithms, validation schemes, and decision rules, showing at the same time that with our methods it is possible to obtain complex models that yield better results than models based on default configurations.
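
A minimal sketch of the kind of automatic feature-model search discussed above: greedy forward selection over candidate feature templates, with a cross-validation score and a simple decision rule (keep a template only if it improves the score by a margin). The templates, the scoring stub, and the margin are placeholders, not the paper's parser or its evaluation setup.

```python
import random

# Hypothetical feature templates for a transition-based parser.
CANDIDATE_TEMPLATES = ["stack0.form", "stack0.pos", "buffer0.form",
                       "buffer0.pos", "stack1.pos", "buffer1.pos"]

def cross_val_score(feature_model):
    """Stub: pretend to train/evaluate a parser; larger models score slightly higher."""
    random.seed(len(feature_model) + sum(len(t) for t in feature_model))
    return 0.70 + 0.02 * len(feature_model) + random.uniform(-0.01, 0.01)

def greedy_feature_search(candidates, min_gain=0.005):
    """Greedy forward selection guided by a simple decision rule on score gains."""
    model, best = [], cross_val_score([])
    improved = True
    while improved:
        improved = False
        for template in [t for t in candidates if t not in model]:
            score = cross_val_score(model + [template])
            if score - best > min_gain:            # decision rule: keep only clear gains
                model, best, improved = model + [template], score, True
    return model, best

model, score = greedy_feature_search(CANDIDATE_TEMPLATES)
print("selected feature model:", model, "score:", round(score, 3))
```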