994 results for reasonable time
Abstract:
The aim of this thesis was to optimize a heat pump that uses multiple heat sources, so as to obtain a heating system for residential buildings that is competitive when life cycle costs are considered. The objectives were to compile the information needed to calculate the life cycle costs of a residential building's heating system and to begin composing a design program for heat pump based heating systems. The purchased energy need of the residential building was examined. The features of the heat pump, the refrigerant considered, and the potential heat sources were examined to determine the heat production potential of heat pumps. The information necessary for life cycle cost calculation was also examined. The collected data were used in two case analyses to design selected heat production systems and to calculate their life cycle costs. On the basis of the case analyses, heat pump based hybrid heat production systems are very competitive against district heating in life cycle cost comparisons when the residential building uses a lot of energy. New buildings use considerably less energy, and the energy cost savings achieved with heat pump systems may not be enough to cover the relatively high investment cost within a reasonable time compared with a district heating system. The calculation method was found to require further development, at least to include the cooling energy need of the building. Cooling demand will continue to grow in the future, which improves the competitiveness of heat pump based heat production systems compared with other systems.
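The abstract above refers to a life cycle cost (LCC) comparison without giving the calculation itself. As a rough orientation only, such a comparison typically adds the investment cost to the discounted annual energy costs of each alternative; all figures, prices, and the discount rate in the sketch below are invented placeholders, not values from the thesis.

    # Minimal life cycle cost (LCC) comparison sketch. All numbers are
    # illustrative placeholders, not values from the thesis.

    def life_cycle_cost(investment_eur, annual_energy_kwh, energy_price_eur_per_kwh,
                        discount_rate, years):
        """Investment plus the present value of annual energy costs."""
        annual_cost = annual_energy_kwh * energy_price_eur_per_kwh
        present_value = sum(annual_cost / (1 + discount_rate) ** t
                            for t in range(1, years + 1))
        return investment_eur + present_value

    # Hypothetical comparison: heat pump hybrid vs. district heating over 25 years.
    heat_pump = life_cycle_cost(investment_eur=45_000, annual_energy_kwh=12_000,
                                energy_price_eur_per_kwh=0.15, discount_rate=0.03, years=25)
    district = life_cycle_cost(investment_eur=15_000, annual_energy_kwh=40_000,
                               energy_price_eur_per_kwh=0.09, discount_rate=0.03, years=25)
    print(f"heat pump LCC: {heat_pump:.0f} EUR, district heating LCC: {district:.0f} EUR")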
Abstract:
With the new optical network technologies, an ever-increasing amount of data can be carried on a single wavelength, up to 40 gigabits per second (Gbps). Individual data flows, however, require much less bandwidth. Traffic grooming is a technique that allows efficient use of the bandwidth offered by a wavelength: it consists in assembling several low-rate data flows into a single data entity that can be carried on one wavelength. Wavelength Division Multiplexing (WDM) makes it possible to carry several wavelengths on the same fibre. Using the two techniques together, WDM and traffic grooming, allows data on the order of terabits per second (Tbps) to be carried on a single optical fibre. Traffic protection in optical networks thus becomes a vital operation, since a single failure can disrupt thousands of users and cause heavy losses, up to several million dollars, for the network operator and its users. Protection consists in reserving additional capacity to carry the traffic in case of a failure in the network. This thesis studies traffic grooming and traffic protection techniques based on p-cycles in optical networks under dynamic traffic. Most existing work considers static traffic, where the state of the network and the traffic are given at the outset and do not change. In addition, most of that work relies on heuristics or on methods that have difficulty solving large instances. In the dynamic traffic context, two major difficulties are added to the problems studied, because the traffic in the network changes continually. The first is that the solution proposed for the previous period, even if it was optimized, is not necessarily optimized or optimal for the current period, so the solution must be re-optimized. The second is that solving the problem for a given period differs from solving it for the initial period, because the connections already active in the network must not be disturbed too much at each time period. Our study of traffic grooming under dynamic traffic proposes different scenarios for coping with this type of traffic, with the objective of maximizing the bandwidth of the connections accepted at each time period. Mathematical formulations of the different scenarios considered for the grooming problem are proposed. Our work on the protection problem considers two types of p-cycles: those protecting links (basic p-cycles) and FIPP p-cycles (p-cycles protecting paths). This work first proposed different scenarios for managing protection p-cycles under dynamic traffic; a study of the stability of p-cycles under dynamic traffic was then carried out. Formulations of the different scenarios were proposed, and the solution methods used make it possible to tackle larger problems than those reported in the literature.
We rely on column generation to enumerate the most promising cycles implicitly. For the path-protecting, or FIPP, p-cycles, we proposed formulations for the master problem and the pricing problem, and we used a hierarchical decomposition of the problem that lets us obtain better results in a reasonable time. As for the basic p-cycles, we studied the stability of FIPP p-cycles under dynamic traffic. The results show that, depending on the optimization criterion, both the basic (link-protecting) p-cycles and the FIPP (path-protecting) p-cycles can be very stable.
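The abstract above assumes familiarity with how a p-cycle protects traffic: a pre-configured cycle provides one backup path for a failed link lying on the cycle and two backup paths for a failed straddling link (both endpoints on the cycle, but the link itself not part of it). A minimal sketch of that counting rule, using a hypothetical cycle and hypothetical links, not data from the thesis:

    # Minimal sketch of the basic (link-protecting) p-cycle rule:
    # an on-cycle link failure is protected by the remaining arc of the cycle (1 path),
    # a straddling link failure is protected by both arcs of the cycle (2 paths).

    def protection_paths(cycle, link):
        """Number of backup paths a p-cycle provides for a failed link."""
        nodes = set(cycle)
        cycle_links = {frozenset((cycle[i], cycle[(i + 1) % len(cycle)]))
                       for i in range(len(cycle))}
        link = frozenset(link)
        if link in cycle_links:
            return 1                      # on-cycle link
        if link <= nodes:
            return 2                      # straddling link
        return 0                          # not protected by this cycle

    cycle = ["A", "B", "C", "D", "E"]     # hypothetical p-cycle A-B-C-D-E-A
    for failed in [("A", "B"), ("B", "D"), ("A", "F")]:
        print(failed, "->", protection_paths(cycle, failed), "backup path(s)")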
Abstract:
This thesis shows how total energies and excitation energies of atoms and ions can be calculated using atomic many-body perturbation theory. The perturbation series first had to be derived with computer-algebraic methods. Using the Maple package APEX, which was developed for this purpose, this was done up to fourth order for closed-shell systems and for systems with one active electron or hole; owing to their large number, the corresponding terms could not be reproduced here. The next step was the analytical angular reduction, carried out with the Maple package RACAH, which was adapted and extended accordingly. Only at this stage was the spherical symmetry of the atomic reference state exploited, leading to a considerable simplification of the perturbation terms. The second part of this work deals with the numerical evaluation of the perturbation series treated analytically up to that point. Building on the Fortran package Ratip, a Dirac-Fock program for closed-shell systems was developed, based on the matrix Dirac-Fock method presented in Chapter 3. Within this environment it became possible to evaluate the perturbation terms numerically. It quickly became apparent that this can only be done in a reasonable time frame if the corresponding radial integrals are kept in the computer's main memory. Because of the very large number of these integrals, this placed high demands on the hardware used, which is the main reason why the third-order corrections could only be computed in part and the fourth-order corrections not at all. Finally, the correlation energies of helium-like systems as well as of neon, argon, and mercury were calculated and compared with literature values. Lithium-like systems, sodium, potassium, and thallium were also investigated, considering the lowest states of the valence electron. Ionization energies of the superheavy elements 113 and 119 conclude this work.
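The fourth-order terms derived with APEX are, as noted, too numerous to reproduce. For orientation only, the standard textbook second-order many-body perturbation theory correction for a closed-shell reference (not one of the higher-order terms actually treated in the thesis) reads, with a, b occupied and r, s virtual spin orbitals, antisymmetrized two-electron integrals, and orbital energies from the (Dirac-)Fock reference:

    % Standard second-order MBPT correlation energy, for orientation only
    E^{(2)} \;=\; \frac{1}{4}\sum_{ab}\sum_{rs}
    \frac{\bigl|\langle ab \,\|\, rs\rangle\bigr|^{2}}
         {\varepsilon_a + \varepsilon_b - \varepsilon_r - \varepsilon_s}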
Abstract:
In computer graphics, global illumination algorithms take into account not only the light that comes directly from the sources, but also the interreflections of light. Such algorithms produce very realistic images, but at a high computational cost, especially when dealing with complex environments. Parallel computation has been successfully applied to such algorithms in order to make it possible to compute highly realistic images in a reasonable time. We introduce here a speculation-based parallel solution for a global illumination algorithm in the context of radiosity, in which we have taken advantage of the hierarchical nature of such an algorithm.
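The abstract does not restate the underlying radiosity computation. As background, the classical radiosity system B_i = E_i + rho_i * sum_j F_ij B_j is commonly solved iteratively, and it is this gathering step that hierarchical and parallel (here, speculative) variants accelerate. A minimal Jacobi-style sketch with invented emission, reflectivity, and form-factor values:

    # Minimal Jacobi iteration for the classical radiosity equation
    #   B_i = E_i + rho_i * sum_j F_ij * B_j
    # Emission E, reflectivity rho, and form factors F below are made up;
    # a hierarchical solver would refine F adaptively instead of storing it densely.
    import numpy as np

    E = np.array([1.0, 0.0, 0.0])            # only patch 0 emits light
    rho = np.array([0.3, 0.7, 0.5])          # diffuse reflectivities
    F = np.array([[0.0, 0.4, 0.3],           # form factors F[i][j]
                  [0.2, 0.0, 0.5],
                  [0.1, 0.6, 0.0]])

    B = E.copy()
    for _ in range(50):                      # iterate until (approximately) converged
        B = E + rho * (F @ B)
    print("patch radiosities:", B)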
Abstract:
The period, known to UK farmers and processors as the "spring flush", when the cows' diet changes from dry feed to spring pasture, has long been established as a time of change in milk properties and processing characteristics. Although it is believed to be a time when problems in processing are most likely to occur (e.g. milk that does not form clots or forms weak gels during cheesemaking), there is little evidence in the literature of detailed changes in milk composition and their impact on product manufacture. In this study, a range of physicochemical properties were analysed in milk collected from five commercial dairy herds before, during and after the spring flush period of 2006. In particular, total and ionic calcium contents of milk were studied in relation to other parameters including rennet clotting, acid gel properties, heat coagulation, alcohol stability, micelle size and zeta potential. Total divalent cations were significantly reduced from 35.4 to 33.4 mmol.L-1 during the study, while ionic calcium was reduced from 1.48 to 1.40 mmol.L-1 over the same period. Many parameters varied significantly between the sample dates. However, there was no evidence to suggest that any of the milk samples would have been unsuitable for processing - e.g. there were no samples that did not form clots with chymosin within a reasonable time or formed especially weak rennet or acid gels. A number of statistically significant correlations were found within the data, including ionic calcium concentration and pH; rennet clotting time (RCT) and micelle diameter; and RCT and ethanol stability. Overall, while there were clear variations in milk composition and properties over this period, there was no evidence to support the view that serious processing problems are likely during the change from dry feed to spring pasture.
Abstract:
Classical measures of network connectivity are the number of disjoint paths between a pair of nodes and the size of a minimum cut. For standard graphs, these measures can be computed efficiently using network flow techniques. However, in the Internet at the level of autonomous systems (ASs), referred to as the AS-level Internet, routing policies impose restrictions on the paths that traffic can take in the network. These restrictions can be captured by the valley-free path model, which assumes a special directed graph model in which edge types represent relationships between ASs. We consider the adaptation of the classical connectivity measures to the valley-free path model, where they are NP-hard to compute. Our first main contribution is a set of algorithms for computing disjoint paths and minimum cuts in the valley-free path model. These algorithms are useful for ASs that want to evaluate different options for selecting upstream providers to improve the robustness of their connection to the Internet. Our second main contribution is an experimental evaluation of our algorithms on four types of directed graph models of the AS-level Internet produced by different inference algorithms. Most importantly, the evaluation shows that our algorithms are able to compute optimal solutions to realistically sized instances of the connectivity problems in the valley-free path model in a reasonable time. Furthermore, our experimental results provide information about the characteristics of the directed graph models of the AS-level Internet produced by different inference algorithms. It turns out that (i) we can quantify the difference between the undirected AS-level topology and the directed graph models with respect to fundamental connectivity measures, and (ii) the different inference algorithms yield topologies that are similar with respect to connectivity and different with respect to the types of paths that exist between pairs of ASs.
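For readers unfamiliar with the valley-free path model referenced above: a path is valley-free if it consists of zero or more customer-to-provider edges, followed by at most one peer-to-peer edge, followed by zero or more provider-to-customer edges. A minimal check of that pattern, with hypothetical edge-type sequences (this illustrates only the path-validity rule, not the disjoint-path or minimum-cut algorithms of the paper):

    # Minimal validity check for the valley-free path model:
    # a path must match the pattern  up* peer? down*   where
    #   "up"   = customer-to-provider edge,
    #   "peer" = peer-to-peer edge,
    #   "down" = provider-to-customer edge.
    # The edge-type sequences below are hypothetical examples.

    def is_valley_free(edge_types):
        rank = {"up": 0, "peer": 1, "down": 2}
        seen_peer = False
        last = -1
        for t in edge_types:
            if rank[t] < last:            # climbing again after peering/descending
                return False
            if t == "peer":
                if seen_peer:
                    return False          # at most one peer edge allowed
                seen_peer = True
            last = rank[t]
        return True

    print(is_valley_free(["up", "up", "peer", "down"]))   # True
    print(is_valley_free(["up", "down", "up"]))           # False: contains a "valley"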
Abstract:
In a world where massive amounts of data are recorded on a large scale, we need data mining technologies to gain knowledge from the data in a reasonable time. The Top Down Induction of Decision Trees (TDIDT) algorithm is a very widely used technology for predicting the classification of newly recorded data. However, alternative technologies have been derived that often produce better rules but do not scale well on large datasets. One such alternative to TDIDT is the PrismTCS algorithm. PrismTCS performs particularly well on noisy data but does not scale well on large datasets. In this paper we introduce Prism and investigate its scaling behaviour. We describe how we improved the scalability of the serial version of Prism and investigate its limitations. We then describe our work to overcome these limitations by developing a framework for parallelising algorithms of the Prism family and similar algorithms. We also present the scale-up results of a first prototype implementation.
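The abstract refers to the Prism family of separate-and-conquer rule induction algorithms without describing the induction step. The sketch below illustrates the basic Prism idea on an invented toy dataset: specialize a rule for a target class by repeatedly adding the attribute-value test with the highest conditional probability of that class, then remove the covered examples and repeat. The tie-breaking and pre-pruning details that distinguish PrismTCS, and the parallelisation framework of the paper, are omitted.

    # Minimal sketch of the basic Prism "separate and conquer" step for one target class.
    # Each example is (attribute->value dict, class label); the toy data is made up.

    def induce_rules(examples, target):
        rules, remaining = [], list(examples)
        while any(label == target for _, label in remaining):
            rule, covered = {}, remaining
            # Specialize until the rule covers only target-class examples.
            while any(label != target for _, label in covered):
                best, best_p = None, -1.0
                for attrs, _ in covered:
                    for a, v in attrs.items():
                        if a in rule:
                            continue
                        subset = [e for e in covered if e[0].get(a) == v]
                        p = sum(1 for _, label in subset if label == target) / len(subset)
                        if p > best_p:
                            best, best_p = (a, v), p
                if best is None:          # no further specialization possible
                    break
                rule[best[0]] = best[1]
                covered = [e for e in covered if e[0].get(best[0]) == best[1]]
            rules.append(rule)
            remaining = [e for e in remaining
                         if not all(e[0].get(a) == v for a, v in rule.items())]
        return rules

    data = [({"outlook": "sunny", "windy": "no"}, "play"),
            ({"outlook": "sunny", "windy": "yes"}, "stay"),
            ({"outlook": "rainy", "windy": "no"}, "stay"),
            ({"outlook": "overcast", "windy": "yes"}, "play")]
    print(induce_rules(data, "play"))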
Abstract:
Background. Through a national policy agreement, over 167 million Euros will be invested in the Swedish National Quality Registries (NQRs) between 2012 and 2016. One of the policy agreement's intentions is to increase the use of NQR data for quality improvement (QI). However, the evidence is fragmented as to how the use of medical registries and the like leads to quality improvement, and little is known about non-clinical use. The aim was therefore to investigate the perspectives of Swedish politicians and administrators on quality improvement based on national registry data. Methods. Politicians and administrators from four county councils were interviewed. A qualitative content analysis guided by the Consolidated Framework for Implementation Research (CFIR) was performed. Results. The politicians' and administrators' perspectives on the use of NQR data for quality improvement were mainly assigned to three of the five CFIR domains. In the domain of intervention characteristics, data reliability and access within a reasonable time were not considered entirely satisfactory, making it difficult for the politico-administrative leaderships to initiate, monitor, and support timely QI efforts. Still, politicians and administrators trusted the idea of using the NQRs as a basis for quality improvement. In the domain of inner setting, the organizational structures were not sufficiently developed to utilize the advantages of the NQRs, and readiness for implementation appeared to be inadequate for two reasons. Firstly, the resources for data analysis and quality improvement were not considered sufficient at the politico-administrative or clinical level. Secondly, deficiencies in leadership engagement at multiple levels were described, and there was a lack of consensus on the politicians' role and level of involvement. Regarding the domain of outer setting, there was a lack of communication and cooperation between the county councils and the national NQR organizations. Conclusions. The Swedish experiences show that a government-supported national system of well-funded, well-managed, and reputable national quality registries needs favorable local politico-administrative conditions to be used for quality improvement; according to local politicians and administrators, such conditions are not yet in place.
Abstract:
Model Predictive Control (MPC) is a control method that solves, in real time, an optimal control problem over a finite horizon. The finiteness of the horizon is both the reason for MPC's success and its main limitation. In operational water resources management, MPC has in fact been successfully employed for controlling systems with a relatively short memory, such as canals, where the horizon length is not an issue. For reservoirs, which generally have a longer memory, MPC applications are presently limited to short-term management only. Short-term reservoir management can be used effectively to deal with fast processes, such as floods, but it is not capable of looking sufficiently far ahead to handle long-term issues, such as droughts. To overcome this limitation, we propose an Infinite Horizon MPC (IH-MPC) solution that is particularly suitable for reservoir management. We propose to structure the input signal by means of orthogonal basis functions, thereby reducing the optimization argument to a finite number of variables and making the control problem solvable in a reasonable time. We applied this solution to the management of the Manantali Reservoir, a yearly reservoir located in Mali, on the Senegal River, that affects the water systems of Mali, Senegal, and Mauritania. The long-term horizon offered by IH-MPC is necessary to deal with the strongly seasonal climate of the region.
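The central idea in the abstract, structuring the input signal with orthogonal basis functions so that a very long horizon still leaves only a handful of decision variables, can be illustrated compactly. The reservoir mass balance, inflow and demand patterns, cost weights, and Fourier-type basis below are invented and far simpler than the Manantali case; they only show how the coefficient vector becomes the optimization argument.

    # Minimal sketch of basis-function parameterization of the control signal:
    # the release trajectory u(t) = sum_k c_k * phi_k(t) is described by a handful
    # of coefficients c_k, so even a very long horizon stays a small optimization.
    # Reservoir model, demand pattern, and cost weights are invented for illustration.
    import numpy as np
    from scipy.optimize import minimize

    T = 365 * 3                                            # long horizon (days)
    t = np.arange(T)
    inflow = 50 + 40 * np.sin(2 * np.pi * t / 365)         # seasonal inflow
    demand = 60 + 10 * np.sin(2 * np.pi * (t - 90) / 365)  # seasonal demand

    # Fourier-type basis: constant term plus annual sine/cosine and one harmonic.
    basis = np.stack([np.ones(T),
                      np.sin(2 * np.pi * t / 365), np.cos(2 * np.pi * t / 365),
                      np.sin(4 * np.pi * t / 365), np.cos(4 * np.pi * t / 365)])

    def cost(coeffs, s0=1000.0, s_min=200.0):
        u = np.clip(coeffs @ basis, 0.0, None)     # release, constrained non-negative
        storage = s0 + np.cumsum(inflow - u)       # simple mass balance
        deficit = np.clip(demand - u, 0.0, None)   # unmet demand
        drawdown = np.clip(s_min - storage, 0.0, None)
        return np.sum(deficit**2) + 10.0 * np.sum(drawdown**2)

    result = minimize(cost, x0=np.zeros(5), method="Nelder-Mead",
                      options={"maxiter": 5000})
    print("optimized basis coefficients:", np.round(result.x, 2))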
Abstract:
However advanced computerization may be (the interconnection, by the State, of public bodies and entities through the Internet), no machine will replace the dramas of contemporary man, especially of those who have always been excluded from citizenship. The problems of individuals, above all the most vulnerable, in concrete cases cannot always be solved by machines that are distant from one another and, worse, located far away, without direct access to the human being who operates the machine. This citizen, whose greatest protection and guarantee granted by the rule of law lies in the principle of human dignity, has the right to be treated with dignity by the State that monopolizes social pacification through jurisdiction, especially when the defendant is the State itself, as is the case in the subsystem of the Federal Small Claims Courts (Juizados Especiais Federais), where the defendants are the Union, its autarchic entities, or federal public companies. The humanization of the service provided to the citizen, who places in the (Federal) Judiciary, through the Federal Small Claims Court subsystem, the last hope of redress for rights violated by the (Federal) Administration itself, will materialize through a new proposal for delivering this public service: unitariedade, the permanent, fixed concentration of all the participants of this subsystem in a single place (Judiciary and Executive together), in the cities with the greatest social demand, through the associated management of the jurisdictional public service by the Judiciary and the Executive (and occasionally the Legislature), so that the delivery of the disputed good, or the pacification of the conflict through alternative means of dispute resolution such as conciliation, takes place in an environment of respect for the human being, that is, within a reasonable time, with standards of service efficiency compatible with the present day and, above all, in an effective manner (with full effectiveness). The Small Claims Courts, which were created to be fast, agile, and effective, cannot allow themselves to be trivialized and to acquire the same stigmas of slowness, ineffectiveness, and disregard for the quality of service to users. Such humanization of this judicial subsystem, the Federal Small Claims Court, through the unitariedade of this public service, as proposed in this dissertation, fulfils and gives concrete effect to constitutional values and principles without requiring legislative change, besides reinforcing the legitimacy of the State and consolidating citizenship. What this dissertation seeks is to take the Judiciary out of its isolation, which is fundamental above all in terms of effectiveness (the enforcement of its decisions and the prevention or postponement of litigation through mechanisms of permanent prior conciliation). The dissertation proposes a new institutional design among the branches of the Republic for the delivery of the jurisdictional public service, seeking to contribute to the improvement of judicial activities in the broad sense, that is, administrative or non-jurisdictional activities (an atypical function of the Judiciary). The proposed paradigm, besides valuing consensualism, implies the effectiveness of legal norms and the efficiency of the system.
Abstract:
The innovative management model showed that it is possible to create an environment of excellence in which the Judiciary is recognized, respected, and trusted by those under its jurisdiction, insofar as it ensures effective adjudication within a reasonable time, lending legitimacy and credibility to its decisions, under the vision of a proactive judge with pre-defined strategic objectives, an idealizing outlook, and an integrated, motivated, and committed team. The innovative management model was tried out in the Small Claims Civil Court (Juizado Especial Cível) of the judicial district of Jaru, in the State of Rondônia, where a logical-legal routine was applied to the procedural flow, without loss of quality and in full harmony with the normative postulates of the Small Claims Civil Court and the applicable constitutional rules.
Abstract:
This study analyzes the judicial reorganization proceedings (recuperação judicial) initiated in the business courts of the capital district of the Court of Justice of the State of Rio de Janeiro from the entry into force of the Brazilian Business Recovery Law (February 2005) until 31/06/2011. In addition to measuring the average duration of each of the stages provided for in the Business Recovery Law (granting of the processing of the judicial reorganization, granting of the judicial reorganization, and closing of the proceeding after fulfilment of all the obligations under the plan falling due within two years of the granting of the reorganization), I also seek to verify whether any petitioning company has in fact managed to recover. For this purpose, I consider recovered a company that, after the closing of the proceeding, is fully complying with its recovery plan without any subsequent bankruptcy petition. Considering that the Business Recovery Law is already in its seventh year in force, and that the legislator designed the proceeding to last at most three years, I see no obstacle to adopting this concept, since there has been enough time for proceedings of this kind to begin and end. The study found that the average time taken to complete the stages exceeds the limits of reasonableness, and that no company had managed to recover by the end of the research, there being even cases in which the judicial reorganization was converted into bankruptcy.
Abstract:
This dissertation addresses the limits and possibilities of realizing the fundamental right to a reasonable duration of process in the Brazilian legal system. From this perspective, we analyze a concept of reasonable duration of process consistent with Brazilian civil procedure; the relationship between efficiency, effectiveness, legal certainty, and the reasonable time of adjudication; the formal recognition of the fundamental right to a reasonable duration of process in the Constitution of 1988; and the immediate applicability of this fundamental right. As indicated, the crisis of the Judiciary and procedural delay are problems directly related to the limits and possibilities of realizing the fundamental right under study. We also present some mechanisms that can be used to overcome these problems. The subject was developed on the basis of the constitutional interpretation of fundamental rights, an approach grounded in a methodology that combines the normative and the empirical-dogmatic fields so as to give effect to the fundamental right to a reasonable duration of process. As a methodological choice, we study the issue in its judicial aspect, more specifically in the field of civil procedure. Finally, from a critical and analytical viewpoint, we set out our conclusions, which demonstrate the possibilities of overcoming the limits imposed on the immediate implementation of the fundamental right to a reasonable duration of process in our legal system.
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)