843 results for Predictable routing
Abstract:
GPS tracking of mobile objects provides spatial and temporal data for a broad range of applications, including traffic management and control and transportation routing and planning. Previous transport research has focused on GPS tracking data as an appealing alternative to travel diaries. Moreover, GPS-based data are gradually becoming a cornerstone for real-time traffic management. Tracking data of vehicles from GPS devices are, however, susceptible to measurement errors, a neglected issue in transport research. By conducting a randomized experiment, we assess the reliability of GPS-based traffic data on geographical position, velocity, and altitude for three types of vehicles: bike, car, and bus. We find the geographical positioning reliable, but with an error greater than postulated by the manufacturer and a non-negligible risk of aberrant positioning. Velocity is slightly underestimated, whereas altitude measurements are unreliable.
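As an illustration of the kind of positional-error assessment described above, the sketch below computes the horizontal error of GPS fixes against a surveyed reference point using the haversine distance. The coordinates, function names, and error threshold are hypothetical and are not taken from the study.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS84 points."""
    r = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical surveyed reference point and recorded GPS fixes (not study data).
reference = (57.7089, 11.9746)
fixes = [(57.70893, 11.97465), (57.70901, 11.97440), (57.70860, 11.97520)]

errors = [haversine_m(*reference, lat, lon) for lat, lon in fixes]
print(f"mean horizontal error: {sum(errors) / len(errors):.1f} m")
print(f"largest error (candidate aberrant fix): {max(errors):.1f} m")
```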
Abstract:
This paper elaborates the routing of a cable cycle through the available routes in a building to link a set of devices in the most reasonable way. Despite similarities to other NP-hard routing problems, the goal is not only to minimize the cost (the length of the cycle) but also to increase the reliability of the path (in case of a cable cut), which is assessed by a risk factor. Since there is often a trade-off between the risk and length factors, a criterion for ranking candidates and deciding on the most reasonable solution is defined. A set of techniques is proposed to perform an efficient and exact search among candidates. A novel graph is introduced to reduce the search space and steer the search toward feasible and desirable solutions. Moreover, an admissible heuristic length estimate enables early detection of partial cycles that lead to unreasonable solutions. The results show that the method provides solutions which are both technically and financially reasonable. Furthermore, the proposed techniques prove very effective in reducing the computational time of the search to a reasonable amount.
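A minimal sketch of the ranking and pruning idea described in this abstract is given below. It is not the authors' actual method: the weighted length/risk criterion, the candidate data, and all names are assumptions made only for illustration.

```python
# Hypothetical candidate cycles: (name, length in metres, risk factor).
candidates = [
    ("cycle_a", 120.0, 0.30),
    ("cycle_b", 135.0, 0.10),
    ("cycle_c", 110.0, 0.55),
]

def score(length, risk, w_len=1.0, w_risk=100.0):
    """Assumed ranking criterion: weighted sum of cycle length and risk."""
    return w_len * length + w_risk * risk

ranked = sorted(candidates, key=lambda c: score(c[1], c[2]))
print("most reasonable candidate:", ranked[0][0])

def prune(partial_length, lower_bound_remaining, best_complete_score, risk_so_far):
    """Admissible pruning: if even an optimistic (lower-bound) completion cannot
    beat the best complete candidate found so far, abandon the partial cycle."""
    optimistic = score(partial_length + lower_bound_remaining, risk_so_far)
    return optimistic >= best_complete_score

best_score = score(ranked[0][1], ranked[0][2])
print("prune hopeless partial cycle:", prune(80.0, 70.0, best_score, 0.2))
```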
Abstract:
The current system of controlling oil spills involves a complex relationship of international, federal and state law, which has not proven to be very effective. The multiple layers of regulation often leave shipowners unsure of the laws facing them. Furthermore, nations have had difficulty enforcing these legal requirements. This thesis deals with the role marine insurance can play within the existing system of legislation to provide a strong preventative influence that is simple and cost-effective to enforce. In principle, insurance has two ways of enforcing higher safety standards and limiting the risk of an accident occurring. The first is through the use of insurance premiums that are based on the level of care taken by the insured. This means that a person engaging in riskier behavior faces a higher insurance premium, because their actions increase the probability of an accident occurring. The second method, available to the insurer, is collectively known as cancellation provisions or underwriting clauses. These are clauses written into an insurance contract that invalidate the agreement when certain conditions are not met by the insured. The problem has been that obtaining information about the behavior of an insured party requires monitoring, and that incurs a cost to the insurer. The application of these principles proves to be a more complicated matter. The modern marine insurance industry is a complicated system of multiple contracts, through different insurers, that covers the many facets of oil transportation. Its business practices have resulted in policy packages that cross the neat bounds of individual, specific insurance coverage. This paper shows that insurance can improve safety standards in three general areas: crew training, hull and equipment construction and maintenance, and routing schemes and exclusionary zones. For crew, hull and equipment, underwriting clauses can be used to ensure that minimum standards are met by the insured. Premiums can then be structured to reflect the additional care taken by the insured above and beyond these minimum standards. Routing schemes are traffic flow systems applied to congested waterways, such as the entrance to New York harbor. Using natural obstacles or manmade dividers, ships are separated into two lanes of opposing traffic, similar to a road. Exclusionary zones are marine areas designated off limits to tanker traffic, either because of a sensitive ecosystem or because local knowledge of the region is required to ensure safe navigation. Underwriting clauses can be used to nullify an insurance contract when a tanker is not in compliance with established exclusionary zones or routing schemes.
Abstract:
The Short-term Water Information and Forecasting Tools (SWIFT) is a suite of tools for flood and short-term streamflow forecasting, consisting of a collection of hydrologic model components and utilities. Catchments are modeled using conceptual subareas and a node-link structure for channel routing. The tools comprise modules for calibration, model state updating, output error correction, ensemble runs and data assimilation. Given the combinatorial nature of the modelling experiments and the sub-daily time steps typically used for simulations, the volume of model configurations and time series data is substantial, and its management is not trivial. SWIFT is currently used mostly for research purposes but has also been used operationally, with intersecting but significantly different requirements. Early versions of SWIFT used mostly ad hoc text files handled via Fortran code, with limited use of netCDF for time series data. The configuration and data handling modules have since been redesigned. The model configuration now follows a design in which the data model is decoupled from the on-disk persistence mechanism. For research purposes the preferred on-disk format is JSON, to leverage numerous software libraries in a variety of languages, while retaining the legacy option of custom tab-separated text formats when that is the preferred access arrangement for the researcher. By decoupling the data model and data persistence, it is much easier to use, for instance, relational databases interchangeably to provide stricter provenance and audit trail capabilities in an operational flood forecasting context. For the time series data, given the volume and required throughput, text-based formats are usually inadequate. A schema derived from the CF conventions has been designed to handle time series efficiently for SWIFT.
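The decoupling of the configuration data model from its on-disk persistence described above can be illustrated with the sketch below: a plain data object with interchangeable JSON and tab-separated back ends. The class, field, and function names are hypothetical and do not reflect SWIFT's actual API.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class SubareaConfig:
    """Hypothetical in-memory data model for one conceptual subarea."""
    name: str
    area_km2: float
    routing: str  # e.g. an identifier for the channel-routing scheme

def save_json(cfg: SubareaConfig, path: str) -> None:
    """JSON persistence, convenient for research use across many languages."""
    with open(path, "w") as f:
        json.dump(asdict(cfg), f, indent=2)

def save_tsv(cfg: SubareaConfig, path: str) -> None:
    """Legacy-style tab-separated persistence for researchers who prefer it."""
    with open(path, "w") as f:
        for key, value in asdict(cfg).items():
            f.write(f"{key}\t{value}\n")

# The same data model can be persisted either way; a database back end could be
# substituted for provenance and audit trails in an operational setting.
cfg = SubareaConfig(name="subarea_1", area_km2=42.5, routing="node_link")
save_json(cfg, "subarea_1.json")
save_tsv(cfg, "subarea_1.tsv")
```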
Abstract:
The importance of small and medium enterprises for the economy of a country is fundamental because they play several strategic social and economic roles. Besides contributing to the production of national wealth, they also counterbalance the vulnerabilities of large companies, providing the necessary economic balance. Socially, their contribution is directly related to reducing unemployment; they also function as a source of stability in the community, as a means of reducing inequalities in the distribution of income among regions and economic groups, and contribute decisively to limiting migration to urban areas. The capacity to innovate is now a key component for the survival and development of small organizations. The future is increasingly less predictable using past parameters, and the business world is more turbulent. The objective of this work is to point out the need to revise the models which serve as examples for the adoption of competitive alternatives of development, and to offer theoretical-practical knowledge to make possible the implementation of an innovative culture in small enterprises. It emphasizes, moreover, that in the present context, flexibility and the skills to work in ambiguous situations and to find creative solutions become central concerns of businessmen and managers.
Abstract:
The evolution of integrated circuit technologies demands the development of new CAD tools. The traditional development of digital circuits at the physical level is based on libraries of cells. These cell libraries offer a certain predictability of the electrical behavior of the design due to the previous characterization of the cells. Besides, different versions of each cell are required so that delay and power consumption characteristics are taken into account, increasing the number of cells in a library. Automatic full-custom layout generation is an increasingly important alternative to cell-based generation approaches. This strategy implements transistors and connections according to patterns defined by algorithms, so it is possible to implement any logic function, avoiding the limitations of a library of cells. Analysis and estimation tools must provide predictability for automatic full-custom layouts; they must be able to work with layout estimates and to generate information related to delay, power consumption and area occupation. This work includes research into new methods of physical synthesis and the implementation of an automatic layout generator in which the cells are generated at the moment of layout synthesis. The research investigates different strategies for the placement of elements (transistors, contacts and connections) in a layout and their effects on area occupation and circuit delay. The presented layout strategy applies delay optimization through integration with a gate sizing technique, performed in such a way that the folding method allows individual discrete sizing of transistors. The main characteristics of the proposed strategy are: power supply lines between rows, routing over the layout (channel routing is not used), circuit routing performed before layout generation, and layout generation targeting delay reduction by the application of the sizing technique. The possibility of implementing any logic function, without restrictions imposed by a library of cells, allows circuit synthesis with optimization of the number of transistors. This reduction in the number of transistors decreases delay and power consumption, mainly the static power consumption in submicrometer circuits. Comparisons between the proposed strategy and other well-known methods are presented to validate the proposed method.
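As a toy illustration of individual discrete sizing of transistors under a folding scheme, the sketch below picks, for each transistor, the smallest allowed folded width that meets a simple RC delay target. The delay model, the width grid, and all names are assumptions made for illustration, not the thesis's actual sizing method.

```python
# Hypothetical discrete widths (in unit fingers) made available by folding.
ALLOWED_FOLDS = [1, 2, 3, 4, 6, 8]

def stage_delay(folds: int, load_capacitance: float, unit_resistance: float = 1.0) -> float:
    """Toy RC delay: driving resistance shrinks as fingers are added in parallel."""
    return (unit_resistance / folds) * load_capacitance

def size_transistor(load_capacitance: float, delay_target: float) -> int:
    """Smallest discrete folding meeting the delay target (or the largest available)."""
    for folds in ALLOWED_FOLDS:
        if stage_delay(folds, load_capacitance) <= delay_target:
            return folds
    return ALLOWED_FOLDS[-1]

# Hypothetical loads for three transistors of a gate and a common delay target.
loads = [2.0, 5.0, 9.0]
sizes = [size_transistor(c, delay_target=1.5) for c in loads]
print("chosen folds per transistor:", sizes)
```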
Abstract:
In this thesis, we present a novel approach to combining both reuse and prediction of dynamic sequences of instructions, called Reuse through Speculation on Traces (RST). Our technique allows the dynamic identification of instruction traces that are redundant or predictable, and the reuse (speculative or not) of these traces. RST addresses an issue present in Dynamic Trace Memoization (DTM): traces not being reused because some of their inputs are not ready for the reuse test. These traces were measured to be 69% of all reusable traces in previous studies. One of the main advantages of RST over simply combining a value prediction technique with an unrelated reuse technique is that RST does not require extra tables to store the values to be predicted. Applying reuse and value prediction in unrelated mechanisms at the same time may require a prohibitive amount of table storage. In RST, the values are already stored in the Trace Memoization Table, and there is no extra cost in reading them compared with a non-speculative trace reuse technique. The input context of each trace (the input values of all instructions in the trace) already stores the values for the reuse test, which may also be used for prediction. Our main contributions include: (i) a speculative trace reuse framework that can be adapted to different processor architectures; (ii) a specification of the modifications in a superscalar, superpipelined processor needed to implement our mechanism; (iii) a study of implementation issues related to this architecture; (iv) a study of the performance limits of our technique; (v) a performance study of a realistic, constrained implementation of RST; and (vi) simulation tools, representing a superscalar, superpipelined processor in detail, that can be used in other studies. In a constrained architecture with realistic confidence, our RST technique achieves average speedups (harmonic means) of 1.29 over the baseline architecture without reuse and 1.09 over a non-speculative trace reuse technique (DTM).
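A minimal sketch of the reuse-versus-prediction decision described above: a trace memoization table keyed by the trace address, where a trace with all inputs ready is reused non-speculatively and a trace with pending inputs is reused speculatively, using the stored input context as the predicted values. The table layout and names are hypothetical, not the RST hardware design.

```python
from dataclasses import dataclass

@dataclass
class TraceEntry:
    """Hypothetical entry of a trace memoization table."""
    input_context: tuple   # input values of all instructions in the trace
    output_context: tuple  # results produced when the trace was recorded

trace_table = {
    0x400100: TraceEntry(input_context=(3, 7), output_context=(10, 21)),
}

def try_reuse(pc: int, live_inputs: tuple, inputs_ready: bool):
    """Return (outputs, speculative?) or None if the trace cannot be (re)used."""
    entry = trace_table.get(pc)
    if entry is None:
        return None
    if inputs_ready:
        # Non-speculative reuse: live inputs available and matching the stored context.
        return (entry.output_context, False) if live_inputs == entry.input_context else None
    # Speculative reuse: the stored input context doubles as the predicted values,
    # so no separate value-prediction table is needed; must be verified later.
    return entry.output_context, True

print(try_reuse(0x400100, (3, 7), inputs_ready=True))      # non-speculative hit
print(try_reuse(0x400100, (None, 7), inputs_ready=False))  # speculative reuse
```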
Abstract:
With the ever-increasing demand for high-complexity consumer electronic products, market pressures demand faster product development and lower cost. SoC-based design can provide the required design flexibility and speed by allowing the use of IP cores. However, testing costs in the SoC environment can reach a substantial percentage of the total production cost. Analog testing costs may dominate the total test cost, as testing of analog circuits usually requires functional verification of the circuit and special testing procedures. For RF analog circuits commonly used in wireless applications, testing is further complicated because of the high frequencies involved. In summary, reducing analog test cost is of major importance in the electronics industry today. BIST techniques for analog circuits, though potentially able to solve the analog test cost problem, have some limitations. Some techniques are circuit dependent, requiring reconfiguration of the circuit being tested, and are generally not usable in RF circuits. In the SoC environment, as processing and memory resources are available, they could be used in the test. However, the overhead of adding extra AD and DA converters may be too costly for most systems, and analog routing of signals may not be feasible and may introduce signal distortion. In this work a simple, low-cost digitizer is used instead of an ADC in order to enable analog testing strategies to be implemented in a SoC environment. Thanks to the low analog area overhead of the converter, multiple analog test points can be observed and specific analog test strategies can be enabled. As the digitizer is always connected to the analog test point, it is not necessary to include muxes and switches that would degrade the signal path. For RF analog circuits this is especially useful, as the circuit impedance is fixed and the influence of the digitizer can be accounted for in the design phase. Thanks to the simplicity of the converter, it is able to reach higher frequencies and enables the implementation of low-cost RF test strategies. The digitizer has been applied successfully in the testing of both low-frequency and RF analog circuits. Also, as testing is based on frequency-domain characteristics, nonlinear characteristics such as intermodulation products can also be evaluated. Specifically, practical results were obtained for prototyped baseband filters and a 100 MHz mixer. The application of the converter to noise figure evaluation was also addressed, and experimental results for low-frequency amplifiers using conventional opamps were obtained. The proposed method is able to enhance the testability of current mixed-signal designs, being suitable for the SoC environment used in many industrial products today.
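Because the test strategies above rely on frequency-domain characteristics, the sketch below shows one way intermodulation products might be evaluated from a digitized output: a two-tone stimulus, a mild cubic nonlinearity standing in for the device under test, and an FFT. The tone frequencies, sample rate, and distortion level are hypothetical and unrelated to the prototyped circuits.

```python
import numpy as np

# Hypothetical two-tone test: tones at 90 kHz and 110 kHz, 1 MHz sample rate.
fs, f1, f2, n = 1_000_000, 90_000, 110_000, 4096
t = np.arange(n) / fs
clean = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)
out = clean + 0.01 * clean ** 3  # assumed weak nonlinearity of the device under test

spectrum = np.abs(np.fft.rfft(out * np.hanning(n)))
freqs = np.fft.rfftfreq(n, d=1 / fs)

def level_db(f_target):
    """Spectral magnitude (dB) at the bin closest to f_target."""
    idx = np.argmin(np.abs(freqs - f_target))
    return 20 * np.log10(spectrum[idx])

# Third-order intermodulation products fall at 2*f1 - f2 and 2*f2 - f1.
print("tone level (dB):", level_db(f1))
print("IM3 levels (dB):", level_db(2 * f1 - f2), level_db(2 * f2 - f1))
```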
Abstract:
Acute viral bronchiolitis (AVB) is a respiratory disease that affects children mainly in the first year of life. Respiratory Syncytial Virus is responsible for approximately 75% of cases of acute viral bronchiolitis; however, other agents can also trigger similar disease, such as Adenovirus 1, 7, 3 and 21, Rhinovirus, Parainfluenza, Influenza, Metapneumovirus and, less frequently, Mycoplasma pneumoniae. AVB has a seasonal pattern and a benign course in most previously healthy infants; nevertheless, 0.5% to 2% require hospitalization, of whom 15% need intensive care, and of these only 3 to 8% develop ventilatory failure requiring mechanical ventilation. Mortality among previously healthy children is around 1% of hospitalized patients. The objective of this study is to identify prognostic factors in AVB and correlate them with length of hospital stay in previously healthy infants. During the winter of 2002, 219 patients under one year of age with a clinical diagnosis of acute viral bronchiolitis were followed in a cohort study. These patients were evaluated and classified according to a modified clinical score (DE BOECK et al., 1997) at admission, on the third day, and at hospital discharge. The actual length of stay was recorded, and the ideal length of stay was estimated according to the clinical discharge criteria defined by Wainwright et al. in 2003: no oxygen use for more than 10 hours, minimal or absent intercostal retraction, no parenteral medication, and ability to feed orally. The clinical score at admission was 3.88±1.81, and the mean duration of oxygen use was 5.3±3.83 days. These patients had an actual length of stay of 7.02±3.89 days and an ideal length of stay of 5.92±3.83 days (p<0.001). Taking ideal length of stay as the dependent variable in a logistic regression model, each one-point increase in the clinical score multiplied by 1.9 the odds of the patient remaining hospitalized for more than three days. We conclude that the length of stay of previously healthy infants with AVB can be predicted by the clinical score, supporting its use in the initial evaluation of these patients.
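To make the reported odds ratio concrete, the short sketch below shows how the odds of remaining hospitalized for more than three days scale multiplicatively with the clinical score when each additional point multiplies the odds by 1.9. It uses only the reported odds ratio and is purely illustrative; no baseline odds from the study are assumed.

```python
ODDS_RATIO_PER_POINT = 1.9  # reported effect of each one-point increase in the score

def relative_odds(score_increase: int) -> float:
    """Multiplicative change in the odds of a stay longer than three days."""
    return ODDS_RATIO_PER_POINT ** score_increase

for points in range(1, 5):
    print(f"+{points} score points -> odds multiplied by {relative_odds(points):.2f}")
```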
Abstract:
This is a study of the behavioral research that questions the empirical reliability of the rationality assumptions of the social sciences. What decades of behavioral research have been teaching us is that the great majority of the identified and documented cognitive tendencies that depart from the assumptions of Rational Choice Theory are by no means random; rather, they are systematic and predictable. The unifying idea of this work is that the behavioral research literature may allow us to model and predict behavior relevant to the law with more realistic assumptions about human behavior. However, some researchers paint an enthusiastic picture of the potential of such research to inform legal analysis and thereby commit certain oversights, defending generalizations unsupported by scientific evidence and coming close to mere rhetoric. Given this scenario, we should seek to ensure that the incorporation of behavioral research evidence into legal discourse is accompanied by greater emphasis on empirical research in specific settings. This work has three objectives. The first is to analyze the different conceptions of rationality and whether they should keep their privileged position in the social sciences. The second is to better understand the behavioral research literature that questions the empirical validity of the axioms of Rational Choice Theory. The third is to identify problems in the way behavioral research has been incorporated into legal discourse.
Abstract:
There is a degree of uncertainty that is inherent to judicial activity and cannot be mitigated, owing to the very nature of judgments about legal norms. Judicial decisions are not, and cannot be, absolutely predictable. There is, however, a degree of uncertainty that is avoidable and should be avoided, as it is harmful to the health of a legal system. Other researchers in Brazil have worked with this notion, and the formulation of the concepts of structural uncertainty and pathological uncertainty by Joaquim Falcão, Luís Fernando Schuartz and Diego Arguelhes was very successful. We believe, however, that the authors' conception of pathological uncertainty needs to be reformulated, especially so that it can be verified from elements of the judicial decision itself rather than only from sociological and psychological elements. We propose a conception of pathological uncertainty grounded in the quality of the reasoning of judicial decisions, and we conclude that cultivating a culture of precedents is necessary in Brazil to mitigate the harmful effects of pathological uncertainty.
Abstract:
Master's dissertation - Escola de Direito de São Paulo da Fundação Getulio Vargas.
Abstract:
Latin America has recently experienced three cycles of capital inflows, the first two ending in major financial crises. The first took place between 1973 and the 1982 ‘debt-crisis’. The second took place between the 1989 ‘Brady bonds’ agreement (and the beginning of the economic reforms and financial liberalisation that followed) and the Argentinian 2001/2002 crisis, and ended up with four major crises (as well as the 1997 one in East Asia) — Mexico (1994), Brazil (1999), and two in Argentina (1995 and 2001/2). Finally, the third inflow-cycle began in 2003 as soon as international financial markets felt reassured by the surprisingly neo-liberal orientation of President Lula’s government; this cycle intensified in 2004 with the beginning of a (purely speculative) commodity price-boom, and actually strengthened after a brief interlude following the 2008 global financial crash — and at the time of writing (mid-2011) this cycle is still unfolding, although already showing considerable signs of distress. The main aim of this paper is to analyse the financial crises resulting from this second cycle (both in LA and in East Asia) from the perspective of Keynesian/ Minskyian/ Kindlebergian financial economics. I will attempt to show that no matter how diversely these newly financially liberalised Developing Countries tried to deal with the absorption problem created by the subsequent surges of inflow (and they did follow different routes), they invariably ended up in a major crisis. As a result (and despite the insistence of mainstream analysis), these financial crises took place mostly due to factors that were intrinsic (or inherent) to the workings of over-liquid and under-regulated financial markets — and as such, they were both fully deserved and fairly predictable. Furthermore, these crises point not just to major market failures, but to a systemic market failure: evidence suggests that these crises were the spontaneous outcome of actions by utility-maximising agents, freely operating in friendly (‘light-touch’) regulated, over-liquid financial markets. That is, these crises are clear examples that financial markets can be driven by buyers who take little notice of underlying values — i.e., by investors who have incentives to interpret information in a biased fashion in a systematic way. Thus, ‘fat tails’ also occurred because under these circumstances there is a high likelihood of self-made disastrous events. In other words, markets are not always right — indeed, in the case of financial markets they can be seriously wrong as a whole. Also, as the recent collapse of ‘MF Global’ indicates, the capacity of ‘utility-maximising’ agents operating in (excessively) ‘friendly-regulated’ and over-liquid financial market to learn from previous mistakes seems rather limited.