949 results for Time constraints


Relevance:

70.00%

Publisher:

Abstract:

Full text: The title of the book gives us a major clue to the innovative approach developed by Anne Freadman in her analysis of a particular Colette corpus, the one devoted to autobiographical writing: Les Vrilles de la vigne, Mes apprentissages, La Maison de Claudine, Sido, L'Étoile Vesper and Le Fanal bleu. Freadman follows the powerful lure of the Rimbaldian vieilles vieilleries and its echoes with Colette's fondness for collecting objects, people and memories. To this must be added a technical aspect, that of the study of the genre of Colette's writing. Freadman argues that, by largely avoiding the autobiographical form, the writer achieves a new way of 'telling time', collecting anecdotes and detail taken from the quotidian and setting them within an all-encompassing preoccupation with time. This provides the second part of the title. The sonata form directs the sequence of the book, orchestrated into five parts, from 'exposition' to 'first subject' to 'bridge' to 'second subject' to 'recapitulation'. This has the advantage of enabling Freadman to move and progress between distinct themes—autobiography first, then alternative forms—with grace, whilst preserving within her own writing what she sees as the essence of Colette's relationship to time in her 'Livres-Souvenirs', the telling of time. This 'telling of time' is itself therefore cleverly subjected to the time constraints and freedoms of musical composition. Freadman's 'Exposition' takes us through a discussion of the autobiographical genre, analysing the texts against a number of theorists, from Lejeune to Benjamin and Ricoeur, before launching into 'Colette and Autobiography'. It argues pertinently that Colette did not write a 'sustained' autobiography, even in the most autobiographical of her writings, Mes apprentissages. Measured against Goodwin's three sources for autobiography, confession, apologia and memoirs, Colette's autobiographical writings appear to be at odds with all of them.
Freadman then goes on, in Part II of her argument, to persuasively uncover a project that rejects self-scrutiny and has no autobiographical strategy. In 'Collecting Time', despite claims of continuity, narrative logic and causality are abandoned in favour of a collection of fragments, family stories that are built up generation after generation into family legends. A close and fruitful analysis of Sido leads us to a study of 'The Art of Ending', concentrating on L'Étoile Vesper and Le Fanal bleu. The closing chapter gives a fascinating reading of La Naissance du jour as an exemplar of the way in which the two subjects developed in Freadman's volume are cast together: Colette's own working through of the autobiographical genre, and her refusal to write memoirs in favour of collecting memories, and the strategies she uses for her purpose. In 'Recapitulation', her concluding chapter, Freadman adroitly encapsulates her analysis in a fetching title: 'Fables of Time'. Indeed, the whole premise of her book is to move away from the autobiographical genre, having acknowledged the links and debt the corpus owes to it, and into a study of the multiple and fruitful ways in which Colette tells time. The rich and varied readings of the material, competently informed by theoretical input, together with acute sensitivity to the corpus, mark out this study as incontournable for Colette scholars.

Relevance:

60.00%

Publisher:

Abstract:

Background. The importance of general practice involvement in the care of attention-deficit/hyperactivity disorder (ADHD) is increasing due to the rising numbers of patients who present with the disorder. It has been suggested by consensus bodies that GPs should be identifying and referring patients at the severe end of the ADHD spectrum and managing those with less severe symptoms. However, GPs' views of their role in ADHD care are unknown. Objective. Our aim was to explore the attitudes and practices of Australian GPs towards the diagnosis and management of ADHD. Methods. We conducted a series of focus groups to explore GPs' beliefs regarding the causes of ADHD, their perceived role in ADHD diagnosis and management and their views on the role of behaviour therapies and pharmacotherapies in ADHD management. The subjects were 28 GPs in six focus groups. Results. GPs in this study did not want to be the primary providers of care for patients with ADHD. Participants indicated a preference to refer the patient to medical specialists for diagnosis and treatment of ADHD, and expressed low levels of interest in becoming highly involved in ADHD care. Concerns about overdiagnosis and misdiagnosis of the disorder, diagnostic complexity, time constraints, insufficient education and training about the disorder, and concerns regarding misuse and diversion of stimulant medications were the reasons cited for their lack of willingness. Conclusions. The Australian GPs in this study identify a role for themselves in ADHD care which is largely supportive in nature, and involves close liaison with specialist services.

Relevance:

60.00%

Publisher:

Abstract:

Variables influencing decision-making in real settings, as in the case of voting decisions, are uncontrollable and often even unknown to the experimenter. In this case, the experimenter has to study the intention to decide (vote) as close as possible in time to the moment of the real decision (election day). Here, we investigated the brain activity associated with the voting intention declared 1 week before the election day of the Brazilian Firearms Control Referendum on prohibiting the commerce of firearms. Two alliances arose in the Congress to run the campaigns for the YES (for the prohibition of firearm commerce) and NO (against the prohibition of firearm commerce) votes. Time constraints imposed by the necessity of studying a reasonable number (here, 32) of voters during a very short time (5 days) made EEG the tool of choice for recording the brain activity associated with the voting decision. Recent fMRI and EEG studies have shown decision-making to be a process due to the enrollment of defined neuronal networks. In this work, a special EEG technique is applied to study the topology of the voting decision-making networks and is compared to the results of standard ERP procedures. The results show that voting decision-making enrolled networks in charge of calculating the benefits and risks of the decision to prohibit or allow firearm commerce, and that the topology of such networks was vote- (i.e., YES/NO-) sensitive.

Relevance:

60.00%

Publisher:

Abstract:

Laparoscopy is a surgical procedure in which operations in the abdomen are performed through small incisions using several specialized instruments. The success of laparoscopic surgery greatly depends on surgeon skill and training. To achieve these high technical standards, different apprenticeship methods have been developed, many based on in vivo training, an approach that involves high costs and complex setup procedures. This paper explores Virtual Reality (VR) simulation as an alternative for training novice surgeons. Even though several simulators are available on the market claiming successful training experiences, their use is extremely limited due to the economic costs involved. In this work, we present a low-cost laparoscopy simulator able to monitor and assist the trainee's surgical movements. The developed prototype consists of a set of inexpensive sensors, namely an accelerometer, a gyroscope, a magnetometer and a flex sensor, attached to specific laparoscopic instruments. Our approach allows repeated assisted training of an exercise, without time constraints or additional costs, since no artificial human model is needed. A case study of our simulator applied to instrument manipulation practice (hand-eye coordination) is also presented.
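The prototype described above combines inertial readings to track instrument movement. As a minimal sketch of how such sensors can be fused (not the authors' actual implementation; the filter gain, sampling period and sensor values are assumed for illustration), a complementary filter blends the gyroscope's pitch rate with the tilt implied by the accelerometer's gravity vector:

```python
import math

def complementary_filter(pitch_prev, accel, gyro_rate, dt, alpha=0.98):
    """Blend a gyroscope integral with an accelerometer tilt estimate.

    accel: (ax, ay, az) in g; gyro_rate: pitch rate in deg/s.
    alpha weights the smooth but drift-prone gyro term.
    """
    ax, ay, az = accel
    # Tilt angle implied by the gravity direction (degrees).
    accel_pitch = math.degrees(math.atan2(ax, math.sqrt(ay**2 + az**2)))
    # Gyro-integrated estimate, continuously corrected toward the accelerometer.
    return alpha * (pitch_prev + gyro_rate * dt) + (1 - alpha) * accel_pitch

# A static instrument held 30 degrees up: gyro reads ~0, accel encodes the tilt,
# so the estimate stays locked near 30 degrees instead of drifting.
pitch = 30.0
for _ in range(100):
    pitch = complementary_filter(pitch, (0.5, 0.0, 0.866), 0.0, 0.01)
```

The magnetometer and flex sensor mentioned in the abstract would supply heading and grasp angle in the same fashion.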

Relevance:

60.00%

Publisher:

Abstract:

In this work I set out to build a Real-Time Data Acquisition System via the Parallel Port. To achieve this goal, a literature review on real-time operating systems was carried out, highlighting and illustrating the most important milestones in their evolution. This review made it possible to understand the proliferation of these systems in view of the costs they involve, depending on their application, as well as the scientific and technological difficulties that researchers faced and successfully overcame. For Linux to behave as a real-time system, it must be configured and patched, for example with RTAI or ADEOS. Since several kinds of solutions bring real-time characteristics to Linux, a study, supported by examples, was carried out on the kernel architectures most commonly used for this purpose. Real-time operating systems provide certain services, features and restrictions that distinguish them from general-purpose operating systems. Bearing in mind the aim of this work, and supported by examples, we conducted a short study describing, among other things, the operation of the scheduler and the concepts of latency and response time. We show that there are only two types of real-time systems: 'hard', with rigid timing constraints, and 'soft', which encompasses firm and soft timing constraints. Tasks were classified according to the types of events that trigger them, highlighting their main characteristics. The real-time system chosen to build the parallel-port data acquisition system was RTAI/Linux. To better understand its behaviour, we studied the services and functions of RTAI.
Special attention was given to the inter-task and inter-process communication services (shared memory and FIFOs), to the scheduling services (types of schedulers and tasks) and to interrupt handling (interrupt service routine - ISR). The study of these services informed the choices made regarding the communication method between tasks and services, as well as the type of task to use (sporadic or periodic). Since in this work the physical communication medium between the external environment and the hardware is the parallel port, we also needed to understand how this interface works, namely the parallel port's configuration registers. It was thus possible to configure it at the hardware (BIOS) and software (kernel module) levels in line with the objectives of this work, optimizing the use of the parallel port, in particular by increasing the number of bits available for reading data. In developing the hard real-time task, the considerations mentioned above were taken into account. A sporadic task was developed, since the intention was to read data from the parallel port only when necessary (on interrupt), that is, when data were available to read. We also developed an application to visualize the data collected via the parallel port. Communication between the task and the application is ensured through shared memory: provided data consistency is guaranteed, communication between Linux processes and the real-time (RTAI) tasks running at kernel level becomes very simple. To evaluate the performance of the developed system, a soft real-time task was created whose response times were compared with those of the hard real-time task.
The response times obtained with the logic analyser, together with graphs produced from these data, show and confirm the benefits of the real-time parallel-port data acquisition system using a hard real-time task.
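The shared-memory exchange between the acquisition task and the viewer application described above can be illustrated, by analogy, in plain Python (the thesis uses RTAI kernel-level shared memory and a kernel task, not Python processes; the one-slot buffer layout here is invented for illustration):

```python
# Analogy for the RTAI shared-memory scheme: a writer ("acquisition task")
# and a reader ("viewer application") exchange the latest sample through
# one shared buffer, with no copying between them.
from multiprocessing import shared_memory
import struct

buf = shared_memory.SharedMemory(create=True, size=8)
try:
    # Acquisition side: store the most recent sample (a parallel-port-style byte).
    sample = 0xA5
    struct.pack_into("<Q", buf.buf, 0, sample)

    # Viewer side: read the most recent sample from the same buffer.
    latest = struct.unpack_from("<Q", buf.buf, 0)[0]
    print(hex(latest))   # 0xa5
finally:
    buf.close()
    buf.unlink()
```

In the real system the writer runs at kernel level on an interrupt, which is why shared memory (rather than copying through a pipe) keeps the user-space side simple.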

Relevance:

60.00%

Publisher:

Abstract:

International Conference with Peer Review 2012 IEEE International Conference in Geoscience and Remote Sensing Symposium (IGARSS), 22-27 July 2012, Munich, Germany

Relevance:

60.00%

Publisher:

Abstract:

Many-core platforms based on Network-on-Chip (NoC [Benini and De Micheli 2002]) present an emerging technology in the real-time embedded domain. Although the idea of grouping the applications previously executed on separate single-core devices and accommodating them on an individual many-core chip offers various options for power savings and cost reductions, and contributes to overall system flexibility, its implementation is a non-trivial task. In this paper we address the issue of application mapping onto a NoC-based many-core platform when considering the fundamentals and trends of current many-core operating systems; specifically, we elaborate on a limited migrative application model encompassing a message-passing paradigm as a communication primitive. As the main contribution, we formulate the problem of real-time application mapping and propose a three-stage process to solve it efficiently. Through analysis it is assured that derived solutions guarantee the fulfilment of posed time constraints regarding worst-case communication latencies, and at the same time provide an environment to perform load balancing for, e.g., thermal, energy, fault-tolerance or performance reasons. We also propose several constraints regarding the topological structure of the application mapping, as well as the inter- and intra-application communication patterns, which efficiently resolve the issues of pessimism and/or intractability when performing the analysis.

Relevance:

60.00%

Publisher:

Abstract:

When the Internet was born, the purpose was to interconnect computers to share digital data at large scale. On the other hand, when embedded systems were born, the objective was to control system components under real-time constraints through sensing devices, typically at small to medium scales. With the great evolution of Information and Communication Technology (ICT), the tendency is to enable ubiquitous and pervasive computing to control everything (physical processes and physical objects), anytime and at large scale. This new vision recently gave rise to the paradigm of Cyber-Physical Systems (CPS). In this position paper, we provide a realistic vision of the concept of the Cyber-Physical Internet (CPI), discuss its design requirements and present the limitations of current networking abstractions in fulfilling these requirements. We also debate whether it is more productive to adopt a system-integration approach or a radical design approach for building large-scale CPS. Finally, we present a sample of real-time challenges that must be considered in the design of the Cyber-Physical Internet.

Relevance:

60.00%

Publisher:

Abstract:

In future power systems, under the smart grid and microgrid operation paradigms, consumers can be seen as an energy resource with decentralized and autonomous decisions in energy management. It is expected that each consumer will manage not only loads, but also small generation units, heating systems, storage systems, and electric vehicles. Each consumer can participate in different demand response events promoted by system operators or aggregation entities. This paper proposes an innovative method to manage the appliances in a house during a demand response event. The main contribution of this work is to include time constraints in resource management, and context evaluation in order to ensure the required comfort levels. The dynamic resource management methodology allows better management of resources in a demand response event, especially events of long duration, by changing the priorities of loads during the event. A case study with two scenarios is presented, considering one demand response event of 30 min duration and another of 240 min (4 h). In both simulations, the demand response event requires a reduction in power consumption during the event. A total of 18 loads are used, including real and virtual ones, controlled by the presented house management system.
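The core idea of shedding loads by priority during a demand response event can be sketched as follows; the load names, priorities and reduction target are hypothetical, not taken from the paper's 18-load case study, and the paper's method additionally re-evaluates priorities over the course of long events:

```python
# Hypothetical priority-based curtailment sketch for a demand response event.
def curtail(loads, target_kw):
    """Switch off lowest-priority loads until target_kw of reduction is met.

    loads: list of (name, power_kw, priority); higher priority = keep longer.
    Returns the names of the curtailed loads, in shedding order.
    """
    shed, curtailed = 0.0, []
    for name, power, _prio in sorted(loads, key=lambda l: l[2]):
        if shed >= target_kw:
            break
        shed += power
        curtailed.append(name)
    return curtailed

loads = [("fridge", 0.3, 9), ("hvac", 2.0, 5),
         ("water_heater", 1.5, 3), ("pool_pump", 1.0, 1)]
print(curtail(loads, 2.0))   # ['pool_pump', 'water_heater']
```

Re-running this selection with updated priorities partway through a long event is what lets comfort-critical loads (e.g. heating in winter) be restored while others take over the reduction.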

Relevance:

60.00%

Publisher:

Abstract:

Energy consumption is one of the major issues for modern embedded systems. Early power-saving approaches mainly focused on dynamic power dissipation, while neglecting static (leakage) energy consumption. However, technology improvements have resulted in a situation where static power dissipation increasingly dominates. Addressing this issue, hardware vendors have equipped modern processors with several sleep states. We propose a set of leakage-aware energy management approaches that reduce the energy consumption of embedded real-time systems while respecting the real-time constraints. Our algorithms are based on the race-to-halt strategy, which tends to run the system at top speed with the aim of creating long idle intervals, which are used to deploy a sleep state. The effectiveness of our algorithms is illustrated with an extensive set of simulations that show up to an 8% reduction in energy consumption over existing work at high utilization. The complexity of our algorithms is smaller when compared to state-of-the-art algorithms. We also eliminate assumptions made in related work that restrict the practical application of the respective algorithms. Moreover, a novel study of the relation between the use of sleep intervals and the number of pre-emptions is presented, based on a large set of simulation results, in which our algorithms reduce the experienced number of pre-emptions in all cases. Our results show that sleep states in general can save up to 30% of the overall number of pre-emptions when compared to the sleep-agnostic earliest-deadline-first algorithm.
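The sleep-state selection step underlying race-to-halt can be sketched as follows. The state table (idle power, break-even time) is made up for illustration; the paper's actual algorithms also account for task deadlines and wake-up latency guarantees:

```python
# Race-to-halt sketch: after running at top speed to create an idle interval,
# pick the deepest sleep state whose break-even time fits that interval.
SLEEP_STATES = [  # (name, idle power in W, break-even time in ms) - illustrative
    ("active-idle", 1.00, 0.0),
    ("light-sleep", 0.30, 2.0),
    ("deep-sleep",  0.05, 15.0),
]

def pick_sleep_state(idle_ms):
    """Lowest-power state whose entry/exit overhead pays off within idle_ms."""
    best = SLEEP_STATES[0]
    for state in SLEEP_STATES:
        if state[2] <= idle_ms and state[1] < best[1]:
            best = state
    return best[0]

print(pick_sleep_state(1.0))    # interval too short for any sleep state
print(pick_sleep_state(20.0))   # long enough to amortize deep sleep
```

This is also why long consolidated idle intervals reduce pre-emptions: work is batched before the sleep window instead of being interleaved with it.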

Relevance:

60.00%

Publisher:

Abstract:

The application of compressive sensing (CS) to hyperspectral images has been an active area of research over the past few years, both in terms of hardware and of signal processing algorithms. However, CS algorithms can be computationally very expensive due to the extremely large volumes of data collected by imaging spectrometers, a fact that compromises their use in applications under real-time constraints. This paper proposes four efficient implementations of hyperspectral coded aperture (HYCA) for CS, two of them termed P-HYCA and P-HYCA-FAST, and two additional implementations for its constrained version (CHYCA), termed P-CHYCA and P-CHYCA-FAST, on commodity graphics processing units (GPUs). The HYCA algorithm exploits the high correlation existing among the spectral bands of hyperspectral data sets and the generally low number of endmembers needed to explain the data, which largely reduces the number of measurements necessary to correctly reconstruct the original data. The proposed P-HYCA and P-CHYCA implementations have been developed using the compute unified device architecture (CUDA) and the cuFFT library. Moreover, this library has been replaced by a fast iterative method in the P-HYCA-FAST and P-CHYCA-FAST implementations, which leads to very significant speedup factors that help meet real-time requirements. The proposed algorithms are evaluated not only in terms of reconstruction error for different compression ratios but also in terms of computational performance using two different GPU architectures by NVIDIA: 1) GeForce GTX 590; and 2) GeForce GTX TITAN. Experiments conducted using both simulated and real data reveal considerable acceleration factors and good results in the task of compressing remotely sensed hyperspectral data sets.
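As a toy illustration of the compressive-sensing premise the abstract relies on, that a signal with few active components can be recovered from far fewer measurements than samples, consider a sparse vector observed through random projections. This is generic CS, not the HYCA algorithm; the dimensions, support and values are arbitrary, and recovery here assumes the support is known (real CS solvers must also find it):

```python
# Toy compressive-sensing demo: a 3-sparse signal of length 200 is recovered
# exactly from only 40 random measurements once the support is known.
import numpy as np

rng = np.random.default_rng(0)
n, m, support = 200, 40, [10, 50, 120]   # signal length, measurements, sparsity
x = np.zeros(n)
x[support] = [1.0, -2.0, 0.5]            # the few active components

A = rng.standard_normal((m, n)) / np.sqrt(m)   # random measurement matrix
y = A @ x                                      # m measurements instead of n

# Least-squares recovery restricted to the (known) support columns.
x_hat = np.zeros(n)
x_hat[support], *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
print(round(float(np.max(np.abs(x_hat - x))), 6))   # 0.0
```

In the hyperspectral case, the low number of endmembers plays the role of sparsity, which is what lets HYCA cut the measurement count so aggressively.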

Relevance:

60.00%

Publisher:

Abstract:

The smart grid concept is a key issue in future power systems, namely at the distribution level, with deep implications for the operation and planning of these systems. Several advantages and benefits for both the technical and economic operation of the power system and of the electricity markets are recognized. The increasing integration of demand response and distributed generation resources, most of them with small-scale distributed characteristics, leads to the need for aggregating entities such as Virtual Power Players. Operation business models become more complex in the context of smart grid operation. Computational intelligence methods can be used to provide a suitable solution for the resource scheduling problem while respecting the time constraints. This paper proposes a methodology for the joint dispatch of demand response and distributed generation to provide energy and reserve by a virtual power player that operates a distribution network. The optimal schedule minimizes the operation costs and is obtained using a particle swarm optimization approach, which is compared with a deterministic approach used as the reference methodology. The proposed method is applied to a 33-bus distribution network with 32 medium-voltage consumers and 66 distributed generation units.

Relevance:

60.00%

Publisher:

Abstract:

PhD thesis, Doctoral Programme in Electronic and Computer Engineering.

Relevance:

60.00%

Publisher:

Abstract:

Inbreeding avoidance is predicted to induce sex biases in dispersal. But which sex should disperse? In polygynous species, females pay higher costs for inbreeding and thus might be expected to disperse more, but empirical evidence consistently reveals male biases. Here, we show that theoretical expectations change drastically if females are allowed to avoid inbreeding via kin recognition. At high inbreeding loads, females should prefer immigrants over residents, thereby boosting male dispersal. At lower inbreeding loads, by contrast, inclusive fitness benefits should induce females to prefer relatives, thereby promoting male philopatry. This result points to disruptive effects of sexual selection. The inbreeding load that females are ready to accept is surprisingly high. In the absence of search costs, females should prefer related partners as long as δ < r/(1+r), where r is relatedness and δ is the fecundity loss relative to an outbred mating. This amounts to fitness losses of up to one-fifth for a half-sib mating and one-third for a full-sib mating, which lie in the upper range of inbreeding depression values currently reported in natural populations. The observation of active inbreeding avoidance in a polygynous species thus suggests that inbreeding depression exceeds this threshold in the species under scrutiny, or that inbred matings at least partly forfeit other mating opportunities for males. Our model also shows that female choosiness should decline rapidly with search costs, stemming from, for example, reproductive delays. Species under strong time constraints on reproduction should thus be tolerant of inbreeding.
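The one-fifth and one-third figures follow directly from the stated condition δ < r/(1+r); a quick check with half-sib (r = 1/4) and full-sib (r = 1/2) relatedness reproduces them:

```python
# Maximum tolerable fecundity loss delta for a given relatedness r,
# from the condition delta < r / (1 + r) quoted in the abstract.
from fractions import Fraction

def inbreeding_threshold(r):
    """Fecundity loss above which a related mate should be rejected."""
    return r / (1 + r)

half_sib = inbreeding_threshold(Fraction(1, 4))   # Fraction(1, 5), i.e. one-fifth
full_sib = inbreeding_threshold(Fraction(1, 2))   # Fraction(1, 3), i.e. one-third
print(half_sib, full_sib)
```

Exact fractions are used so the thresholds come out as 1/5 and 1/3 rather than rounded decimals.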

Relevance:

60.00%

Publisher:

Abstract:

Fission-track and (40)Ar/(39)Ar ages place time constraints on the exhumation of the North Himalayan nappe stack, the Indus Suture Zone and Molasse, and the Transhimalayan Batholith in eastern Ladakh (NW India). Results from this and previous studies on a north-south transect passing near Tso Morari Lake suggest that the SW-directed North Himalayan nappe stack (comprising the Mata, Tetraogal and Tso Morari nappes) was emplaced and metamorphosed by c. 50-45 Ma, and exhumed to moderately shallow depths (c. 10 km) by c. 45-40 Ma. From the mid-Eocene to the present, exhumation continued at a steady and slow rate except for the root zone of the Tso Morari nappe, which cooled faster than the rest of the nappe stack. Rapid cooling occurred at c. 20 Ma and is linked to brittle deformation along the normal Ribil-Zildat Fault concomitant with extrusion of the Crystalline nappe in the south. Data from the Indus Molasse suggest that sediments were still being deposited during the Miocene.