838 results for end-to-side
Abstract:
The performance of multiuser dual-hop relaying over mixed radio frequency/free-space optical (RF/FSO) links is investigated. RF links are used for the simultaneous data transmission from m single-antenna sources to the relay, which is equipped with n ≥ m receive antennas and a photo-aperture transmitter. The relay operates under the decode-and-forward protocol and utilizes the popular ordered V-BLAST technique to successively decode each user's transmitted stream. A common norm-based ordering approach is adopted, where the streams are decoded in ascending order. After the V-BLAST decoding, the relay retransmits the initial information to the destination, which is equipped with a photo-detector, via a point-to-point FSO link in m consecutive timeslots. Analytical expressions for the end-to-end outage probability and average symbol error probability of each user are derived. Several engineering insights are provided, such as the diversity order, the impact of pointing error displacement on the FSO link, and the severity of the turbulence-induced channel fading.
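As a concrete illustration of the norm-based ordered V-BLAST decoding described above, the following Python sketch performs successive interference cancellation, detecting streams in ascending order of their channel-column norms. The zero-forcing nulling filter, the QPSK demodulator and the test setup are illustrative assumptions, not the paper's system model.

```python
import numpy as np

def ordered_vblast_decode(H, y, demod):
    """Norm-based ordered successive interference cancellation (sketch):
    streams are detected in ascending order of their channel-column norms;
    each detected symbol is cancelled from y before the next detection."""
    H, y = H.astype(complex), y.astype(complex).copy()
    m = H.shape[1]
    order = np.argsort(np.linalg.norm(H, axis=0))     # ascending column norms
    active = list(range(m))
    symbols = np.zeros(m, dtype=complex)
    for k in order:
        W = np.linalg.pinv(H[:, active])              # zero-forcing nulling (illustrative choice)
        z = W[active.index(k)] @ y                    # soft estimate of stream k
        symbols[k] = demod(z)                         # hard decision
        y -= H[:, k] * symbols[k]                     # cancel its contribution
        active.remove(k)
    return symbols

# Hypothetical test: m = 2 users, n = 4 receive antennas, QPSK signalling
rng = np.random.default_rng(0)
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
demod = lambda z: qpsk[np.argmin(np.abs(qpsk - z))]
H = (rng.standard_normal((4, 2)) + 1j * rng.standard_normal((4, 2))) / np.sqrt(2)
x = qpsk[rng.integers(0, 4, size=2)]
y = H @ x + 0.05 * (rng.standard_normal(4) + 1j * rng.standard_normal(4))
print(ordered_vblast_decode(H, y, demod), x)
```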
Abstract:
Encryption of personal data is widely regarded as a privacy-preserving technology which could potentially play a key role in bringing innovative IT technology into compliance with the European data protection law framework. Therefore, in this paper, we examine the new EU General Data Protection Regulation’s relevant provisions regarding encryption – such as those for anonymisation and pseudonymisation – and assess whether encryption can serve as an anonymisation technique, which can lead to the non-applicability of the GDPR. However, the provisions of the GDPR regarding the material scope of the Regulation still leave room for legal uncertainty when determining whether a data subject is identifiable or not. We therefore assess, inter alia, the Opinion of the Advocate General of the European Court of Justice (ECJ) regarding a preliminary ruling on whether a dynamic IP address can be considered personal data, which may settle the dispute over whether an absolute or a relative approach has to be used to assess the identifiability of data subjects. Furthermore, we outline the issue of whether the anonymisation process itself constitutes a further processing of personal data which needs to have a legal basis in the GDPR. Finally, we give an overview of relevant encryption techniques and examine their impact upon the GDPR’s material scope.
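As a small, hedged illustration of the pseudonymisation techniques discussed in this context, the sketch below replaces a direct identifier with a keyed hash (HMAC-SHA256). The function name and setup are illustrative; whether such output remains personal data under the GDPR is precisely the legal question examined above.

```python
import hmac, hashlib, secrets

def pseudonymise(identifier: str, key: bytes) -> str:
    """Replace a direct identifier with a keyed hash (HMAC-SHA256).
    Whoever holds `key` can re-link the pseudonym to the original value,
    which is why such data is typically treated as pseudonymised rather
    than anonymised."""
    return hmac.new(key, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

key = secrets.token_bytes(32)          # kept separately by the data controller
print(pseudonymise("user@example.com", key))
```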
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
Aim. We report a case of ulnar and palmar arch artery aneurysm in a 77-year-old man with no history of occupational or recreational trauma, vasculitis, infection or congenital anatomic abnormalities. We also performed a computerized literature search in PubMed using the keywords “ulnar artery aneurysm” and “palmar arch aneurysm”. Case report. A 77-year-old male patient was admitted to hospital with a pulsating mass at the distal right ulnar artery and deep palmar arch; ultrasound and CT examination detected a 35 mm saccular aneurysm of the right ulnar artery and a 15 mm dilatation of the deep palmar arch. He was asymptomatic for distal embolization and pain. Under local anesthesia, the ulnar artery and deep palmar arch dilatations were resected. Reconstruction of the vessels was performed with an end-to-end microvascular repair. Histological examination confirmed the absence of vasculitis and collagenopathies. In the postoperative period there were no clinical signs of peripheral ischemia, and Allen’s test and ultrasound examination were normal. At six months of follow-up, the patient was still asymptomatic, with a normal Allen test, no signs of distal digital ischemia and patency of the treated vessel with normal flow on duplex ultrasound. Conclusion. True spontaneous aneurysms of the ulnar artery and palmar arch are rare and can be successfully treated with resection and microvascular reconstruction.
Abstract:
Introduction. Acute intestinal obstruction in pregnancy is a rare but life-threatening complication associated with high fetal and maternal mortality. Case report. A 20-year-old gravida presented with a 24-hour history of several episodes of vomiting, complete constipation and severe crampy abdominal pain. The patient was admitted with a diagnosis of acute abdomen associated with septic shock. On examination, echography showed distended intestinal loops and the presence of free peritoneal fluid. Abdominal X-ray with shielding of the fetus revealed colonic air-fluid levels. The obstetric consultation diagnosed a dead fetus in utero, and it was decided to operate immediately. At laparotomy, a complete cecal volvulus was found, with gangrene of the cecum, part of the ascending colon and the terminal ileum. A right hemicolectomy was performed with a side-to-side ileotransverse anastomosis. Afterwards, a lower-segment cesarean section was performed and a stillborn fetus was delivered. The patient made an uneventful recovery and was discharged on the 9th postoperative day. Conclusion. Cecal volvulus during pregnancy is a rare but serious surgical problem. Correct diagnosis may be difficult until exploratory laparotomy is performed. Undue delay in diagnosis and surgical treatment can increase maternal and fetal mortality.
Abstract:
This paper describes the aspects considered most relevant of the life and work of the extraordinary nineteenth-century scientist and humanist Louis Pasteur. His work is also placed, at several points, in the context of that of other scientists of his time, framing the studies he carried out within earlier discoveries. The magnificent studies he conducted in crystallography (which contributed greatly to modern stereochemistry) and in fermentation, as a mechanism used by certain microorganisms to produce energy in the absence of oxygen, a fact entirely unprecedented at the time, are discussed. It is explained how Pasteur, in a most ingenious way, managed to put an end to the old theory of spontaneous generation. The paper recounts how the brilliant idea of pasteurization arose, a term coined in homage to the great scientist, which went on to transform the entire wine and beer industries and those of many other foods, establishing the importance of microbiology in the food industry. Pasteur's studies on infectious diseases (pébrine, fowl cholera, anthrax and rabies) are also addressed, including the spectacular procedures that led to the development of the first vaccines, which taught younger scientists how to produce others, saving so many lives.
Abstract:
In today’s big data world, data is being produced in massive volumes, at great velocity and from a variety of different sources such as mobile devices, sensors, a plethora of small devices hooked to the internet (Internet of Things), social networks, communication networks and many others. Interactive querying and large-scale analytics are being increasingly used to derive value out of this big data. A large portion of this data is being stored and processed in the Cloud due to the several advantages provided by the Cloud such as scalability, elasticity, availability, low cost of ownership and the overall economies of scale. There is thus a growing need for large-scale cloud-based data management systems that can support real-time ingest, storage and processing of large volumes of heterogeneous data. However, in the pay-as-you-go Cloud environment, the cost of analytics can grow linearly with the time and resources required. Reducing the cost of data analytics in the Cloud thus remains a primary challenge. In my dissertation research, I have focused on building efficient and cost-effective cloud-based data management systems for different application domains that are predominant in cloud computing environments. In the first part of my dissertation, I address the problem of reducing the cost of transactional workloads on relational databases to support database-as-a-service in the Cloud. The primary challenges in supporting such workloads include choosing how to partition the data across a large number of machines, minimizing the number of distributed transactions, providing high data availability, and tolerating failures gracefully. I have designed, built and evaluated SWORD, an end-to-end scalable online transaction processing system that utilizes workload-aware data placement and replication to minimize the number of distributed transactions, and that incorporates a suite of novel techniques to significantly reduce the overheads incurred both during the initial placement of data and during query execution at runtime. In the second part of my dissertation, I focus on sampling-based progressive analytics as a means to reduce the cost of data analytics in the relational domain. Sampling has been traditionally used by data scientists to get progressive answers to complex analytical tasks over large volumes of data. Typically, this involves manually extracting samples of increasing data size (progressive samples) for exploratory querying. This provides the data scientists with user control, repeatable semantics, and result provenance. However, such solutions result in tedious workflows that preclude the reuse of work across samples. On the other hand, existing approximate query processing systems report early results, but do not offer the above benefits for complex ad-hoc queries. I propose a new progressive data-parallel computation framework, NOW!, which provides support for progressive analytics over big data. In particular, NOW! enables progressive relational (SQL) query support in the Cloud using unique progress semantics that allow efficient and deterministic query processing over samples, providing meaningful early results and provenance to data scientists. NOW! enables the provision of early results using significantly fewer resources, thereby enabling a substantial reduction in the cost incurred during such analytics. Finally, I propose NSCALE, a system for efficient and cost-effective complex analytics on large-scale graph-structured data in the Cloud.
The system is based on the key observation that a wide range of complex analysis tasks over graph data require processing and reasoning about a large number of multi-hop neighborhoods or subgraphs in the graph; examples include ego network analysis, motif counting in biological networks, finding social circles in social networks, personalized recommendations, link prediction, etc. These tasks are not well served by existing vertex-centric graph processing frameworks whose computation and execution models limit the user program to directly access the state of a single vertex, resulting in high execution overheads. Further, the lack of support for extracting the relevant portions of the graph that are of interest to an analysis task and loading them into distributed memory leads to poor scalability. NSCALE allows users to write programs at the level of neighborhoods or subgraphs rather than at the level of vertices, and to declaratively specify the subgraphs of interest. It enables the efficient distributed execution of these neighborhood-centric complex analysis tasks over large-scale graphs, while minimizing resource consumption and communication cost, thereby substantially reducing the overall cost of graph data analytics in the Cloud. The results of our extensive experimental evaluation of these prototypes with several real-world data sets and applications validate the effectiveness of our techniques, which provide orders-of-magnitude reductions in the overheads of distributed data querying and analysis in the Cloud.
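The following minimal Python sketch illustrates the sampling-based progressive analytics idea described above: the same aggregate is evaluated over progressively larger, nested samples so that approximate answers arrive early and work is reused across samples. The function, column name and sample fractions are illustrative assumptions, not the NOW! implementation.

```python
import random

def progressive_mean(records, key, fractions=(0.01, 0.1, 0.5, 1.0), seed=42):
    """Evaluate the same aggregate over progressively larger samples,
    yielding (sample_fraction, estimate) pairs so that early, approximate
    answers are available long before the full data has been scanned."""
    rng = random.Random(seed)
    shuffled = records[:]            # fixed shuffle => samples are nested,
    rng.shuffle(shuffled)            # so later estimates reuse earlier rows
    n = len(shuffled)
    for frac in fractions:
        k = max(1, int(frac * n))
        estimate = sum(r[key] for r in shuffled[:k]) / k
        yield frac, estimate

# Hypothetical usage on synthetic rows
rows = [{"latency_ms": random.gauss(100, 15)} for _ in range(100_000)]
for frac, est in progressive_mean(rows, "latency_ms"):
    print(f"{frac:>5.0%}  mean ~ {est:.2f}")
```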
Abstract:
We report a case of a 24-year-old woman who delivered via cesarean section at 39 weeks and presented in the puerperium with symptoms of worsening abdominal pain and septicaemia. Preoperative ultrasonography suggested the presence of a pelvic collection. Explorative laparotomy revealed the simultaneous presence of Meckel's diverticulitis and appendicitis without bowel perforation. The patient made an uneventful recovery following small bowel resection with end-to-end reanastomosis and appendicectomy.
Abstract:
While news stories are an important traditional medium to broadcast and consume news, microblogging has recently emerged as a place where people can discuss, disseminate, collect or report information about news. However, the massive information in the microblogosphere makes it hard for readers to keep up with these real-time updates. This is especially a problem when it comes to breaking news, where people are more eager to know “what is happening”. Therefore, this dissertation is intended as an exploratory effort to investigate computational methods to augment human effort when monitoring the development of breaking news on a given topic from a microblog stream by extractively summarizing the updates in a timely manner. More specifically, given an interest in a topic, either entered as a query or presented as an initial news report, a microblog temporal summarization system is proposed to filter microblog posts from a stream with three primary concerns: topical relevance, novelty, and salience. Considering the relatively high arrival rate of microblog streams, a cascade framework consisting of three stages is proposed to progressively reduce the quantity of posts. For each step in the cascade, this dissertation studies methods that improve over current baselines. In the relevance filtering stage, query and document expansion techniques are applied to mitigate sparsity and vocabulary mismatch issues. The use of word embeddings as a basis for filtering is also explored, using unsupervised and supervised modeling to characterize lexical and semantic similarity. In the novelty filtering stage, several statistical ways of characterizing novelty are investigated and ensemble learning techniques are used to integrate results from these diverse techniques. These results are compared with a baseline clustering approach using both standard and delay-discounted measures. In the salience filtering stage, because of the real-time prediction requirement, a method of learning verb phrase usage from past relevant news reports is used in conjunction with some standard measures for characterizing writing quality. Following a Cranfield-like evaluation paradigm, this dissertation includes a series of experiments to evaluate the proposed methods for each step, and for the end-to-end system. New microblog novelty and salience judgments are created, building on existing relevance judgments from the TREC Microblog track. The results point to future research directions at the intersection of social media, computational journalism, information retrieval, automatic summarization, and machine learning.
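A minimal sketch of the three-stage cascade described above is given below: posts flow through relevance, novelty and salience filters, and each stage only sees what survived the previous one. The scoring functions and thresholds are crude placeholders for the dissertation's expansion-, embedding- and learning-based models, not its actual components.

```python
def cascade_filter(posts, query_terms, seen_texts,
                   rel_thresh=0.2, nov_thresh=0.6, sal_thresh=0.5):
    """Progressively reduce the stream: relevance -> novelty -> salience.
    Each stage only processes posts that survived the previous stage,
    which keeps per-post cost low at high arrival rates."""
    def relevance(words):                        # crude term overlap, standing in
        return len(words & query_terms) / max(len(query_terms), 1)

    def novelty(words):                          # 1 - max Jaccard overlap with
        overlaps = [len(words & s) / len(words | s)  # previously emitted updates
                    for s in seen_texts if words | s]
        return 1.0 - max(overlaps, default=0.0)

    def salience(post):                          # placeholder for writing-quality
        return min(len(post.split()) / 20.0, 1.0)  # and verb-usage features

    for post in posts:
        words = set(post.lower().split())
        if relevance(words) < rel_thresh:
            continue
        if novelty(words) < nov_thresh:
            continue
        if salience(post) < sal_thresh:
            continue
        seen_texts.append(words)
        yield post

# Hypothetical usage: the near-duplicate third post is filtered as redundant
query = {"earthquake", "chile", "tsunami"}
stream = ["Massive earthquake hits central Chile, tsunami warning issued for the coast",
          "lol good morning everyone",
          "Tsunami warning issued for the Chile coast after massive earthquake"]
for update in cascade_filter(stream, query, seen_texts=[]):
    print(update)
```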
Abstract:
With the increasing complexity of today's software, the software development process is becoming highly time and resource consuming. The increasing number of software configurations, input parameters, usage scenarios, supporting platforms, external dependencies, and versions plays an important role in expanding the costs of maintaining and repairing unforeseeable software faults. To repair software faults, developers spend considerable time in identifying the scenarios leading to those faults and root-causing the problems. While software debugging remains largely manual, this is not the case with software testing and verification. The goal of this research is to improve the software development process in general, and the software debugging process in particular, by devising techniques and methods for automated software debugging, which leverage the advances in automatic test case generation and replay. In this research, novel algorithms are devised to discover faulty execution paths in programs by utilizing already existing software test cases, which can be either automatically or manually generated. The execution traces, or alternatively, the sequence covers of the failing test cases are extracted. Afterwards, commonalities between these test case sequence covers are extracted, processed, analyzed, and then presented to the developers in the form of subsequences that may be causing the fault. The hypothesis is that code sequences that are shared between a number of faulty test cases for the same reason resemble the faulty execution path, and hence, the search space for the faulty execution path can be narrowed down by using a large number of test cases. To achieve this goal, an efficient algorithm is implemented for finding common subsequences among a set of code sequence covers. Optimization techniques are devised to generate shorter and more logical sequence covers, and to select subsequences with high likelihood of containing the root cause among the set of all possible common subsequences. A hybrid static/dynamic analysis approach is designed to trace back the common subsequences from the end to the root cause. A debugging tool is created to enable developers to use the approach, and integrate it with an existing Integrated Development Environment. The tool is also integrated with the environment's program editors so that developers can benefit from both the tool's suggestions and their source code counterparts. Finally, a comparison between the developed approach and the state-of-the-art techniques shows that developers need only to inspect a small number of lines in order to find the root cause of the fault. Furthermore, experimental evaluation shows that the algorithm optimizations lead to better results in terms of both the algorithm running time and the output subsequence length.
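The core idea of mining what all failing executions share can be sketched as follows, here reduced to a pairwise longest-common-subsequence fold over the failing tests' sequence covers. The function names and the traces are illustrative; the dissertation's algorithm and its optimizations are more elaborate than this reduction.

```python
from functools import reduce

def lcs(a, b):
    """Longest common subsequence of two code-element sequences (dynamic
    programming); a stand-in for the more elaborate common-subsequence
    mining described in the abstract."""
    m, n = len(a), len(b)
    dp = [[()] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            if a[i] == b[j]:
                dp[i + 1][j + 1] = dp[i][j] + (a[i],)
            else:
                dp[i + 1][j + 1] = max(dp[i][j + 1], dp[i + 1][j], key=len)
    return list(dp[m][n])

def suspicious_subsequence(failing_covers):
    """Fold LCS over all failing-test sequence covers: what every failing
    execution shares is a candidate for the faulty execution path."""
    return reduce(lcs, failing_covers)

# Hypothetical method-level sequence covers of three failing tests
covers = [
    ["init", "parse", "validate", "save", "close"],
    ["init", "load", "parse", "validate", "close"],
    ["init", "parse", "retry", "validate", "close"],
]
print(suspicious_subsequence(covers))   # ['init', 'parse', 'validate', 'close']
```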
Abstract:
Magnetically-induced forces on the inertial masses on board LISA Pathfinder are expected to be one of the dominant contributions to the mission noise budget, accounting for up to 40%. The origin of this disturbance is the coupling of the residual magnetization and susceptibility of the test masses with the environmental magnetic field. In order to fully understand this important part of the noise model, a set of coils and magnetometers is integrated as part of the diagnostics subsystem. During operations, a sequence of magnetic excitations will be applied to precisely determine the coupling of the magnetic environment to the test mass displacement using the on-board magnetometers. Since no direct measurement of the magnetic field at the test mass position will be available, an extrapolation of the magnetic measurements to the test mass position will be carried out as part of the data analysis activities. In this paper we show the first results of the magnetic experiments during an end-to-end LISA Pathfinder simulation, and we describe the methods under development to map the magnetic field on board.
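One simple way to carry out such an extrapolation is sketched below: a uniform field plus a constant (unconstrained) gradient is fitted to the magnetometer readings by least squares and then evaluated at the test-mass position. The geometry and field values are invented for illustration, and the actual LPF data-analysis methods are more sophisticated than this first-order fit.

```python
import numpy as np

def extrapolate_field(positions, readings, target):
    """Fit B(r) ~ B0 + G (r - r_mean) to the magnetometer readings by
    least squares (first-order Taylor expansion, unconstrained gradient)
    and evaluate the fit at the target (test-mass) position."""
    positions = np.asarray(positions, float)
    readings = np.asarray(readings, float)
    r0 = positions.mean(axis=0)
    rows, rhs = [], []                 # unknowns: 3 entries of B0 + 9 of G
    for r, b in zip(positions, readings):
        d = r - r0
        for i in range(3):
            row = np.zeros(12)
            row[i] = 1.0                       # coefficient of B0[i]
            row[3 + 3 * i: 6 + 3 * i] = d      # coefficients of G[i, :]
            rows.append(row)
            rhs.append(b[i])
    theta, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    B0, G = theta[:3], theta[3:].reshape(3, 3)
    return B0 + G @ (np.asarray(target, float) - r0)

# Invented geometry: four magnetometers around a test mass at the origin
pos = [[0.2, 0, 0], [-0.2, 0, 0], [0, 0.2, 0], [0, 0, 0.2]]
vals = [[1.0e-6, 0, 0], [0.8e-6, 0, 0], [0.9e-6, 0.1e-6, 0], [0.9e-6, 0, 0.1e-6]]
print(extrapolate_field(pos, vals, target=[0, 0, 0]))
```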
Abstract:
Thermal Diagnostics experiments to be carried out on board LISA Pathfinder (LPF) will yield a detailed characterisation of how temperature fluctuations affect the performance of the LTP (LISA Technology Package) instrument, crucial information for future space-based gravitational wave detectors such as the proposed eLISA. Amongst them, the study of temperature gradient fluctuations around the test masses of the Inertial Sensors will also provide information regarding the contribution of the Brownian noise, which is expected to limit the LTP sensitivity at frequencies close to 1 mHz during some LTP experiments. In this paper we report on how this kind of Thermal Diagnostics experiment was simulated in the last LPF Simulation Campaign (November 2013), which involved the whole LPF Data Analysis team and used an end-to-end simulator of the whole spacecraft. The simulation campaign was conducted as part of the preparation for LPF operations.
Abstract:
The purpose of this paper is to survey and assess the state-of-the-art in automatic target recognition for synthetic aperture radar imagery (SAR-ATR). The aim is not to develop an exhaustive survey of the voluminous literature, but rather to capture in one place the various approaches for implementing the SAR-ATR system. This paper is meant to be as self-contained as possible, and it approaches the SAR-ATR problem from a holistic end-to-end perspective. A brief overview of the breadth of the SAR-ATR challenges is provided. This is couched in terms of a single-channel SAR, and it is extendable to multi-channel SAR systems. Stages pertinent to the basic SAR-ATR system structure are defined, and the motivations of the requirements and constraints on the system constituents are addressed. For each stage in the SAR-ATR processing chain, a taxonomization methodology for surveying the numerous methods published in the open literature is proposed. Carefully selected works from the literature are presented under the proposed taxa. Novel comparisons, discussions, and comments are pinpointed throughout this paper. A two-fold benchmarking scheme for evaluating existing SAR-ATR systems and motivating new system designs is proposed. The scheme is applied to the works surveyed in this paper. Finally, a discussion is presented in which various interrelated issues, such as standard operating conditions, extended operating conditions, and target-model design, are addressed. This paper is a contribution toward fulfilling the objective of end-to-end SAR-ATR system design.
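As a structural illustration only, the skeleton below strings together the staged processing chain commonly described in the SAR-ATR literature (prescreening/detection, discrimination, classification); the abstract does not enumerate the stages, so this decomposition and the placeholder functions are assumptions, not the survey's taxonomy.

```python
from typing import Callable, Iterable, List

def sar_atr_pipeline(images: Iterable,
                     detect: Callable,
                     discriminate: Callable,
                     classify: Callable) -> List:
    """Skeleton of a staged ATR chain: each stage prunes candidates so the
    next, more expensive stage runs on fewer regions of interest."""
    declarations = []
    for image in images:
        rois = detect(image)                                   # prescreening / detection
        targets = [roi for roi in rois if discriminate(roi)]   # clutter rejection
        declarations.extend(classify(roi) for roi in targets)  # class / type decision
    return declarations

# Toy placeholders standing in for real detectors and classifiers
images = [[0.1, 0.9, 0.2, 0.8]]                   # pretend pixel amplitudes
detect = lambda img: [v for v in img if v > 0.5]  # bright returns as "ROIs"
discriminate = lambda roi: roi > 0.7              # keep only strong candidates
classify = lambda roi: ("target", roi)
print(sar_atr_pipeline(images, detect, discriminate, classify))
```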
Abstract:
The last couple of decades have been the stage for the introduction of new telecommunication networks. It is expected that in the future all types of vehicles, such as cars, buses and trucks, will have the ability to intercommunicate and form a vehicular network. Vehicular networks display particularities when compared to other networks due to their continuous node mobility and their wide geographical dispersion, leading to permanent network fragmentation. Therefore, the main challenges that this type of network entails relate to the intermittent connectivity and the long and variable delay in information delivery. To address the problems related to intermittent connectivity, a new concept was introduced: the Delay Tolerant Network (DTN). This architecture is built on a Store-Carry-and-Forward (SCF) mechanism in order to assure the delivery of information when no end-to-end path is defined. Vehicular networks support a multiplicity of services, including the transportation of non-urgent information. Therefore, it is possible to conclude that the use of a DTN for the dissemination of non-urgent information can overcome the aforementioned challenges. The work developed focused on the use of DTNs for the dissemination of non-urgent information. This information originates with the network service provider and should be available on mobile network terminals during a limited period of time. To this end, four different strategies were deployed: Random, Least Number of Hops First (LNHF), Local Rarest Bundle First (LRBF) and Local Rarest Generation First (LRGF). All of these strategies have a common goal: to disseminate content into the network in the shortest period of time while minimizing network congestion. This work also contemplates the analysis and implementation of techniques that reduce network congestion. The design, implementation and validation of the proposed strategies were divided into three stages. The first stage focused on creating a Matlab emulator for fast implementation and strategy validation. This stage resulted in the four strategies that were afterwards implemented in the DTN software Helix, developed in a partnership between Instituto de Telecomunicações (IT) and Veniam, which are responsible for the largest operating vehicular network worldwide, located in the city of Oporto. The strategies were later evaluated on an emulator built for the large-scale testing of DTNs. Both emulators account for vehicular mobility based on information previously collected from the real platform. Finally, the strategy that presented the best overall performance was tested on a real platform, in a lab environment, for concept and operability demonstration. It is possible to conclude that two of the implemented strategies (LRBF and LRGF) can be deployed in the real network and guarantee a significant delivery rate. The LRBF strategy has the best performance in terms of delivery; however, it needs to add significant overhead to the network in order to work. In the future, scalability tests should be conducted in a real environment in order to confirm the emulator results. The real implementation of the strategies should be accompanied by the introduction of new types of services for content distribution.
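A minimal sketch of the selection rule suggested by the Local Rarest Bundle First strategy is shown below: at each contact, forward the bundle the peer is missing that appears in the fewest neighbour summary vectors, so rare content spreads before common content. The data structures and tie-breaking are illustrative assumptions, not the Helix implementation.

```python
from collections import Counter
from typing import Dict, Optional, Set

def pick_local_rarest_bundle(own_bundles: Set[str],
                             neighbour_summaries: Dict[str, Set[str]],
                             peer: str) -> Optional[str]:
    """Local Rarest Bundle First (sketch): among the bundles we carry that
    the peer is missing, choose the one seen in the fewest neighbour
    summary vectors."""
    counts = Counter()
    for bundles in neighbour_summaries.values():
        counts.update(bundles)
    candidates = own_bundles - neighbour_summaries.get(peer, set())
    if not candidates:
        return None
    return min(candidates, key=lambda b: (counts[b], b))   # rarest first, id breaks ties

# Hypothetical contact: we meet node "v2" and must pick one bundle to send
own = {"b1", "b2", "b3"}
summaries = {"v2": {"b1"}, "v3": {"b1", "b2"}, "v4": {"b1"}}
print(pick_local_rarest_bundle(own, summaries, "v2"))   # "b3" is locally rarest
```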