801 results for Computational Thinking
Abstract:
Information and Communication Technologies (ICT) are presented as the key enabler of more efficient and sustainable city resource management, while ensuring that citizens' needs for a better quality of life are met. A key element will be the creation of new systems that acquire context information automatically and transparently in order to feed it to decision support systems. In this paper, we present a novel distributed system for obtaining, representing and providing the flow and movement of people in densely populated geographical areas. To accomplish these tasks, we propose the design of a smart sensor network based on RFID communication technologies, reliability patterns and integration techniques. Unlike other proposals, this system is a comprehensive solution that permits the acquisition of user information in a transparent and reliable way in an uncontrolled, heterogeneous environment. This knowledge will be useful in moving towards the design of smart cities in which decision support on transport strategies, business evaluation or initiatives in the tourism sector is backed by real, relevant information. Finally, a case study is presented to validate the proposal.
Abstract:
This paper presents an approach to the belief system based on a computational framework with three levels: first, the logic level, with the definition of binary local rules; second, the arithmetic level, with the definition of recursive functions; and finally, the behavioural level, with the definition of a recursive construction pattern. Social communication is achieved when different beliefs are expressed, modified, propagated and shared through social nets. This approach is useful for mimicking the belief system because the defined functions provide different ways to process the same incoming information, as well as a means to propagate it. Our model also provides a means to cross different beliefs, so any incoming information can be processed many times by the same or different functions, as occurs in social nets.
Abstract:
Azomethine ylides, generated from imine-derived O-cinnamyl or O-crotonyl salicylaldehydes and α-amino acids, undergo intramolecular 1,3-dipolar cycloaddition, leading to chromeno[4,3-b]pyrrolidines. Two reaction conditions are used: (a) microwave-assisted heating (200 W, 185 °C) of a neat mixture of reagents, and (b) conventional heating (170 °C) in PEG-400 as solvent. In both cases, a mixture of two epimers at the α-position of the nitrogen atom in the pyrrolidine nucleus was formed through the less energetic endo-approach (B/C ring fusion). In many cases, the stereoisomer bearing a trans-arrangement at the B/C ring fusion was observed in high proportions. Comprehensive computational and kinetic simulation studies are detailed. An analysis of the stability of the transient 1,3-dipoles, followed by an assessment of the intramolecular pathways and kinetics, is also reported.
Abstract:
The sustainability strategy in urban spaces arises from reflecting on how to achieve a more habitable city and is materialized in a series of sustainable transformations aimed at humanizing different environments so that they can be used and enjoyed by everyone without exception and regardless of their ability. Modern communication technologies allow new opportunities to analyze efficiency in the use of urban spaces from several points of view: adequacy of facilities, usability, and social integration capabilities. The research presented in this paper proposes a method to perform an analysis of movement accessibility in sustainable cities based on radio frequency technologies and the ubiquitous computing possibilities of the new Internet of Things paradigm. The proposal can be deployed in both indoor and outdoor environments to check specific locations of a city. Finally, a case study in a controlled context has been simulated to validate the proposal as a pre-deployment step in urban environments.
Abstract:
The Iterative Closest Point (ICP) algorithm is commonly used in engineering applications to solve the rigid registration problem of partially overlapped point sets that are pre-aligned with a coarse estimate of their relative positions. This iterative algorithm is applied in many areas, such as medicine for volumetric reconstruction of tomography data, robotics to reconstruct surfaces or scenes using range sensor information, industrial systems for quality control of manufactured objects, or even biology to study the structure and folding of proteins. One of the algorithm's main problems is its high computational complexity (quadratic in the number of points for the non-optimized original variant) in a context where high-density point sets, acquired by high-resolution scanners, are processed. Many variants have been proposed in the literature whose goal is to improve performance, either by reducing the number of points or the required iterations, or by lowering the cost of the most expensive phase: the closest-neighbor search. In spite of decreasing its complexity, some of these variants tend to have a negative impact on the final registration precision or on the convergence domain, thus limiting the possible application scenarios. The goal of this work is to improve the algorithm's computational cost so that a wider range of computationally demanding problems among those described above can be addressed. For that purpose, an experimental and mathematical convergence analysis and validation of point-to-point distance metrics has been performed, taking into account distances with lower computational cost than the Euclidean one, which is the de facto standard for the algorithm's implementations in the literature.
In that analysis, the behavior of the algorithm in diverse topological spaces, characterized by different metrics, has been studied to check the convergence, efficacy and cost of the method and to determine which metric offers the best results. Given that the distance calculation represents a significant part of the computations performed by the algorithm, any reduction in the cost of that operation is expected to have a significant positive effect on the overall performance of the method. As a result, a performance improvement has been achieved by applying these reduced-cost metrics, whose quality in terms of convergence and error has been analyzed and experimentally validated as comparable to the Euclidean distance on a heterogeneous set of objects, scenarios and initial situations.
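The trade-off described in this abstract can be sketched with a minimal comparison of point-to-point metrics inside a brute-force closest-neighbor search. This is an illustrative sketch, not the thesis's own code: the function names and sample points are assumptions, and only the well-known L1 (Manhattan) and L∞ (Chebyshev) norms are shown as cheaper alternatives to the Euclidean distance.

```python
import math

def euclidean(p, q):
    # L2 norm: requires a square root per pair, the costliest of the three.
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def manhattan(p, q):
    # L1 norm: additions and absolute values only, no square root.
    return sum(abs(a - b) for a, b in zip(p, q))

def chebyshev(p, q):
    # L-infinity norm: a single max over coordinate differences.
    return max(abs(a - b) for a, b in zip(p, q))

def closest_point(p, cloud, metric):
    # Brute-force nearest-neighbor search, the most expensive ICP phase.
    return min(cloud, key=lambda q: metric(p, q))

cloud = [(0.0, 0.0, 0.0), (1.0, 1.0, 0.0), (2.0, 0.5, 1.0)]
print(closest_point((0.9, 1.1, 0.1), cloud, manhattan))  # (1.0, 1.0, 0.0)
```

Because the search only compares distances, any metric that preserves the ordering of neighbors for the data at hand can replace the Euclidean distance in this inner loop, which is where the abstract's reported speed-up comes from.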
Abstract:
In this work, a modified version of the elastic bunch graph matching (EBGM) algorithm for face recognition is introduced. First, faces are detected using a fuzzy skin detector based on the RGB color space. Then, the fiducial points for the facial graph are extracted automatically by adjusting a grid of points to the output of an edge detector. After that, the position of the nodes, their relation to their neighbors and their Gabor jets are calculated in order to obtain the feature vector defining each face. A self-organizing map (SOM) framework is then presented. The calculation of the winning neuron and the recognition process are performed using a similarity function that takes into account both the geometric and the texture information of the facial graph. The set of experiments carried out for our SOM-EBGM method shows the accuracy of our proposal when compared with other state-of-the-art methods.
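A similarity function of the kind this abstract describes, combining Gabor-jet texture similarity with node geometry, can be sketched as follows. The weighting scheme, function names and data layout are illustrative assumptions, not the paper's actual formulation; the jet comparison shown is the standard cosine similarity of jet magnitudes.

```python
import math

def jet_similarity(j1, j2):
    # Normalized dot product (cosine similarity) of two Gabor-jet
    # magnitude vectors; 1.0 means identical texture response.
    dot = sum(a * b for a, b in zip(j1, j2))
    n1 = math.sqrt(sum(a * a for a in j1))
    n2 = math.sqrt(sum(b * b for b in j2))
    return dot / (n1 * n2)

def graph_similarity(graph1, graph2, w=0.5):
    # graph*: list of (position, jet) pairs, one per fiducial node.
    # Texture term rewards matching jets; geometric term penalizes
    # node displacement. The weight w is a hypothetical parameter.
    tex = sum(jet_similarity(j1, j2) for (_, j1), (_, j2) in zip(graph1, graph2))
    geo = sum(math.dist(p1, p2) for (p1, _), (p2, _) in zip(graph1, graph2))
    n = len(graph1)
    return tex / n - w * geo / n
```

In a SOM setting, the winning neuron would simply be the stored facial graph maximizing this score against the probe graph.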
Abstract:
One folio-sized leaf containing a handwritten essay responding to an unidentified opponent's claims that "thinking is essential to the soul." The response begins with the introduction, "In the consideration of this question, I shall only examine one or two of the most material objects of our antagonist." The verso is inscribed: "2d Forensic. not read."
Abstract:
Folio-sized leaf containing a handwritten disputation arguing that "the mind is active in thinking." The essay begins, "Since I am obliged by academical institution to engage in a dispute..."
Abstract:
Hypertrophic cardiomyopathy (HCM) is a cardiovascular disease where the heart muscle is partially thickened and blood flow is - potentially fatally - obstructed. It is one of the leading causes of sudden cardiac death in young people. Electrocardiography (ECG) and Echocardiography (Echo) are the standard tests for identifying HCM and other cardiac abnormalities. The American Heart Association has recommended using a pre-participation questionnaire for young athletes instead of ECG or Echo tests due to considerations of cost and time involved in interpreting the results of these tests by an expert cardiologist. Initially we set out to develop a classifier for automated prediction of young athletes’ heart conditions based on the answers to the questionnaire. Classification results and further in-depth analysis using computational and statistical methods indicated significant shortcomings of the questionnaire in predicting cardiac abnormalities. Automated methods for analyzing ECG signals can help reduce cost and save time in the pre-participation screening process by detecting HCM and other cardiac abnormalities. Therefore, the main goal of this dissertation work is to identify HCM through computational analysis of 12-lead ECG. ECG signals recorded on one or two leads have been analyzed in the past for classifying individual heartbeats into different types of arrhythmia as annotated primarily in the MIT-BIH database. In contrast, we classify complete sequences of 12-lead ECGs to assign patients into two groups: HCM vs. non-HCM. The challenges and issues we address include missing ECG waves in one or more leads and the dimensionality of a large feature-set. We address these by proposing imputation and feature-selection methods. We develop heartbeat-classifiers by employing Random Forests and Support Vector Machines, and propose a method to classify full 12-lead ECGs based on the proportion of heartbeats classified as HCM. 
The results from our experiments show that the classifiers developed using our methods perform well in identifying HCM. Thus the two contributions of this thesis are the utilization of computational and statistical methods for discovering shortcomings in a current screening procedure and the development of methods to identify HCM through computational analysis of 12-lead ECG signals.
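The record-level decision rule this abstract describes, labelling a full 12-lead ECG from the proportion of its heartbeats classified as HCM, can be sketched as below. The threshold value and the pre-labelled beat vector are illustrative assumptions standing in for the dissertation's tuned classifier outputs.

```python
def classify_record(beat_labels, threshold=0.5):
    """Label a full ECG record from per-heartbeat predictions.

    beat_labels: per-heartbeat predictions, 1 = HCM, 0 = non-HCM
    (in the thesis these would come from a Random Forest or SVM).
    Returns the record label and the HCM proportion.
    """
    if not beat_labels:
        raise ValueError("record contains no classified heartbeats")
    proportion = sum(beat_labels) / len(beat_labels)
    label = "HCM" if proportion > threshold else "non-HCM"
    return label, proportion

# A record where 7 of 10 beats were flagged as HCM:
label, p = classify_record([1, 1, 1, 0, 1, 0, 1, 1, 0, 1])
print(label, p)  # HCM 0.7
```

Aggregating per-beat decisions this way makes the record-level prediction robust to a few misclassified heartbeats or missing waves in individual leads.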
Abstract:
Stroke is a leading cause of death and permanent disability worldwide, affecting millions of individuals. Traditional clinical scores for assessment of stroke-related impairments are inherently subjective and limited by inter-rater and intra-rater reliability, as well as floor and ceiling effects. In contrast, robotic technologies provide objective, highly repeatable tools for quantification of neurological impairments following stroke. KINARM is an exoskeleton robotic device that provides objective, reliable tools for assessment of sensorimotor, proprioceptive and cognitive brain function by means of a battery of behavioral tasks. As such, KINARM is particularly useful for assessment of neurological impairments following stroke. This thesis introduces a computational framework for assessment of neurological impairments using the data provided by KINARM. This is done by achieving two main objectives. The first is to investigate how robotic measurements can be used to estimate current and future abilities to perform daily activities for subjects with stroke. We are able to predict clinical scores related to activities of daily living at present and future time points using a set of robotic biomarkers. The findings of this analysis provide a proof of principle that robotic evaluation can be an effective tool for clinical decision support and target-based rehabilitation therapy. The second main objective of this thesis is to address the emerging problem of long assessment time, which can potentially lead to fatigue when assessing subjects with stroke. To address this issue, we examine two time-reduction strategies. The first strategy focuses on task selection, whereby KINARM tasks are arranged in a hierarchical structure so that an earlier task in the assessment procedure can be used to decide whether or not subsequent tasks should be performed. The second strategy focuses on time reduction for the two longest individual KINARM tasks.
Both reduction strategies are shown to provide significant time savings, ranging from 30% to 90% using task selection and 50% using individual task reductions, thereby establishing a framework for reduction of assessment time on a broader set of KINARM tasks. All in all, findings of this thesis establish an improved platform for diagnosis and prognosis of stroke using robot-based biomarkers.
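The task-selection strategy described above can be sketched as an early-exit loop over an ordered battery. The task names, score scale and cut-off below are hypothetical placeholders, not KINARM's actual tasks or the thesis's decision rule.

```python
def run_assessment(tasks, cutoff=0.35):
    """Run an ordered battery of tasks with hierarchical gating.

    tasks: ordered list of (name, run_fn); run_fn returns an
    impairment score in [0, 1]. Remaining tasks are skipped once a
    task scores below the cutoff, saving assessment time when an
    earlier task already indicates no relevant impairment.
    """
    results = {}
    for name, run in tasks:
        score = run()
        results[name] = score
        if score < cutoff:
            break  # downstream tasks deemed unnecessary
    return results

# Hypothetical battery: the first task gates the second.
battery = [("visually-guided-reaching", lambda: 0.2),
           ("position-matching", lambda: 0.6)]
print(run_assessment(battery))  # only the first task runs
```

The time saving reported in the thesis comes from exactly this kind of pruning: the hierarchy is chosen so that cheap, informative tasks run first.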
Abstract:
My thesis thinks through the ways Newtonian logics require linear mobility in order to produce narratives of progress. I argue that this linear mobility, and the resulting logics, potentially erase the chaotic and non-linear motions that are required to navigate a colonial landscape. I suggest that these non-linear movements produce important critiques of the seeming stasis of colonial constructs and highlight the ways these logics must appear neutral and scientific in an attempt to conceal the constant and complex adjustments these frameworks require. In order to make room for these complex motions, I develop a quantum intervention. Specifically, I use quantum physics as a metaphor to think through the significance of black life, the double-consciousness of land, and the intricate motions of sound. In order to put forth this intervention, I look at news coverage of Hurricane Katrina, Du Bois's characterization of land in The Souls of Black Folk, and the aural mobilities of blackness articulated in an academic discussion and interview about post-humanism.
Abstract:
In addition to the euro crisis, the EU faces a second, more existential crisis, in the form of an ill-defined notion of the Union's global role. This contribution argues that the euro crisis should not redefine perceptions of the EU on the global stage, which it is in danger of doing. Instead, the EU and its members should embark upon a strategic reassessment in order to define three core interrelated factors. First, the nature of the EU's actorness remains ill-defined and it is therefore necessary to explain, both within and beyond the Union, what its global role is. Second, in order to facilitate the joining up of the myriad of sub-strategies in EU external relations, the notion of 'red lines' should be considered which define specific aspects of behaviour that are mainstreamed throughout the EU's external actions and, more importantly, upheld. Third, in spite of the rapid development of the harder elements of the EU's actorness over the last decade or so, there remains a worrying gap between rhetoric and reality. This aspect is of particular concern for the United States and will affect perceptions of the EU's ability to be a genuine strategic partner at a time of dramatic change in the international system. By engaging in what will inevitably be a difficult debate, the EU and its members will not only help give purpose and strategic direction to the Union's actions on the international scene, but will also speak to the euro crisis, since both are fundamentally about the future shape and direction of European integration.
Energy and climate - What is the new European Commission thinking? EPC Commentary, 30 September 2014
Abstract:
A visible change of priorities and restructuring of portfolios in the new European Commission have raised questions about the policy implications, especially for climate and energy policy. On the one hand, the new structure, with Vice-Presidents as team leaders for groups of Commissioners, could encourage much-needed coordination between policy areas such as climate and energy. On the other hand, there are questions over what this could mean for political priorities, to what extent the Vice-Presidents will be able to guide policy-making, and how responsibilities will be divided. Whatever the structure of the Commission, it is in the EU's interest to ensure that its climate and energy policies form a framework for action that helps to reduce global emissions, fight climate change locally and globally, secure energy supplies, promote wider socio-economic interests and increase competitiveness – all at the same time.