825 results for: interrogation to decide whether person appropriate party to proceeding


Relevance: 100.00%

Abstract:

In January 2014, for the first time in its history, the German Federal Constitutional Court submitted several questions to the European Court of Justice (ECJ) in Luxembourg and asked for a preliminary ruling. The questions had arisen within the framework of the OMT case, and the issue was whether or not the OMT ("outright monetary transactions") programme announced by Mario Draghi, the head of the European Central Bank (ECB), is in compliance with the law of the European Union. The OMT programme (which has become well known because Draghi said "whatever it takes to preserve the euro" when he unveiled it) plays an important role in the stabilization of the euro area. It means that the European System of Central Banks will be empowered to engage in unlimited buying of government bonds issued by certain Member States if and as long as these Member States are simultaneously taking part in a European rescue or reform programme (under the EFSF or the ESM). The OMT programme has not yet been implemented; nonetheless, a suit contesting its legality was filed with the Federal Constitutional Court. The European Court of Justice now had to decide whether or not the activities of the ECB were in compliance with European law. However, the ECJ had to take into account the prior assessment of the Federal Constitutional Court. In its submission the Federal Constitutional Court made it quite clear that it was of the opinion that there had been a violation of European law. At the same time, however, it did not exclude the possibility that the ECJ might set legal conditions for OMT in order to avoid a violation of European law.

Relevance: 100.00%

Abstract:

At the 18 March EU-Turkey Migration Summit EU leaders pledged to lift visa requirements for Turkish citizens travelling to the Schengen zone by the end of June 2016 if Ankara met the required 72 benchmarks. On 4 May the European Commission will decide whether or not Turkey has done enough. The stakes are high because Turkey has threatened to cancel the readmission agreement, which is central to the success of the migration deal, if the EU fails to deliver.

Relevance: 100.00%

Abstract:

Research on sensory processing (the way animals see, hear, smell, taste, feel, and electrically and magnetically sense their environment) has advanced a great deal over the last fifteen years. This book discusses the most important themes that have emerged from recent research and provides a summary of likely future directions. The book starts with two sections on the detection of sensory signals over long and short ranges by aquatic animals, covering the topics of navigation, communication, and finding food and other localized sources. The next section, on the co-evolution of signal and sense, deals with how animals decide whether a source is prey, predator or mate by utilizing receptors that have evolved to take full advantage of the acoustical properties of the signal. Organisms living in the deep-sea environment have also received much recent attention, so the following section deals with visual adaptations to limited-light environments, where sunlight is replaced by bioluminescence and the visual system has undergone changes to optimize light capture and sensitivity. The last section, on central co-ordination of sensory systems, covers how signals are processed and filtered for use by the animal. This book will be essential reading for all researchers and graduate students interested in sensory systems.

Relevance: 100.00%

Abstract:

This research studies the questions of what is considered safe, how safety levels are defined or decided, and according to whom. Tolerable or acceptable risk questions raise various issues: about the values and assumptions inherent in such levels; about decision-making frameworks at the highest level of policy making as well as at the individual level; and about the suitability and competency of decision-makers to decide and to communicate their decisions. The wide-ranging philosophical and practical concerns examined in the literature review reveal the multidisciplinary scope of this research. To support this theoretical study, empirical research was undertaken at the European Space Research and Technology Centre (ESTEC) of the European Space Agency (ESA). ESTEC is a large, multinational, high-technology organisation which presented an ideal case study for exploring how decisions about safety are made from a personal as well as an organisational perspective. A qualitative methodology was employed to gather, analyse and report the findings of this research. Significant findings reveal how experts perceive risks, and show the prevalence of informal decision-making processes, partly due to the inadequacy of formal methods for deciding risk tolerability. In the field of occupational health and safety, this research has highlighted the importance of, and need for, criteria to decide whether a risk is great enough to warrant attention in setting standards and priorities for risk control and resources. From a wider perspective, and with the recognition that risk is an inherent part of life, the establishment of risk tolerability levels can be viewed as cornerstones indicating our progress, expectations and values, of life and work, in an increasingly litigious, knowledgeable and global society.

Relevance: 100.00%

Abstract:

With the advent of GPS-enabled smartphones, an increasing number of users are actively sharing their location through a variety of applications and services. Along with the continuing growth of Location-Based Social Networks (LBSNs), security experts have increasingly warned the public of the dangers of exposing sensitive information such as personal location data. Most importantly, in addition to the geographical coordinates of the user's location, LBSNs allow easy access to an additional set of characteristics of that location, such as the venue type or popularity. In this paper, we investigate the role of location semantics in the identification of LBSN users. We simulate a scenario in which the attacker's goal is to reveal the identity of a set of LBSN users by observing their check-in activity. We then propose to answer the following question: what are the types of venues that a malicious user has to monitor to maximize the probability of success? Conversely, when should a user decide whether or not to make his/her check-in to a location public? We perform our study on more than 1 million check-ins distributed over 17 urban regions of the United States. Our analysis shows that different types of venues display different discriminative power in terms of user identity, with most of the venues in the "Residence" category providing the highest re-identification success across the urban regions. Interestingly, we also find that users with a high entropy of their check-in distribution are not necessarily the hardest to identify, suggesting that it is the collective behaviour of the user population that determines the complexity of the identification task, rather than individual behaviour.
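The two quantities the analysis turns on, the entropy of a user's check-in distribution and the size of the candidate set that an observed venue leaves an attacker, can be sketched as follows. The user names, venue labels, and data below are invented for illustration and are not from the paper's dataset.

```python
from collections import Counter
import math

def checkin_entropy(venues):
    """Shannon entropy (bits) of a user's check-in distribution over venues."""
    counts = Counter(venues)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def candidates(observed_venues, users):
    """Users whose check-in history contains every observed venue.
    The fewer candidates an observation leaves, the more discriminative it is."""
    return [u for u, vs in users.items() if set(observed_venues) <= set(vs)]

users = {
    "alice": ["home_A", "office_X", "gym_Y"],
    "bob":   ["home_B", "office_X", "gym_Y"],
}
# A "Residence" venue narrows the candidate set to a single user...
print(candidates(["home_A"], users))  # -> ['alice']
# ...while a venue type shared by many users leaves the attacker ambiguity.
print(candidates(["gym_Y"], users))   # -> ['alice', 'bob']
```

This mirrors the paper's finding qualitatively: venues visited by few distinct users (homes) shrink the candidate set fastest, while a user's own check-in entropy need not predict how identifiable they are.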

Relevance: 100.00%

Abstract:

The purpose of this study was to assess the knowledge of public school administrators with respect to special education (ESE) law. The study used a sample of 220 public school administrators. A survey instrument was developed consisting of 19 demographic questions and 20 situational scenarios. The scenarios were based on ESE issues of discipline, due process (including IEP procedures), identification, evaluation, placement, and related services. The participants had to decide whether a violation of the ESE child's rights had occurred by marking: (a) Yes, (b) No, or (c) Undecided. After a 77% survey response rate, an analysis of the scores and demographic information was done using a two-way analysis of variance, chi-square, and crosstabs.

Research questions addressed the administrators' overall level of knowledge. Comparisons were made between principals and assistant principals and between levels of schooling. Exploratory questions concerned ESE issues deemed problematic by administrators, the effects of demographic variables on survey scores, and the resources administrators used to access ESE information.

The study revealed: (a) a significant difference when comparing the number of ESE courses taken and the score on the survey; (b) that the top five resources of ESE information were the region office, school ESE department chairs, ESE teachers, county workshops, and county inservices; (c) that problematic areas included discipline, evaluation procedures, placement issues, and IEP due process concerns; (d) that administrators as a group did not exhibit satisfactory knowledge of ESE law, with a mean score of 12 correct and 74% of responding administrators scoring at the unsatisfactory level (below 70%); (e) that, across school levels, elementary administrators scored significantly higher than high school administrators; and (f) that assistant principals consistently scored higher than principals on each scenario, with a significant difference at the high school level.

The study reveals a vital need for administrators to receive additional preparation in order to possess a basic understanding of ESE school law and how it impacts their respective schools and school districts, so that they might meet professional obligations and protect the rights of all individuals involved. Recommendations for this additional administrative preparation and further research topics are discussed.

Relevance: 100.00%

Abstract:

In response to a crime epidemic afflicting Latin America since the early 1990s, several countries in the region have resorted to using heavy-force police or military units to physically retake territories de facto controlled by non-state criminal or insurgent groups. After a period of territorial control, the heavy forces hand over law enforcement functions in the retaken territories to regular police officers, with the hope that the territories and their populations will remain under the control of the state. To varying degrees of intensity and consistency, Brazil, Colombia, Mexico, and Jamaica have adopted such policies since the mid-1990s. During such operations, governments need to pursue two interrelated objectives: to better establish the state's physical presence, and to realign the allegiance of the population in those areas toward the state and away from the non-state criminal entities. From the perspective of law enforcement, such operations entail several critical decisions and junctures, such as whether or not to announce the force insertion in advance. This decision trades off the element of surprise and the ability to capture key leaders of the criminal organizations against the ability to minimize civilian casualties and force levels; advance announcement, however, may allow criminals to go to ground and escape capture. Governments thus must decide whether they merely seek to displace criminal groups to other areas or to maximize their decapitation capacity. Intelligence flows rarely come from the population; often, rival criminal groups are the best source of intelligence. However, cooperation between the state and such groups that goes beyond using vetted intelligence provided by the groups, such as state tolerance for militias, compromises the rule-of-law integrity of the state and ultimately can eviscerate even public safety gains. Sustaining security after initial clearing operations is at times even more challenging than conducting the initial operations.

Unlike the heavy forces, traditional police forces, especially if designed as community police, have the capacity to develop the trust of the community and ultimately to focus on crime prevention; developing such trust, however, often takes a long time. To develop the community's trust, regular police forces need to conduct frequent on-foot patrols with intensive, non-threatening interactions with the population, and to minimize the use of force. Moreover, sufficiently robust patrol units need to be placed in designated beats for a substantial amount of time, often at least a year. Establishing oversight mechanisms, including joint police-citizen boards, further facilitates building trust in the police among the community. After disruption of the established criminal order, street crime often rises significantly, and both the heavy-force and community-police units often struggle to contain it. The increase in street crime alienates the population of the retaken territory from the state; thus, developing a capacity to address street crime is critical. Moreover, the community-police units tend to be vulnerable (especially initially) to efforts by displaced criminals to reoccupy the cleared territories. Losing a cleared territory back to criminal groups is extremely costly in terms of losing any established trust and being able to recover it. Rather than operating on an a priori determined handover schedule, a careful assessment of the relative strength of regular police and criminal groups after clearing operations is likely to be a better guide for timing the handover from heavy forces to regular police units. Cleared territories often experience not only a peace dividend, but also a peace deficit: a rise in new serious crime (in addition to street crime). Newly valuable land and other previously inaccessible resources can lead to land speculation and forced displacement, and various other forms of new crime can also rise significantly.

Community police forces often struggle to cope with such crime, especially as it is frequently linked to legal business. Such new crime often receives little to no attention in the design of operations to retake territories from criminal groups. But without an effective response to such new crime, the public safety gains of the clearing operations can be lost altogether.

Relevance: 100.00%

Abstract:

This thesis describes the development of an open-source system for virtual bronchoscopy used in combination with electromagnetic instrument tracking. The end application is virtual navigation of the lung for biopsy of early-stage cancer nodules. The open-source platform 3D Slicer was used to create freely available algorithms for virtual bronchoscopy. First, the development of an open-source semi-automatic algorithm for prediction of solitary pulmonary nodule malignancy is presented. This approach may help the physician decide whether to proceed with biopsy of the nodule. The user-selected nodule is segmented in order to extract radiological characteristics (i.e., size, location, edge smoothness, calcification presence, cavity wall thickness), which are combined with patient information to calculate the likelihood of malignancy. The overall accuracy of the algorithm is shown to be high compared to independent experts' assessment of malignancy. The algorithm is also compared with two different predictors, and our approach is shown to provide the best overall prediction accuracy. The development of an airway segmentation algorithm, which extracts the airway tree from surrounding structures on chest Computed Tomography (CT) images, is then described. This represents the first fundamental step toward the creation of a virtual bronchoscopy system. Clinical and ex-vivo images are used to evaluate the performance of the algorithm. Different CT scan parameters are investigated, and parameters for successful airway segmentation are optimized. Slice thickness is the parameter with the greatest effect, while variation of reconstruction kernel and radiation dose is shown to be less critical. Airway segmentation is used to create a 3D rendered model of the airway tree for virtual navigation. Finally, the first open-source virtual bronchoscopy system was combined with electromagnetic tracking of the bronchoscope for the development of a GPS-like system for navigating within the lungs.

Tools for pre-procedural planning and for assisting navigation are provided. Registration between the lungs of the patient and the virtually reconstructed airway tree is achieved using a landmark-based approach. In an attempt to reduce difficulties with registration errors, we also implemented a landmark-free registration method based on a balanced airway survey. In-vitro and in-vivo testing showed good accuracy for this registration approach. The centreline of the 3D airway model is extracted and used to compensate for possible registration errors. Tools are provided to select a target for biopsy on the patient CT image, and pathways from the trachea towards the selected targets are created automatically. The pathways guide the physician during navigation, while distance-to-target information is updated in real time and presented to the user. During navigation, video from the bronchoscope is streamed and presented to the physician next to the 3D rendered image. The electromagnetic tracking is implemented with 5-DOF sensing that does not provide roll rotation information. An intensity-based image registration approach is implemented to rotate the virtual image according to the bronchoscope's rotations. The virtual bronchoscopy system is shown to be easy to use and accurate in replicating the clinical setting, as demonstrated in the pre-clinical environment of a breathing lung model. Animal studies were performed to evaluate overall system performance.
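The landmark-based registration step can be illustrated with the standard Kabsch least-squares rigid alignment; this is a generic sketch of landmark-based rigid registration, not the thesis's actual implementation, and the landmark coordinates below are invented.

```python
import numpy as np

def rigid_register(src, dst):
    """Estimate the rigid transform (R, t) with dst ~= R @ src + t from
    paired landmarks, via the Kabsch (SVD) least-squares solution."""
    src_mean, dst_mean = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_mean).T @ (dst - dst_mean)  # cross-covariance of landmarks
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against a reflection
    D = np.diag([1.0] * (src.shape[1] - 1) + [d])
    R = Vt.T @ D @ U.T
    t = dst_mean - R @ src_mean
    return R, t

# Recover a known rotation and translation from four paired 2D landmarks
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
t_true = np.array([5.0, -2.0])
src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 2.0]])
dst = src @ R_true.T + t_true
R, t = rigid_register(src, dst)  # R ~= R_true, t ~= t_true
```

In practice the landmark pairs would be anatomical points selected on the CT and touched with the tracked bronchoscope, and the residual alignment error is what a centreline-based correction then compensates for.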

Relevance: 100.00%

Abstract:

Stroke is a leading cause of death and permanent disability worldwide, affecting millions of individuals. Traditional clinical scores for assessment of stroke-related impairments are inherently subjective and limited by inter-rater and intra-rater reliability, as well as by floor and ceiling effects. In contrast, robotic technologies provide objective, highly repeatable tools for quantification of neurological impairments following stroke. KINARM is an exoskeleton robotic device that provides objective, reliable tools for assessment of sensorimotor, proprioceptive and cognitive brain function by means of a battery of behavioral tasks. As such, KINARM is particularly useful for assessment of neurological impairments following stroke. This thesis introduces a computational framework for assessment of neurological impairments using the data provided by KINARM, pursuing two main objectives. The first is to investigate how robotic measurements can be used to estimate current and future abilities to perform daily activities for subjects with stroke. We are able to predict clinical scores related to activities of daily living at present and future time points using a set of robotic biomarkers. The findings of this analysis provide a proof of principle that robotic evaluation can be an effective tool for clinical decision support and target-based rehabilitation therapy. The second main objective is to address the emerging problem of long assessment times, which can potentially lead to fatigue when assessing subjects with stroke. To address this issue, we examine two time-reduction strategies. The first strategy focuses on task selection, whereby KINARM tasks are arranged in a hierarchical structure so that an earlier task in the assessment procedure can be used to decide whether or not subsequent tasks should be performed. The second strategy focuses on time reduction within the two longest individual KINARM tasks.

Both reduction strategies are shown to provide significant time savings, ranging from 30% to 90% using task selection and 50% using individual task reductions, thereby establishing a framework for reducing assessment time on a broader set of KINARM tasks. All in all, the findings of this thesis establish an improved platform for diagnosis and prognosis of stroke using robot-based biomarkers.
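The task-selection strategy can be sketched as an early-stopping loop over an ordered battery, under the assumption that a clearly normal score on an early task makes the later, longer tasks uninformative. The task names and the 0.8 cut-off below are hypothetical illustrations, not published KINARM parameters.

```python
def assess(subject, tasks, normal_cutoff=0.8):
    """Run tasks in order and stop early once a task already indicates
    clearly unimpaired performance (score in [0, 1], higher = closer to
    normal). `normal_cutoff` is a hypothetical threshold."""
    results = {}
    for name, run_task in tasks:
        score = run_task(subject)
        results[name] = score
        if score >= normal_cutoff:  # early task is decisive:
            break                   # skip the remaining, longer tasks
    return results

# Hypothetical battery: the first task is decisive, so the second is skipped
battery = [("visually_guided_reaching", lambda s: 0.9),
           ("object_hit", lambda s: 0.5)]
print(assess("subject_01", battery))  # -> {'visually_guided_reaching': 0.9}
```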

Relevance: 100.00%

Abstract:

In our daily lives, we often must predict how well we are going to perform in the future based on an evaluation of our current performance and an assessment of how much we will improve with practice. Such predictions can be used to decide whether to invest our time and energy in learning and, if we opt to invest, what rewards we may gain. This thesis investigated whether people are capable of tracking their own learning (i.e., current and future motor ability) and exploiting that information to make decisions related to task reward. In experiment one, participants performed a target-aiming task under a visuomotor rotation such that they initially missed the target but gradually improved. After briefly practicing the task, they were asked to select rewards for hits and misses applied to subsequent performance in the task, where selecting a higher reward for hits came at the cost of receiving a lower reward for misses. We found that participants made decisions that were in the direction of the optimal choice and therefore demonstrated knowledge of their future task performance. In experiment two, participants learned a novel target-aiming task in which they were rewarded for target hits. Every five trials, they could choose a target size, which varied inversely with reward value. Although participants' decisions deviated from the optimal, a model suggested that they took into account both past performance and predicted future performance when making their decisions. Together, these experiments suggest that people are capable of tracking their own learning and using that information to make sensible decisions related to reward maximization.
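The wager choice in experiment one amounts to maximizing expected reward under one's predicted hit probability. A minimal sketch, with invented payoff options:

```python
def best_wager(p_hit, options):
    """Choose the (reward_hit, reward_miss) pair that maximizes expected
    reward given a predicted probability of hitting the target.
    The payoff options used here are invented for illustration."""
    return max(options, key=lambda o: p_hit * o[0] + (1 - p_hit) * o[1])

options = [(100, 0), (60, 40), (50, 50)]  # higher hit reward => lower miss reward
print(best_wager(0.9, options))  # confident in future hits -> (100, 0)
print(best_wager(0.3, options))  # expects to keep missing  -> (50, 50)
```

A participant who tracks their own learning should plug in the hit probability they predict for future, post-practice performance rather than their current hit rate, which is exactly the knowledge the experiment probes.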

Relevance: 100.00%

Abstract:

If one intends to compile a dictionary of adjectives, whether monolingual or bilingual, the first task facing the lexicographer is to define what an adjective is, a question that has still not been satisfactorily resolved. In German there is a series of words that have traditionally been described as adjectives in exclusively predicative function, whose status as adjectives is, however, questioned by some authors. This article seeks to clarify whether these words can really appear only in predicative function, how they are described in dictionaries and grammars, and what their main correspondences in Spanish are, in order to decide whether they should be included in a corpus intended for the compilation of a German-Spanish syntactic dictionary of adjectives.

Relevance: 100.00%

Abstract:

This thesis analyzes the teaching material Caminando 3 from a neurodidactic perspective. The discipline of neurodidactics is young and controversial, and the aim of this investigation is to present an understanding of the practical application of neurodidactics in written material for the teaching of Spanish as a second language. The methods applied are a quantitative, objective analysis of measurable aspects of Caminando 3 and a qualitative, subjective analysis that interprets the fundamental understandings of language learning that Caminando 3 reflects in its structure and content. The results show that several aspects of Caminando 3 are supported by the evidence and advice of neurodidactic theories, while at the same time some details in the exercises and parts of the layout contain a level of repetition and predictability which, according to neurodidactic theories, is directly harmful to the process of acquisition. The conclusion is that a more dynamic textbook could be produced following guidelines created from these results. Further investigations will have to measure pupils' knowledge and success to decide whether neurodidactics is a successful complement to traditional second-language didactics.

Relevance: 100.00%

Abstract:

Context: Mobile applications support a set of user-interaction features that are independent of the application logic. Rotating the device, scrolling, or zooming are examples of such features. Some bugs in mobile applications can be attributed to user-interaction features. Objective: This paper proposes and evaluates a bug analyzer based on user-interaction features that uses digital image processing to find bugs. Method: Our bug analyzer detects bugs by comparing the similarity between images taken before and after a user interaction. SURF, an interest point detector and descriptor, is used to compare the images. To evaluate the bug analyzer, we conducted a case study with 15 randomly selected mobile applications. First, we identified user-interaction bugs by manually testing the applications. Images were captured before and after applying each user-interaction feature. Then, image pairs were processed with SURF to obtain interest points, from which a similarity percentage was computed, in order to finally decide whether there was a bug. Results: We performed a total of 49 user-interaction feature tests. When manually testing the applications, 17 bugs were found, whereas when using image processing, 15 bugs were detected. Conclusions: 8 out of the 15 mobile applications tested had bugs associated with user-interaction features. Our bug analyzer based on image processing was able to detect 88% (15 out of 17) of the user-interaction bugs found with manual testing.
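The final decision step, turning a similarity percentage into a bug verdict, can be sketched as below; the 0.6 threshold is a hypothetical illustration, not the cut-off used in the paper.

```python
def is_bug(n_matched, n_interest_points, threshold=0.6):
    """Flag a user-interaction bug when the before/after screenshots are
    too dissimilar: similarity = matched interest points / total points
    (as computed from SURF matches). `threshold` is hypothetical."""
    if n_interest_points == 0:
        return True  # nothing detected at all: flag for manual review
    similarity = n_matched / n_interest_points
    return similarity < threshold

print(is_bug(55, 100))  # 55% similarity -> True (likely bug)
print(is_bug(90, 100))  # 90% similarity -> False (screens look alike)
```

The interest points themselves would come from a feature detector such as SURF (available in OpenCV's contrib modules) applied to the before/after screenshots, with descriptor matching yielding `n_matched`.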

Relevance: 100.00%

Abstract:

My thesis consists of three essays that investigate strategic interactions between individuals engaging in risky collective action in uncertain environments. The first essay analyzes a broad class of incomplete-information coordination games with a wide range of applications in economics and politics. The second essay draws on the general model developed in the first essay to study individuals' decisions of whether to engage in protest/revolution/coup/strike. The final essay explicitly integrates the state's response into the analysis. The first essay, Coordination Games with Strategic Delegation of Pivotality, exhaustively analyzes a class of binary-action, two-player coordination games in which players receive stochastic payoffs only if both players take a "stochastic-coordination action". Players receive conditionally independent noisy private signals about the normally distributed stochastic payoffs. With this structure, each player can exploit the information contained in the other player's action only when he takes the "pivotalizing action". This feature has two consequences: (1) when the fear of miscoordination is not too large, in order to utilize the other player's information, each player takes the "pivotalizing action" more often than he would based solely on his private information; and (2) best responses feature both strategic complementarities and strategic substitutes, implying that the game is neither supermodular nor a typical global game. This class of games has applications to a wide range of economic and political phenomena, including war and peace, protest/revolution/coup/strike, interest-group lobbying, international trade, and the adoption of a new technology. My second essay, Collective Action with Uncertain Payoffs, studies the decision problem of citizens who must decide whether to submit to the status quo or mount a revolution. If they coordinate, they can overthrow the status quo.

Otherwise, the status quo is preserved and participants in a failed revolution are punished. Citizens face two types of uncertainty: (a) non-strategic, in that they are uncertain about the relative payoffs of the status quo and revolution, and (b) strategic, in that they are uncertain about each other's assessments of those relative payoffs. I draw on the existing literature and historical evidence to argue that uncertainty in the payoffs of the status quo and revolution is intrinsic to politics. Several counter-intuitive findings emerge: (1) Better communication between citizens can lower the likelihood of revolution. In fact, when the punishment for failed protest is not too harsh and citizens' private knowledge is accurate, further communication reduces incentives to revolt. (2) Increasing strategic uncertainty can increase the likelihood of revolution attempts, and even the likelihood of successful revolution. In particular, revolt may be more likely when citizens obtain information privately than when they receive it from a common media source. (3) Two dilemmas arise concerning the intensity and frequency of punishment (repression) and the frequency of protest. Punishment dilemma 1: harsher punishments may increase the probability that punishment is materialized. That is, as the state increases the punishment for dissent, it might also have to punish more dissidents. Only when the punishment is sufficiently harsh does harsher punishment reduce the frequency of its application. Punishment dilemma 1 leads to punishment dilemma 2: the frequencies of repression and protest can be positively or negatively correlated, depending on the intensity of repression. My third essay, The Repression Puzzle, investigates the relationship between the intensity of grievances and the likelihood of repression. First, I observe that the occurrence of state repression is itself a puzzle: if repression is to succeed, dissidents should not rebel.

If it is to fail, the state should concede in order to save the costs of unsuccessful repression. I then propose an explanation for the "repression puzzle" that hinges on information asymmetries between the state and dissidents about the costs of repression to the state, and hence about the likelihood of its application by the state. I present a formal model that combines the insights of grievance-based and political-process theories to investigate the consequences of this information asymmetry for the dissidents' contentious actions and for the relationship between the magnitude of grievances (formulated here as the extent of inequality) and the likelihood of repression. The main contribution of the paper is to show that this relationship is non-monotone: as the magnitude of grievances increases, the likelihood of repression might decrease. I investigate the relationship between inequality and the likelihood of repression in all country-years from 1981 to 1999. To mitigate specification problems, I estimate the probability of repression using a generalized additive model with thin-plate splines (GAM-TPS). This technique allows for a flexible relationship between inequality, the proxy for the costs of repression and revolution (income per capita), and the likelihood of repression. The empirical evidence supports my prediction that the relationship between the magnitude of grievances and the likelihood of repression is non-monotone.

Relevance: 100.00%

Abstract:

Competence grids (Kompetenzraster) are pedagogical instruments used for competence-oriented, individualized, and self-directed learning in vocational schools. They are usually employed within an overall pedagogical concept, in which the grids are often a central instrument in a complex structure of school learning and teaching processes. Competence grids are frequently the fixed point to which other instruments (such as checklists and learning jobs) are oriented, and they define the starting and end points of the learning processes. Students are usually granted degrees of freedom, so that they (co-)decide whether, what, when, how, and to what end they learn. Working with the grids in schools can be seen as an attempt to place learners at the centre of pedagogical thinking and action. This contribution aims to relate self-directed learning, from a detached, rather theoretical perspective that abstracts from any single pragmatic model, to individualized learning with competence grids. At its core, a systematization approach is developed that presents the complex interrelations of learning with competence grids in the context of self-directed learning. This is intended to contribute to the elaboration of learning with competence grids in vocational schools. Concretely, the following question is in focus: What can competence grids achieve within the framework of self-directed learning? (DIPF/Orig.)