807 results for Freedom of speech.
Abstract:
Self-stabilization is a property of a distributed system such that, regardless of the legitimacy of its current state, the system behavior shall eventually reach a legitimate state and shall remain legitimate thereafter. The elegance of self-stabilization stems from the fact that it distinguishes distributed systems by a strong fault tolerance property against arbitrary state perturbations. The difficulty of designing and reasoning about self-stabilization has been witnessed by many researchers; most of the existing techniques for the verification and design of self-stabilization are either brute-force, or adopt manual approaches non-amenable to automation. In this dissertation, we first investigate the possibility of automatically designing self-stabilization through global state space exploration. In particular, we develop a set of heuristics for automating the addition of recovery actions to distributed protocols on various network topologies. Our heuristics equally exploit the computational power of a single workstation and the available parallelism on computer clusters. We obtain existing and new stabilizing solutions for classical protocols like maximal matching, ring coloring, mutual exclusion, leader election and agreement. Second, we consider a foundation for local reasoning about self-stabilization; i.e., study the global behavior of the distributed system by exploring the state space of just one of its components. It turns out that local reasoning about deadlocks and livelocks is possible for an interesting class of protocols whose proof of stabilization is otherwise complex. In particular, we provide necessary and sufficient conditions – verifiable in the local state space of every process – for global deadlock- and livelock-freedom of protocols on ring topologies. 
Local reasoning potentially circumvents two fundamental problems that complicate the automated design and verification of distributed protocols: (1) state explosion and (2) partial state information. Moreover, local proofs of convergence are independent of the number of processes in the network, thereby enabling our assertions about deadlocks and livelocks to apply on rings of arbitrary sizes without worrying about state explosion.
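As a minimal illustration of the kind of protocol studied here, the following Python sketch simulates Dijkstra's classic K-state self-stabilizing token ring under a central daemon (a textbook example of self-stabilization on rings, not one of the dissertation's synthesized solutions; process count, K, and the daemon's random scheduling are illustrative choices):

```python
import random

def privileged(states, K):
    """Indices currently holding a 'privilege' (token) in Dijkstra's K-state ring."""
    n = len(states)
    p = [0] if states[0] == states[-1] else []          # process 0: privileged iff equal to predecessor
    p += [i for i in range(1, n) if states[i] != states[i - 1]]  # others: privileged iff different
    return p

def move(states, i, K):
    """Let privileged process i make its move."""
    new = list(states)
    if i == 0:
        new[0] = (states[0] + 1) % K   # process 0 increments modulo K
    else:
        new[i] = states[i - 1]         # other processes copy their predecessor
    return new

def stabilize(states, K, rng):
    """Run under a randomly scheduling central daemon until exactly one privilege remains."""
    steps = 0
    while len(privileged(states, K)) > 1:
        states = move(states, rng.choice(privileged(states, K)), K)
        steps += 1
    return states, steps
```

Starting from an arbitrary (possibly corrupted) state with K at least the ring size, the system converges to a legitimate state with exactly one circulating privilege, and every subsequent move preserves that property — the convergence and closure halves of self-stabilization.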
Abstract:
During locomotion, turning is a common and recurring event which is largely neglected in current state-of-the-art ankle-foot prostheses, forcing amputees to use steering mechanisms different from those of non-amputees. A better understanding of the complexities surrounding lower limb prostheses will lead to increased health and well-being of amputees. The aim of this research is to develop a steerable ankle-foot prosthesis that mimics the mechanical properties of the human ankle. Experiments were developed to estimate the mechanical impedance of the ankle and the ankle's angles during straight walking and step turns. Next, this information was used in the design of a prototype powered steerable ankle-foot prosthesis with two controllable degrees of freedom. One possible approach in the design of prosthetic robots is to use the parameters of the human joints, especially their impedance. A series of experiments was conducted to estimate the stochastic mechanical impedance of the human ankle with muscles fully relaxed and co-contracting antagonistically. A rehabilitation robot for the ankle, Anklebot, was employed to provide torque perturbations to the ankle. The experiments were performed in two configurations: one with relaxed muscles, and one at 10% of maximum voluntary contraction (MVC). Surface electromyography (sEMG) was used to monitor muscle activation levels, and these sEMG signals were displayed to subjects, who attempted to keep them constant. Time histories of ankle torques and angles in the medial/lateral (ML), inversion-eversion (IE), and dorsiflexion-plantarflexion (DP) directions were recorded. Linear time-invariant transfer functions between the measured torques and angles were estimated, providing an estimate of ankle mechanical impedance. High coherence was observed over a frequency range up to 30 Hz.
The main effect of muscle activation was to increase the magnitude of ankle mechanical impedance in all degrees of freedom of the ankle. Another experiment compared the three-dimensional angles of the ankle during step turns and straight walking. These angles were measured to be used in developing the control strategy of the ankle-foot prosthesis. An infrared camera system was used to track the trajectories and angles of the foot and leg. The combined phases of heel strike and loading response, mid stance, and terminal stance and pre-swing were determined and used to measure the average angles at each combined phase. The range of motion (ROM) in IE increased during turning, while ML rotation decreased and DP changed the least. During the turning step, ankle displacement in DP started with angles similar to the straight walk and progressively showed less plantarflexion. In IE, the ankle showed increased inversion, leaning the body toward the inside of the turn. ML rotation initiated with increased medial rotation during the step turn relative to the straight walk, transitioning to increased lateral rotation at toe off. A prototype ankle-foot prosthesis capable of controlling both DP and IE using a cable-driven mechanism was developed and assessed as part of a feasibility study. The design is capable of reproducing the angles required for straight walking and step turns, generates 712 N of lifting force in plantarflexion, and shows passive stiffness comparable to that of a non-load-bearing ankle. To evaluate the performance of the ankle-foot prosthesis, a circular treadmill was developed to mimic human gait during steering. Preliminary results show that the device can appropriately simulate human gait, loading and unloading the ankle joint during gait on circular paths.
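The nonparametric transfer-function estimation described above (torque and angle records in, an impedance frequency response out) can be sketched in a few lines; the spring-damper ankle model and all parameter values below are illustrative assumptions, not the study's data:

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(0)
fs, n = 1000.0, 200_000
k, b = 300.0, 2.0  # hypothetical ankle stiffness (N·m/rad) and damping (N·m·s/rad)

# band-limited random angle perturbation, as applied by the robot
ba, aa = signal.butter(4, 50, fs=fs)
angle = signal.lfilter(ba, aa, rng.standard_normal(n))
velocity = np.gradient(angle, 1 / fs)
torque = k * angle + b * velocity + 0.01 * rng.standard_normal(n)  # measurement noise

# impedance FRF estimate: Z(f) = S_xy(f) / S_xx(f), angle in, torque out
f, Pxy = signal.csd(angle, torque, fs=fs, nperseg=4096)
_, Pxx = signal.welch(angle, fs=fs, nperseg=4096)
Z = Pxy / Pxx
_, coh = signal.coherence(angle, torque, fs=fs, nperseg=4096)
# at low frequency |Z| approaches the static stiffness k; high coherence
# indicates the linear time-invariant model fits well in that band
```

Stiffer (co-contracted) muscle settings would show up directly as a larger |Z| across the band, which is the main effect reported in the abstract.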
Abstract:
OBJECTIVES: The treatment of recurrent rejection in heart transplant recipients has been a controversial issue for many years. The intent of this retrospective study was to perform a risk-benefit analysis of treatment strategies with bolus steroids only versus anti-thymocyte globulins (RATG; 1.5 mg/kg q 4 days). METHODS: Between 1986 and 1993, 69 of 425 patients (17 male, 52 female; mean age 44 +/- 11 years) who had more than one rejection/patient per month (rej/pt per mo) in the first 3 postoperative months were defined as recurrent rejectors. RESULTS: Repetitive methylprednisolone bolus therapy (70 mg/kg q 3 days) was given to 27 patients (group M; 1.4 +/- 0.2 rej/pt per mo), and RATG therapy was given for one of the rejection episodes of the 42 remaining patients (group A; 1.5 +/- 0.2 rej/pt per mo). The quality of triple-drug immunosuppression in the two study groups was comparable. The rejection-free interval (RFI) following RATG treatment in group A was 21.6 +/- 10 days, versus 22 +/- 11 days in group M. In group M, 3 of 27 patients (11%) had a rejection treatment-related infection (2 bacterial; 1 viral), versus 6 of the 42 patients of group A (14.2%; 1 bacterial, 5 viral). During postoperative months 3-24, 0.15 +/- 0.12 rej/pt per mo were observed in group M and 0.21 +/- 0.13 rej/pt per mo in group A (n.s.). In this 21-month period, cytolytic therapy for rejection was initiated in 8 of the remaining 21 patients of group M (38%) and 15 of the remaining 37 patients of group A (40.5%). Absolute survival and the individual causes of death were not affected by the type of initial treatment of recurrent rejection. Actuarial freedom from graft atherosclerosis is comparable in the two groups, with 78% in group A versus 79% in group M free of graft atherosclerosis at 3 years postoperatively.
CONCLUSIONS: A comparison of cytolytic therapy versus repeated applications of bolus steroids for treatment of recurrent rejection reveals no significant difference in the long-term patient outcome with respect to the incidence of future rejection episodes and survival.
Abstract:
High-density spatial and temporal sampling of EEG data enhances the quality of results of electrophysiological experiments. Because EEG sources typically produce widespread electric fields (see Chapter 3) and operate at frequencies well below the sampling rate, increasing the number of electrodes and time samples will not necessarily increase the number of observed processes, but mainly increases the accuracy of the representation of these processes. This is notably the case when inverse solutions are computed. As a consequence, increasing the sampling in space and time increases the redundancy of the data (in space, because electrodes are correlated due to volume conduction; in time, because neighboring time points are correlated), while the degrees of freedom of the data change only slightly. This has to be taken into account when statistical inferences are to be made from the data. However, in many ERP studies, the intrinsic correlation structure of the data has been disregarded. Often, some electrodes or groups of electrodes are selected a priori as the analysis entity and considered as repeated (within-subject) measures that are analyzed using standard univariate statistics. The increased spatial resolution obtained with more electrodes is thus poorly represented in the resulting statistics. In addition, the assumptions made (e.g. in terms of what constitutes a repeated measure) are not supported by what we know about the properties of EEG data. From the point of view of physics (see Chapter 3), the natural "atomic" analysis entity of EEG and ERP data is the scalp electric field.
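The point about redundancy can be made concrete with a toy simulation: many electrodes recording mixtures of a few underlying sources yield a correlation matrix whose effective dimensionality stays near the source count, however many channels are added. The source count, mixing, and noise level below are illustrative assumptions, not EEG data:

```python
import numpy as np

rng = np.random.default_rng(1)
n_sources, n_electrodes, n_samples = 3, 32, 5000

# a few independent neural "sources" spread to many electrodes
# by a fixed mixing matrix (a crude stand-in for volume conduction)
sources = rng.standard_normal((n_sources, n_samples))
mixing = rng.standard_normal((n_electrodes, n_sources))
eeg = mixing @ sources + 0.05 * rng.standard_normal((n_electrodes, n_samples))

# eigen-spectrum of the channel correlation matrix
evals = np.linalg.eigvalsh(np.corrcoef(eeg))
# participation ratio: effective number of uncorrelated dimensions
eff_dof = evals.sum() ** 2 / (evals ** 2).sum()
# eff_dof stays close to n_sources, far below the 32 channels recorded
```

Treating each of the 32 channels as an independent repeated measure would thus badly overstate the degrees of freedom available for univariate statistics.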
Abstract:
Audio-visual documents obtained from German TV news are classified according to the IPTC topic categorization scheme. To this end, standard text classification techniques are adapted to speech, video, and non-speech audio. For each of the three modalities, word analogues are generated: sequences of syllables for speech, "video words" based on low-level color features (color moments, color correlogram and color wavelet), and "audio words" based on low-level spectral features (spectral envelope and spectral flatness) for non-speech audio. Such audio and video words provide a means to represent the different modalities in a uniform way. The frequencies of the word analogues represent audio-visual documents: the standard bag-of-words approach. Support vector machines are used for supervised classification in a 1 vs. n setting. Classification based on speech outperforms all other single modalities. Combining speech with non-speech audio improves classification. Classification is further improved by supplementing speech and non-speech audio with video words. Optimal F-scores range between 62% and 94%, corresponding to 50%-84% above chance. The optimal combination of modalities depends on the category to be recognized. The construction of audio and video words from low-level features provides a good basis for the integration of speech, non-speech audio and video.
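The bag-of-words pipeline over word analogues might be sketched as follows; the toy documents, token names, and category labels are invented for illustration, and scikit-learn is assumed as the SVM implementation:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import LinearSVC

# each hypothetical document mixes word analogues from the three
# modalities: syllables ("syl_*"), video words ("vid_*"), audio words ("aud_*")
docs = [
    "syl_ka syl_to vid_12 vid_7 aud_3",
    "syl_po syl_li vid_12 aud_9 aud_9",
    "syl_ka syl_to vid_7 aud_3 aud_3",
    "syl_po syl_li vid_44 aud_9",
]
labels = ["sports", "politics", "sports", "politics"]

# uniform bag-of-word-analogues representation: raw token counts
vec = CountVectorizer(token_pattern=r"\S+")
X = vec.fit_transform(docs)

# linear SVMs in a 1-vs-rest setting, as in the abstract's "1 vs. n"
clf = OneVsRestClassifier(LinearSVC()).fit(X, labels)
pred = clf.predict(vec.transform(["syl_ka vid_7 aud_3"]))
```

Because all three modalities are reduced to count vectors over a shared vocabulary, combining them amounts to concatenating token streams before vectorization, which is what makes the uniform representation convenient.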
Abstract:
This article discusses democratic elements in early Islamic sources and in the programs of the Algerian FIS (Front Islamique du Salut) and ANNAHDA in Tunisia. According to historical writings, Islam includes the principles of democratic consensus, consultation, and freedom of opinion, and an understanding that the sources of Islamic jurisdiction are subject to interpretation, that the sharia can be changed, and that religious authorities' power to issue instructions on worldly matters is limited. These are the type of expectations that fundamentalist parties arouse when they speak of an Islamic caliphate as a state system. Against this background, an examination of the political system proposed until 1992 by the Algerian FIS shows that this system would have resulted in a very restrictive form of Islam. An investigation of the political system of the Tunisian fundamentalist leader Rached al-Ghannouchi reveals that the system he proposes may be designated as an Islamic democracy, since it takes into account separation of powers and pluralism of political parties. The head of state would be subject to the law in the same manner as the people. However, it is not a liberal democracy, as he categorically rejects secularism, intends to punish apostates, and is only willing to allow political parties that are based on the religion of Islam. His state would only be a state of those citizens who follow Islam, completely neglecting secularist groups. Social conflicts and unrest are thus predetermined.
Abstract:
The report examines the relationship between day-care institutions, schools and so-called "parents unfamiliar with education", as well as the relationships between the institutions themselves. Within Danish public and professional discourse, concepts like "parents unfamiliar with education" usually refer to environments, parents or families with either no experience of education, or only very limited experience beyond basic school (folkeskole). The "grand old man" of Danish educational research, Prof. Em. Erik Jørgen Hansen, defines the concept as follows: parents who are distant from or not familiar with education are parents without a tradition of education, and for that reason they are unable to contribute constructively to supporting their own children during their education. Many teachers and pedagogues are not used to that term; they prefer concepts like "socially exposed" or "socially disadvantaged" parents, social classes or strata. The report does not focus only on parents who are incapable of supporting the school achievements of their children, since a low level of education is usually connected with social disadvantage. Such parents are often not capable of understanding and meeting the demands the school makes when they send their children to school; they lack the competencies, or the necessary competence for action. At present, much attention is being paid by the Ministries of Education and Social Affairs (recently renamed the Ministry of Welfare) to creating equal opportunities for all children. Many kinds of expertise (directorates, councils, researchers, etc.) have been more than eager to promote recommendations aimed at achieving the ambitious goal: by 2015, 95% of all young people should complete a full education (classes 10-12). Research results point to the importance of increased parental participation. In other words, the agenda is set for 'parents' education'.
It seems necessary to underline that Danish welfare policy has been changing rather radically. The classic model understood welfare as social insurance and/or social redistribution, based on social solidarity. The modern model treats welfare as social service and/or social investment. This means that citizens are changing role: from user and/or citizen to consumer and/or investor. The Danish state, in line with decisions taken by the government, is investing in a national future shaped by global competition. The new models of welfare — "service" and "investment" — imply severe changes in hitherto familiar concepts of family life, the relationship between parents and children, etc. As an example, the investment model points to a new configuration of the relationship between social rights and the rights of freedom. The service model has demonstrated the weakness that access to qualified services in the fields of health or education becomes more and more dependent on private purchasing power. The weakness of the investment model is that it represents a sort of "winner takes all", since a political majority is enabled to set agendas in societal fields formerly protected by the tripartite power structure and by the citizens' rights of freedom. The outcome of the Danish development seems to be the establishment of a politically governed public service industry which, on the one hand, is capable of competing on market conditions and, on the other, can be governed by contracts. This represents a new form of close linkage of politics, economy and professional work. Attempts at controlling education, pedagogy and thereby the population are not a recent invention; European history offers several such experiments. What is really new is the linking of political priorities to the exercise of public activities through economic incentives.
By defining visible goals for public servants, by introducing measurement of achievements and effects, and by implementing a new wage policy dependent on achievements and/or effects, a new system of accountability is manufactured. The consequences are already perceptible: the government decides on special interventions concerning parents, children or youngsters; public servants at the municipal level are instructed to carry out their services by following a manual; and parents are no longer protected by privacy. Protection of privacy and of minorities is no longer a valid argument against further interventions in people's lives (health, food, school, etc.). Citizens are becoming objects of investment, which also implies that people invest in their own health, education and family. This means that investments in changes of lifestyle and the development of competences go hand in hand. The programmes mentioned below are conditioned by this shift.
Abstract:
Research and professional practices have the joint aim of restructuring preconceived notions of reality. Both want to gain an understanding of social reality. Social workers use their professional competence in order to grasp the reality of their clients, while researchers seek to unlock the secrets of the research material. Development and research are now so intertwined and inherent in almost all professional practices that making distinctions between practising, developing and researching has become difficult and in many respects irrelevant. Moving towards research-based practices is possible, and it is easily applied within the framework of the qualitative research approach (Dominelli 2005, 235; Humphries 2005, 280). Social work can be understood as acts and speech acts crisscrossing between social workers and clients. When trying to catch the verbal and non-verbal hints of each other's behaviour, the actors have to do a lot of interpretation in a more or less uncertain mental landscape. Our point of departure is the idea that the study of social work practices requires tools which effectively reveal the internal complexity of social work (see, for example, Adams & Dominelli & Payne 2005, 294-295). The boom of qualitative research methodologies in recent decades is associated with a profound rupture in the humanities known as the linguistic turn (Rorty 1967). The idea that language does not transparently mediate our perceptions and thoughts about reality but, on the contrary, constitutes it, was new and even confusing to many social scientists. Nowadays we have become used to reading research reports which apply different branches of discourse analysis or narratological or semiotic approaches. Although the differences between these orientations are subtle, they share the idea of the predominance of language.
Despite the lively research work of today's social work and the research-minded atmosphere of social work practice, semiotics has rarely been applied in social work research. However, social work as a communicative practice concerns symbols, metaphors and all kinds of representative structures of language. Those items are at the core of semiotics, the science of signs — the science which examines how people use signs in their mutual interaction and in their endeavours to make sense of the world they live in, their semiosis. When thinking about the practice of social work and researching it, a number of interpretational levels must be passed through before reaching the research phase. First of all, social workers have to interpret their clients' situations, which are then recorded in the files. In some very rare cases, those past situations will be reflected on in discussions or interviews, or put under the scrutiny of some researcher in the future. Each and every new observation adds its own flavour to the mixture of meanings. Social workers combine their observations with previous experience and professional knowledge; furthermore, the situation at hand also influences their reactions. In addition, the interpretations made by social workers over the course of their daily working routines are never limited to being part of the personal process of the social worker, but are always inherently cultural as well. The work aiming at social change is defined by the presence of an initial situation, a specific goal, and the means and ways of achieving it, which are — or which should be — agreed upon by the social worker and the client in a situation which is unique and at the same time socially driven. Because of the inherently plot-based nature of social work, the practices related to it can be analysed as stories (see Dominelli 2005, 234), provided, of course, that they signify something and are told by someone.
Research on these practices concentrates on impressions, perceptions, judgements, accounts, documents, etc. All these multifarious elements can be scrutinized as textual corpora, though not as just any textual material. In semiotic analysis, the material studied is characterised as verbal or textual and loaded with meanings. We present a contribution to research methodology, semiotic analysis, which, to our mind, has at least implicit relevance to social work practices. Our examples of semiotic interpretation have been picked from our dissertations (Laine 2005; Saurama 2002). The data are official documents from the archives of a child welfare agency and transcriptions of interviews with shelter employees. These data can be defined as stories told by the social workers about what they have seen and felt. The official documents present only fragments, and they are often written in the passive voice (Saurama 2002, 70). The interviews carried out in the shelters can be described as stories whose narrators are more familiar and known; this material is characterised by the interaction between interviewer and interviewee. The levels of the story and of the telling of the story become apparent when interviews or documents are examined with semiotic tools. The roots of semiotic interpretation can be found in three different branches: American pragmatism, Saussurean linguistics in Paris, and so-called formalism in Moscow and Tartu. In this paper, however, we engage with the so-called Parisian School of semiology, whose prominent figure was A. J. Greimas. The Finnish sociologists Pekka Sulkunen and Jukka Törrönen (1997a; 1997b) have further developed the ideas of Greimas in their studies on socio-semiotics, and we lean on their ideas.
In semiotics, social reality is conceived as a relationship between subjects, observations, and interpretations, and it is seen as mediated by natural language, which is the most common sign system among human beings (Mounin 1985; de Saussure 2006; Sebeok 1986). Signification is an act of associating an abstract concept (signified) with some physical instrument (signifier). These two elements together form the basic concept, the "sign", which never constitutes any kind of meaning alone. Meaning arises in a process of distinction, in which signs are related to other signs. In this chain of signs, meaning diverges from reality. (Greimas 1980, 28; Potter 1996, 70; de Saussure 2006, 46-48.) One interpretative tool is to think of speech as a surface under which deep structures — i.e. values and norms — exist (Greimas & Courtes 1982; Greimas 1987). To our mind, semiotics is very much about playing with two different levels of text: the syntagmatic surface, which is more or less faithful to the grammar, and the paradigmatic, semantic structure of values and norms hidden in the deeper meanings of interpretations. Semiotic analysis deals precisely with the level of meaning which exists under the surface, but the only way to reach those meanings is through the textual level, the written or spoken text. That is why the tools are needed. In our studies, we have used the semiotic square and actant analysis. The former is based on the distinction and categorisation of meanings, the latter on opening up the plotting of narratives in order to reach the value structures.
Abstract:
Additive manufacturing by melting of metal powders is an innovative method for creating one-offs and customized parts. In branches like dentistry, aerospace engineering and tool making, these manufacturing methods are already established. Besides all the advantages, like freedom of design, manufacturing without a tool and a reduced time-to-market, there are some disadvantages, such as limited reproducibility or surface quality. The surface quality strongly depends on the orientation of the component in the build chamber and on the process parameters (laser power and exposure time), but also on the so-called "hatch" strategy, i.e. the way the laser exposes the solid areas. This paper deals with the investigation and characterization of the surface quality of parts produced by selective laser melting (SLM). The main process parameters, including part orientation, part size and hatch strategies, are investigated and monitored. The outcome is a recommendation of suitable hatch strategies depending on the desired part properties; this includes measured values and takes into account process stability and reproducibility.
Abstract:
On October 10, 2013, a Chamber of the European Court of Human Rights (ECtHR) handed down a judgment (Delfi v. Estonia) upholding an Estonian law which, as interpreted, held a news portal liable for the defamatory comments of its users. Amongst the considerations that led the Court to find no violation of freedom of expression in this particular case were, above all, the inadequacy of the automatic screening system adopted by the website and the users' option to post their comments anonymously (i.e. without prior registration via email), which in the Court's view rendered the protection conferred on the injured party via direct legal action against the authors of the comments ineffective. Drawing on the implications of this (not yet final) ruling, this paper discusses a few questions that the tension between the risk of wrongful use of information and the right to anonymity generates for the development of Internet communication, and examines the role that intermediary liability legislation can play in managing this tension.
Abstract:
Privacy is commonly seen as an instrumental value in relation to negative freedom, human dignity and personal autonomy. Article 8 ECHR, protecting the right to privacy, was originally coined as a doctrine protecting the negative freedom of citizens in vertical relations, that is, between citizen and state. Over the years, the Court has extended privacy protection to horizontal relations and has gradually accepted that individual autonomy is an equally important value underlying the right to privacy. However, in most of the recent cases regarding Article 8 ECHR, the Court goes beyond the protection of negative freedom and individual autonomy and instead focuses on self-expression, personal development and human flourishing. Accepting this virtue-ethical notion, in addition to the traditional Kantian focus on individual autonomy and human dignity, as a core value of Article 8 ECHR may prove vital for the protection of privacy in the age of Big Data.
Abstract:
During the last decades, the virtual world increasingly gained importance and in this context the enforcement of privacy rights became more and more difficult. An important emanation of this trend is the right to be forgotten enshrining the protection of the data subject’s rights over his/her “own” data. Even though the right to be forgotten has been made part of the proposal for a completely revised Data Protection Regulation and has recently been acknowledged by the Court of Justice of the European Union (“Google/Spain” decision), to date, the discussions about the right and especially its implementation with regard to the fundamental right to freedom of expression have remained rather vague and need to be examined in more depth.
Abstract:
In Europe, roughly three regimes apply to the liability of Internet intermediaries for privacy violations conducted by users through their network. These are: the e-Commerce Directive, which, under certain conditions, excludes them from liability; the Data Protection Directive, which imposes a number of duties and responsibilities on providers processing personal data; and the freedom of expression, contained inter alia in the ECHR, which, under certain conditions, grants Internet providers several privileges and freedoms. Each doctrine has its own field of application, but they also have partial overlap. In practice, this creates legal inequality and uncertainty, especially with regard to providers that host online platforms and process User Generated Content.
Abstract:
This study uses survey data to investigate attitudes among Swiss voters to different models offering more freedom of choice in the educational system. There is a clear opposition to the use of taxpayer money to fund private schools, while free choice between public schools seems to appeal to a majority. The opinions appear to be based on a rational calculation of personal utility. For both types of choice, approval rates are lower for middle- to high-income groups and individuals with a teaching qualification. Furthermore, residents of small to medium-sized towns are opposed to more school choice. On the support side, approval rates for private school choice are higher among parents of school-age children and residents of urban areas. The results also indicate differences between the country's language regions, attributable to intercultural differences in what people consider to be the role of the state.
Abstract:
In 2009, Switzerland, for long an apparent beacon of European toleration and neutrality, voted to ban the erection of minarets. Internal religious matters are normally dealt with at the regional or local level — not at the level of the Swiss national parliament — although the state does seek to ensure good order and peaceful relations between different faith communities. Indeed, the freedom of these communities to believe and to function publicly is enshrined in law. However, as a matter of national policy, now constitutionally embedded, one religious group, the Muslim community, is not permitted to build its distinctive religious edifice, the minaret. Switzerland may have joined the rest of Europe in engaging with the challenge that the Islamic presence poses to European identity and values, but the rejection of a symbol of the presence of one faith — in this case, Islam — by a society that is otherwise predominantly secular, pluralist, and of Christian heritage raises significant concerns. How and why did this happen? What are the implications? This paper discusses some of the issues involved, concluding that the ban is by no means irreversible. Tolerant neutrality may yet again be a leitmotif of Swiss culture and not just of foreign policy.