906 results for display rules
Abstract:
This article provides a comprehensive overview of the regulations on e-commerce consumer protection in China and the European Union. It starts by giving a general overview of the different approaches towards consumer protection in e-commerce. The article then scrutinizes the current legal system in China, focusing mainly on SAIC’s “Interim Measures for the Administration of Online Commodity Trading and Relevant Service Activities”. The subsequent chapter covers the supervision of consumer protection in e-commerce in China, addressing both the regulatory objects of online commodity trading and the applied regulatory mechanisms. While the regulatory objects include operating agents, operating objects, operating behavior, electronic contracts, intellectual property and consumer protection, the regulatory mechanisms for e-commerce in China combine market mechanisms with industry self-discipline under the government’s administrative regulation. Further, this article examines the current European legal system in online commodity trading. It outlines the aim and scope of EU legislation in the respective field. Subsequently, the paper describes the European approach towards the supervision of consumer protection in e-commerce. As there is no central EU agency for consumer protection in e-commerce transactions, the EU stipulates a framework for Member States’ institutions, thereby creating a European supervisory network of those institutions, and empowers private consumer organisations to supervise the market on their behalf. Moreover, the EU encourages the industry to self- or co-regulate e-commerce by providing incentives. Consequently, this article concludes that consumer protection may be achieved by different means and different systems. Even though at first glance the Chinese and the European systems appear to differ substantially, a closer look reveals tendencies of convergence between the two.
Abstract:
On 3 April 2012, the Spanish Supreme Court issued a major ruling in favour of the Google search engine, including its ‘cache copy’ service: Sentencia n.172/2012, of 3 April 2012, Supreme Court, Civil Chamber. The importance of this ruling lies not so much in the circumstances of the case (the Supreme Court was clearly disgusted by the claimant’s ‘maximalist’ petitum to shut down the whole operation of the search engine) as in the court going beyond the text of the Copyright Act into the general principles of the law and case law, and especially in its reading of the three-step test (in Art. 40bis TRLPI) in a positive sense so as to include all these principles. After accepting that none of the limitations listed in the Spanish Copyright statute (TRLPI) exempted the unauthorized use of fragments of the contents of a personal website through the Google search engine and cache copy service, the Supreme Court concluded against infringement, on the grounds that the three-step test (in Art. 40bis TRLPI) is to be read not only in a negative manner but also in a positive sense, taking into account that intellectual property – like any other kind of property – is limited in nature, must endure any ius usus inocui (harmless uses by third parties) and must abide by the general principles of the law, such as good faith and the prohibition of an abusive exercise of rights (Art. 7 Spanish Civil Code). The ruling is a major success in favour of a flexible interpretation and application of the copyright statutes, especially in the scenarios raised by new technologies and market agents, and in favour of using the three-step test as a key tool to allow for it.
Abstract:
Enforcement of copyright online and fighting online “piracy” is a high priority on the EU agenda. Private international law questions have recently become some of the most challenging issues in this area. Internet service providers are still uncertain how the Brussels I Regulation (Recast) provisions would apply in EU-wide copyright infringement cases and in which country they can be sued for copyright violations. Meanwhile, because of the territorial approach that still underlies EU copyright law, right holders are unable to acquire EU-wide relief for copyright infringements online. This article first discusses the recent CJEU rulings in the Pinckney and Hejduk cases and argues that the “access approach” that the Court adopted for solving jurisdiction questions could be quite reasonable if it is applied with additional legal measures at the level of substantive law, such as the targeting doctrine. Secondly, the article explores the alternatives to the currently established lex loci protectionis rule that would enable right holders to get EU-wide remedies under a single applicable law. In particular, the analysis focuses on the special applicable law rule for ubiquitous copyright infringements, as suggested by the CLIP Group, and other international proposals.
Abstract:
We use electronic communication networks for more than simply traditional telecommunications: we access the news, buy goods online, file our taxes, contribute to public debate, and more. As a result, a wider array of privacy interests is implicated for users of electronic communications networks and services. This development calls into question the scope of electronic communications privacy rules. This paper analyses the scope of these rules, taking into account the rationale and the historic background of the European electronic communications privacy framework. We develop a framework for analysing the scope of electronic communications privacy rules using three approaches: (i) a service-centric approach, (ii) a data-centric approach, and (iii) a value-centric approach. We discuss the strengths and weaknesses of each approach. The current e-Privacy Directive contains a complex blend of the three approaches, which does not seem to be based on a thorough analysis of their strengths and weaknesses. The upcoming review of the directive announced by the European Commission provides an opportunity to improve the scoping of the rules.
Abstract:
In order to display a homogeneous image using multiple projectors, differences in the projected intensities must be compensated. In this paper, we present novel approaches to combine and extend existing techniques for edge blending and luminance harmonization to achieve a detailed luminance control. Furthermore, we apply techniques for improving the contrast ratio of multi-segmented displays also to the black offset correction. We also present a simple scheme to involve the displayed context in the correction process to dynamically improve the contrast in brighter images. In addition, we present a metric to evaluate the different methods and their influence on the visual quality.
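The edge blending described in this abstract can be illustrated with a minimal sketch. The cosine ramp, the overlap interval and the gamma value below are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

def blend_weight(x, overlap_start, overlap_end, gamma=2.2):
    """Edge-blend weight for a pixel column x of the left projector in the
    overlap region shared with its right neighbour. Outside the overlap the
    weight is 1 (sole projector) or 0; inside, a cosine ramp fades the
    intensity out. The 1/gamma exponent pre-compensates the projector's
    nonlinear response so that the two contributions sum to 1 in light space."""
    if x <= overlap_start:
        return 1.0
    if x >= overlap_end:
        return 0.0
    t = (x - overlap_start) / (overlap_end - overlap_start)
    w = 0.5 * (1.0 + np.cos(np.pi * t))  # smooth 1 -> 0 ramp in linear light
    return w ** (1.0 / gamma)            # value to write into the framebuffer
```

Because the cosine ramp is symmetric, the left projector's weight at position t and the right projector's weight at 1 - t add up to exactly 1 in linear light, which is the property a seamless blend needs.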
Abstract:
Current advanced cloud infrastructure management solutions allow scheduling actions for dynamically changing the number of running virtual machines (VMs). This approach, however, does not guarantee that the scheduled number of VMs will properly handle the actual user-generated workload, especially if user utilization patterns change. We propose using a dynamically generated scaling model for the VMs containing the services of distributed applications, which is able to react to variations in the number of application users. We answer the following question: how can we dynamically decide how many services of each type are needed in order to handle a larger workload within the same time constraints? We describe a mechanism for dynamically composing the SLAs that control the scaling of distributed services, combining data analysis mechanisms with application benchmarking over multiple VM configurations. By processing the data sets generated by multiple application benchmarks, we discover a set of service monitoring metrics able to predict critical Service Level Agreement (SLA) parameters. By combining this set of predictor metrics with a heuristic for selecting the appropriate scale-out paths for the services of distributed applications, we show how SLA scaling rules can be inferred and then used to control the runtime scale-in and scale-out of distributed services. We validate our architecture and models by performing scaling experiments with a distributed application representative of the enterprise class of information systems, and show how dynamically generated SLAs can be successfully used to control the management of distributed service scaling.
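The idea of inferring a scaling rule from benchmark data and then applying it at runtime can be sketched very schematically. The threshold-fitting step, the 0.5 scale-in factor and the instance cap below are illustrative assumptions, not the paper's actual mechanism:

```python
def infer_scaling_rule(benchmark_samples, sla_limit):
    """benchmark_samples: (metric_value, response_time) pairs collected from
    benchmarking runs under multiple VM configurations. Returns the largest
    metric value that still kept the SLA response time, to be used as a
    scale-out trigger threshold."""
    safe = [metric for metric, resp_time in benchmark_samples if resp_time <= sla_limit]
    return max(safe) if safe else 0.0

def scaling_decision(current_metric, threshold, instances, max_instances=10):
    """Apply the inferred rule: scale out above the threshold, scale in
    when the metric drops well below it, otherwise keep the current count."""
    if current_metric > threshold and instances < max_instances:
        return instances + 1  # scale out
    if current_metric < 0.5 * threshold and instances > 1:
        return instances - 1  # scale in
    return instances
```

For example, if benchmarking showed that a request-queue length of 20 was the largest value still meeting a 0.5 s SLA, the runtime controller would add an instance whenever the observed queue length exceeds 20.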
Abstract:
We consider collective decision problems given by a profile of single-peaked preferences defined over the real line and a set of pure public facilities to be located on the line. In this context, Bochet and Gordon (2012) provide a large class of priority rules based on efficiency, object-population monotonicity and sovereignty. Each such rule is described by a fixed priority ordering among interest groups. We show that any priority rule which treats agents symmetrically (anonymity), respects some form of coherence across collective decision problems (reinforcement), and depends only on peak information (peak-only) is a weighted majoritarian rule. Each such rule defines priorities based on the relative size of the interest groups and specific weights attached to locations. We give an explicit account of the richness of this class of rules.
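A classic way to make a peak-only, anonymous rule concrete is Moulin's generalized median with fixed "phantom" values, which is closely related to (though not identical with) the weighted majoritarian class the abstract describes. This is a sketch for a single facility, with the phantom placements as illustrative assumptions:

```python
import statistics

def generalized_median(peaks, phantoms):
    """Moulin-style generalized median rule for single-peaked preferences
    over the real line: return the median of the agents' reported peaks
    pooled with fixed phantom values. The rule is anonymous (only the
    multiset of peaks matters) and peak-only by construction; with n agents
    and n - 1 phantoms the pooled list has odd size, so the median is
    always one of the listed points."""
    return statistics.median(peaks + phantoms)
```

The phantoms encode the weighting: placing all phantoms at the left end of the line turns the rule into the minimum-peak rule, while spreading them out interpolates between minimum, median and maximum choices.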
Abstract:
Population coding is widely regarded as a key mechanism for achieving reliable behavioral decisions. We previously introduced reinforcement learning for population-based decision making by spiking neurons. Here we generalize population reinforcement learning to spike-based plasticity rules that take the postsynaptic neural code into account. We consider spike/no-spike, spike-count and spike-latency codes. The multi-valued and continuous-valued features of the postsynaptic code allow binary decision making to be generalized to multi-valued decision making and continuous-valued action selection. We show that code-specific learning rules speed up learning for both discrete classification and continuous regression tasks. The suggested learning rules also become faster with increasing population size, in contrast to standard reinforcement learning rules. Continuous action selection is further shown to explain realistic learning speeds in the Morris water maze. Finally, we introduce the concept of action perturbation, as opposed to classical weight or node perturbation, as an exploration mechanism underlying reinforcement learning. Exploration in the action space greatly increases the speed of learning compared to exploration in the neuron or weight space.
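A spike-count population decision and a reward-modulated update step can be sketched as follows. The pool partitioning, the eligibility-trace form and the learning rate are illustrative assumptions, not the rules derived in the paper:

```python
import numpy as np

def population_decision(spike_counts, n_actions):
    """Spike-count population code: the neurons are partitioned into equal
    pools, one per action, and the pool with the largest total spike count
    determines the chosen action."""
    pools = np.array_split(np.asarray(spike_counts), n_actions)
    totals = [pool.sum() for pool in pools]
    return int(np.argmax(totals))

def reward_modulated_update(weights, eligibility, reward, baseline, lr=0.1):
    """Schematic reward-modulated plasticity step: each weight changes by
    its eligibility trace scaled by the reward prediction error."""
    return weights + lr * (reward - baseline) * eligibility
```

With a population of six neurons split into three pools, the middle pool's larger count wins the vote, and a positive reward prediction error then reinforces the synapses whose activity contributed to that choice.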
Abstract:
Objective: To determine how a clinician’s background knowledge, their tasks, and displays of information interact to affect the clinician’s mental model. Design: Repeated-measures nested experimental design. Population, Sample, Setting: The populations were gastrointestinal/internal medicine physicians and nurses within the greater Houston area. A purposeful sample of 24 physicians and 24 nurses was studied in 2003. Methods: Subjects were randomized to two different displays of two different mock medical records: one that contained highlighted patient information and one that contained non-highlighted patient information. They were asked to read the records and summarize their understanding of the patients aloud. Propositional analysis was used to assess their comprehension of the patients. Findings: Different mental models were found between physicians and nurses given the same display of information. The information shared between the two groups was minor compared with the variance in their mental models. There was additionally more variance within the nursing mental models than within the physician mental models given different displays of the same information. Statistically, there was no interaction effect between the display of information and clinician type. Only clinician type could account for the differences in clinician comprehension and thus in their mental models of the cases. Conclusion: The factors that may explain the variance within and between the clinician models are clinician type and, only in the nursing group, the use of highlighting.
Abstract:
Spike timing dependent plasticity (STDP) is a phenomenon in which the precise timing of spikes affects the sign and magnitude of changes in synaptic strength. STDP is often interpreted as the comprehensive learning rule for a synapse - the "first law" of synaptic plasticity. This interpretation is made explicit in theoretical models in which the total plasticity produced by complex spike patterns results from a superposition of the effects of all spike pairs. Although such models are appealing for their simplicity, they can fail dramatically. For example, the measured single-spike learning rule between hippocampal CA3 and CA1 pyramidal neurons does not predict the existence of long-term potentiation, one of the best-known forms of synaptic plasticity. Layers of complexity have been added to the basic STDP model to repair predictive failures, but they have been outstripped by experimental data. We propose an alternate first law: neural activity triggers changes in key biochemical intermediates, which act as a more direct trigger of plasticity mechanisms. One particularly successful model uses intracellular calcium as the intermediate and can account for many observed properties of bidirectional plasticity. In this formulation, STDP is not itself the basis for explaining other forms of plasticity, but is instead a consequence of changes in the biochemical intermediate, calcium. Eventually a mechanism-based framework for learning rules should include other messengers, discrete change at individual synapses, spread of plasticity among neighboring synapses, and priming of hidden processes that change a synapse's susceptibility to future change. Mechanism-based models provide a rich framework for the computational representation of synaptic plasticity.
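The calcium-control idea in this abstract is often formalized as a two-threshold rule in the style of Shouval-type models: intermediate calcium levels depress the synapse, high levels potentiate it. The thresholds and learning rate below are illustrative assumptions, not values from the article:

```python
def calcium_plasticity(w, ca, theta_d=0.35, theta_p=0.55, eta=0.01):
    """Schematic calcium-dependent plasticity rule: the intracellular
    calcium level ca, not spike timing itself, decides the sign of the
    weight change. Above theta_p the weight is potentiated toward an
    upper bound of 1; between theta_d and theta_p it is depressed toward
    0; below theta_d nothing happens."""
    if ca >= theta_p:
        return w + eta * (1.0 - w)  # potentiation (soft-bounded at 1)
    if ca >= theta_d:
        return w - eta * w          # depression (soft-bounded at 0)
    return w                        # sub-threshold calcium: no plasticity
```

In this formulation any spike-timing dependence enters only through the calcium transient each spike pattern evokes, which is how the model recovers STDP as a consequence rather than a first law.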