158 results for HERRERA, NICOLAS
Abstract:
The only effective and scalable way to regulate the actions of people on the internet is through online intermediaries. These are the institutions that facilitate communication: internet service providers, search engines, content hosts, and social networks. Governments, private firms, and civil society organisations are increasingly seeking to influence these intermediaries to take greater responsibility for preventing and responding to IP infringements. Around the world, intermediaries are increasingly subject to a variety of obligations to help enforce IP rights, ranging from informal social and governmental pressure, to industry codes and privately negotiated agreements, to formal legislative schemes. This paper provides an overview of this emerging shift in regulatory approaches, away from legal liability and towards increased responsibilities for intermediaries. This shift straddles two different potential futures: an optimistic set of more effective, more efficient mechanisms for regulating user behaviour, and a dystopian vision of rule by algorithm and private power, without the legitimising influence of the rule of law.
Abstract:
The rise of the peer economy poses complex new regulatory challenges for policy-makers. The peer economy, typified by services like Uber and Airbnb, promises substantial productivity gains through the more efficient use of existing resources and a marked reduction in regulatory overheads. These services are rapidly disrupting established markets, but the regulatory trade-offs they present are difficult to evaluate. In this paper, we examine the peer economy through the lens of ride-sharing and the ongoing struggle over regulatory legitimacy between the taxi industry and new entrants Uber and Lyft. We first sketch the outlines of ride-sharing as a complex regulatory problem, showing how questions of efficiency are necessarily bound up in questions about levels of service, controls over pricing, and different approaches to setting, upholding, and enforcing standards. We outline the need for data-driven policy to understand how these algorithmic systems work and what effects they might have in the medium to long term on measures of service quality, safety, labour relations, and equality. Finally, we discuss how the competition for legitimacy is not primarily being fought on utilitarian grounds, but is instead carried out within the context of a heated ideological battle between different conceptions of the role of the state and private firms as regulators. These struggles are not, as is often thought, struggles between regulated and unregulated systems. We ultimately argue that the key to understanding these regulatory challenges is to develop better conceptual models of the governance of complex systems by private actors, both the incumbents of existing markets and the disruptive network operators of the peer economy, and of the methods available to the state for influencing their actions.
Abstract:
Copyright was once one of the more obscure areas of law. It applied primarily to resolve disputes between rival publishers, and there was a time, not too long ago, when ordinary people gave it no thought. Copyright disputes were like subatomic particles: everyone knew that they existed, but nobody had ever seen one. In the digital age, however, copyright has become a heated, passionate, bloody battleground. The 'copyright wars' now pit readers against authors, pirates against publishers, and content owners against communications providers. Everyone has heard a movie producer decry the rampant infringement of streaming sites, or a music executive suggest that BitTorrent is the end of civilisation as we know it. But everyone infringes copyright on an almost constant basis - streaming amateur videos with a soundtrack that isn't quite licensed, filesharing mp3s, copying LOLcat pictures from Facebook, posting pictures on Pinterest without permission, and so on - and most know full well they're in breach of the law.
Abstract:
Background and Purpose Randomized trials have demonstrated reduced morbidity and mortality with stroke unit care; however, the effect on length of stay, and hence the economic benefit, is less well defined. In 2001, a multidisciplinary stroke unit was opened at our institution. We assessed whether a stroke unit reduces length of stay and in-hospital case fatality when compared with admission to a general neurology/medical ward. Methods A retrospective study of 2 cohorts at the Foothills Medical Centre in Calgary was conducted using administrative databases. We compared a cohort of stroke patients managed on general neurology/medical wards before 2001 with a similar cohort of stroke patients managed on a stroke unit after 2003. Length of stay was centered at 7 days and dichotomized, and the Charlson Index was dichotomized for analysis. Multivariable logistic regression was used to compare length of stay and case fatality in the 2 cohorts, adjusted for age, gender, and patient comorbid conditions defined by the Charlson Index. Results Average length of stay was 15 days for patients on the stroke unit (n=2461) vs 19 days for patients managed on general neurology/medical wards (n=1567). The proportion of patients with length of stay >7 days was 53.8% on general neurology/medical wards vs 44.4% on the stroke unit (difference 9.4%; P<0.0001). The adjusted odds of a length of stay >7 days were reduced by 30% (P<0.0001) on the stroke unit compared with general neurology/medical wards. Overall in-hospital case fatality was reduced by 4.5% with stroke unit care. Conclusions We observed a reduced length of stay and reduced in-hospital case fatality on a stroke unit compared with general neurology/medical wards.
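The adjusted analysis described above is straightforward to reproduce in outline. Below is a minimal sketch, assuming a hypothetical patient-level table; the column names (los_days, stroke_unit, age, male, charlson) and the synthetic data are illustrative stand-ins for the administrative records. It dichotomizes length of stay at 7 days, dichotomizes the Charlson Index, and fits a multivariable logistic regression adjusted for age and gender.

```python
# Sketch of the abstract's adjusted analysis: logistic regression of
# length of stay > 7 days on stroke-unit care, adjusting for age,
# gender, and a dichotomized Charlson comorbidity index.
# Column names and synthetic data are illustrative assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "stroke_unit": rng.integers(0, 2, n),   # 1 = stroke unit, 0 = general ward
    "age": rng.normal(72, 10, n),
    "male": rng.integers(0, 2, n),
    "charlson": rng.poisson(1.5, n),
    "los_days": rng.exponential(15, n),
})

df["los_gt7"] = (df["los_days"] > 7).astype(int)          # dichotomize LOS at 7 days
df["charlson_high"] = (df["charlson"] >= 2).astype(int)   # dichotomize Charlson index

model = smf.logit("los_gt7 ~ stroke_unit + age + male + charlson_high", data=df).fit()
print(model.summary())
print("Adjusted OR for stroke-unit care:", np.exp(model.params["stroke_unit"]))
```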
Abstract:
It’s the stuff of nightmares: your intimate images are leaked and posted online by somebody you thought you could trust. But in Australia, victims often have no real legal remedy for this kind of abuse. This is the key problem of regulating the internet. Often, speech we might consider abusive or offensive isn’t actually illegal. And even when the law technically prohibits something, enforcing it directly against offenders can be difficult. It is a slow and expensive process, and where the offender or the content is overseas, there is virtually nothing victims can do. Ultimately, punishing intermediaries for content posted by third parties isn’t helpful. But we do need to have a meaningful conversation about how we want our shared online spaces to feel. The providers of these spaces have a moral, if not legal, obligation to facilitate this conversation.
Abstract:
Background Aquatic exercise has been widely used for rehabilitation and functional recovery due to its physical and physiological benefits. However, there is high variability in how muscle activity recorded from surface electromyographic (sEMG) signals is reported. The aim of this study is to present an updated review of the literature on the state of the art of muscle activity recorded using sEMG during activities and exercises performed by humans in water. Methods A literature search was performed to identify studies of aquatic exercise and movement. Results Twenty-one studies were selected for critical appraisal. The sample size, functional tasks analyzed, and muscles recorded were examined for each paper, and its clinical contribution was evaluated. Conclusions Muscle activity tends to be lower in water-based than in land-based activity; however, more research is needed to understand why. Approaches from the basic and applied sciences could support the understanding of aspects relevant to clinical practice.
Abstract:
PURPOSE: The purpose of the present study was to analyze neuromuscular responses during the performance of a sit-to-stand (STS) task in water and on dry land. SCOPE: Ten healthy subjects (five males and five females) were recruited for the study. Surface electromyography (sEMG) of the lower limb and trunk muscles was recorded during maximal voluntary contractions (MVC) and during the STS task. RESULTS: With signals normalized to MVC, muscle activity was significantly higher on dry land than in water for the quadriceps vastus medialis (17.3%), the quadriceps rectus femoris (5.3%), the long head of the biceps femoris (5.5%), the tibialis anterior (13.9%), the gastrocnemius medialis (3.4%), and the soleus (6.2%). However, muscle activity was higher in water for the rectus abdominis (-26.6%) and the erector spinae (-22.6%). CONCLUSIONS: This study is the first to describe the neuromuscular responses of healthy subjects during the performance of the STS task in water. The differences in lower limb and trunk muscle activity should be considered when using the STS movement in aquatic rehabilitation.
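The reported percentages come from MVC normalization: task sEMG amplitude expressed relative to the amplitude recorded during a maximal voluntary contraction. A minimal sketch of that computation follows, with synthetic signals and illustrative names; nothing here is the authors' code.

```python
# Sketch of MVC normalization behind the reported percentages: task
# sEMG amplitude is expressed as a percentage of the MVC-trial
# amplitude, and the land-water difference is taken on the
# normalized signals. Data and names are illustrative assumptions.
import numpy as np

def rms_amplitude(emg: np.ndarray) -> float:
    """Root-mean-square amplitude of an sEMG segment."""
    return float(np.sqrt(np.mean(np.square(emg))))

def percent_mvc(task_emg: np.ndarray, mvc_emg: np.ndarray) -> float:
    """Task amplitude normalized to the MVC trial, in %MVC."""
    return 100.0 * rms_amplitude(task_emg) / rms_amplitude(mvc_emg)

# Hypothetical segments for one muscle (e.g. vastus medialis).
rng = np.random.default_rng(1)
mvc = rng.normal(0, 1.0, 2000)         # MVC trial
sts_land = rng.normal(0, 0.45, 2000)   # sit-to-stand on dry land
sts_water = rng.normal(0, 0.30, 2000)  # sit-to-stand in water

land = percent_mvc(sts_land, mvc)
water = percent_mvc(sts_water, mvc)
print(f"land: {land:.1f} %MVC, water: {water:.1f} %MVC, difference: {land - water:.1f}")
```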
Abstract:
Australian copyright law is broken, and the Australian Government isn’t moving quickly to fix it. Borrowing, quoting, and homage are fundamental to the creative process. This is how people are inspired to create. Under Australian law, though, most borrowing is copyright infringement, unless it is licensed or falls within particular, narrow categories. This year marks five years since the very real consequences of Australia’s restrictive copyright law for Australian artists were made clear in the controversial litigation over Men at Work’s 1981 hit Down Under. The band lost a court case in 2010 that found that the song’s iconic flute riff copied part of the 1934 children’s song Kookaburra Sits in the Old Gum Tree. A new book and documentary tell us more about the story behind the anthem and the court case. The book, Down Under by Trevor Conomy, and the documentary, You Better Take Cover by Harry Hayes, bring renewed interest in, and new perspectives on, the tragic story.
Abstract:
The year is still young, but this week a judgment was handed down in what may well be the biggest music case of 2015. Marvin Gaye’s children have won a copyright lawsuit against Robin Thicke (no stranger to controversy) and Pharrell Williams over the song Blurred Lines. The 2013 hit was found to have infringed Gaye’s musical copyright in Got To Give It Up. A jury in the US awarded damages of nearly US$7.4 million – almost half of the song’s US$16.6 million takings to date.
Abstract:
In his 1987 book, The Media Lab: Inventing the Future at MIT, Stewart Brand provides an insight into the visions of the future of the media in the 1970s and 1980s. He notes that Nicholas Negroponte made a compelling case for the foundation of a media laboratory at MIT with diagrams detailing the convergence of three sectors of the media—the broadcast and motion picture industry; the print and publishing industry; and the computer industry. Brand commented: ‘If Negroponte was right and communications technologies really are converging, you would look for signs that technological homogenisation was dissolving old boundaries out of existence, and you would expect an explosion of new media where those boundaries used to be’. Two decades later, technology developers, media analysts and lawyers have become excited about the latest phase of media convergence. In 2006, the faddish Time Magazine heralded the arrival of various Web 2.0 social networking services: You can learn more about how Americans live just by looking at the backgrounds of YouTube videos—those rumpled bedrooms and toy‐strewn basement rec rooms—than you could from 1,000 hours of network television. And we didn’t just watch, we also worked. Like crazy. We made Facebook profiles and Second Life avatars and reviewed books at Amazon and recorded podcasts. We blogged about our candidates losing and wrote songs about getting dumped. We camcordered bombing runs and built open‐source software. America loves its solitary geniuses—its Einsteins, its Edisons, its Jobses—but those lonely dreamers may have to learn to play with others. Car companies are running open design contests. Reuters is carrying blog postings alongside its regular news feed. Microsoft is working overtime to fend off user‐created Linux. We’re looking at an explosion of productivity and innovation, and it’s just getting started, as millions of minds that would otherwise have drowned in obscurity get backhauled into the global intellectual economy. The magazine announced that Time’s Person of the Year was ‘You’, the everyman and everywoman consumer ‘for seizing the reins of the global media, for founding and framing the new digital democracy, for working for nothing and beating the pros at their own game’. This review essay considers three recent books, which have explored the legal dimensions of new media. In contrast to the unbridled exuberance of Time Magazine, this series of legal works displays an anxious trepidation about the legal ramifications associated with the rise of social networking services. In his tour de force, The Future of Reputation: Gossip, Rumor, and Privacy on the Internet, Daniel Solove considers the implications of social networking services, such as Facebook and YouTube, for the legal protection of reputation under privacy law and defamation law. Andrew Kenyon’s edited collection, TV Futures: Digital Television Policy in Australia, explores the intersection between media law and copyright law in the regulation of digital television and Internet videos. In The Future of the Internet and How to Stop It, Jonathan Zittrain explores the impact of ‘generative’ technologies and ‘tethered applications’—considering everything from the Apple Mac and the iPhone to the One Laptop per Child programme.
Abstract:
There’s nothing new about this recipe for success: toss in high-stress scenarios, flavour generously with competitive chefs, and garnish with a panel of celebrity judges. With all major broadcasters in the country now dishing up some form of reality cooking programme, Australians could be forgiven for having lost any expectation of original TV material. But that didn’t stop Channel Seven from taking Channel Nine to court last week, arguing that its copyright in My Kitchen Rules had been infringed by Nine’s latest prime-time effort, The Hotplate. After the first few episodes went to air, Seven asked for an injunction to stop Nine from broadcasting any more episodes of the reality show. So let’s look at some common confusions about copyright law and how it relates to reality television. In this context, copyright infringement isn’t about shows sharing major similarities, or about protecting ideas, but rather about the expression of those ideas in the final product. Still, stretching copyright law to protect the “vibe” of a work isn’t good for artists, TV producers or viewers: copyright was designed to nurture creativity, not stifle it.
Abstract:
The makers of Dallas Buyers Club have been dealt a blow in their attempt to extract payment from people alleged to have downloaded illegal copies of the movie. Voltage Pictures, which owns Dallas Buyers Club, has been trying to identify over 4,700 iiNet subscribers who it alleges downloaded illicit copies of the movie. Earlier this year, the Federal Court agreed that iiNet should hand over subscriber details, but warned that any letter sent to account holders must first be approved by the court to protect consumers from abuse of the legal system. In a win for consumer protection, the Federal Court has now rejected Voltage’s draft letters, criticising Voltage’s attempts to avoid explaining what fee it would demand.
Abstract:
Abnormally high price spikes in spot electricity markets represent a significant risk to market participants. As such, a literature has developed that focuses on forecasting the probability of such spike events, moving beyond simply forecasting the level of prices. Many univariate time series models have been proposed to deal with spikes within an individual market region. This paper is the first to develop a multivariate self-exciting point process model for dealing with price spikes across connected regions in the Australian National Electricity Market. The influence of the physical infrastructure connecting the regions on the transmission of spikes is examined. It is found that spikes are transmitted between the regions, and that the size of spikes is influenced by the available transmission capacity. It is also found that improved risk estimates are obtained when inter-regional linkages are taken into account.
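The model class named here can be written down compactly. As a hedged sketch, a generic M-region mutually exciting (Hawkes-type) point process with exponential decay takes the form below; the notation is illustrative rather than the paper's exact specification, which additionally lets spike behaviour depend on available transmission capacity.

```latex
% Generic multivariate self-exciting (Hawkes) intensity for region m;
% notation illustrative, not the paper's exact specification.
\lambda_m(t) = \mu_m
  + \sum_{n=1}^{M} \sum_{i \,:\, t_{n,i} < t} \alpha_{mn}\, e^{-\beta_{mn}\,(t - t_{n,i})},
\qquad m = 1, \dots, M
```

Here \mu_m is the baseline spike rate in region m, each past spike at time t_{n,i} in region n raises region m's intensity by \alpha_{mn} (the cross-region transmission channel), and \beta_{mn} controls how quickly that excitation decays; inter-regional linkages are captured by the off-diagonal \alpha_{mn} terms.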
Abstract:
Background Context There are differences in definitions of end plate lesions (EPLs), often referred to as Schmorl's nodes, that may, to some extent, account for the large range of reported prevalence (3.8% to 76%). Purpose To develop a technique to measure the size, prevalence and location of EPLs in a consistent manner. Study Design/Setting This study proposed a method using a detection algorithm, which was applied to CT scans of five adolescent females (average age 15.1 years, range 13.0 to 19.2 years) with idiopathic scoliosis (average major Cobb angle 60°, range 55 to 67°). Methods Existing low-dose computed tomography scans were segmented semi-automatically to extract the 3D morphology of each vertebral endplate. Any remaining attachments to the posterior elements of adjacent vertebrae or endplates were then manually sectioned. An automatic algorithm was used to determine the presence and position of EPLs. Results EPLs were identified in 15 of the 170 (8.8%) endplates analysed, with an average depth of 3.1 mm. Eleven of the 15 EPLs were seen in the lumbar spine. The algorithm was found to be most sensitive to changes in the minimum gradient required at the edges of the EPL. Conclusions This study describes an imaging analysis technique for consistent measurement of the prevalence, location and size of EPLs. The technique can be used to analyse large populations without observer errors in EPL definitions.
Abstract:
INTRODUCTION There is a large range in the reported prevalence of end plate lesions (EPLs), sometimes referred to as Schmorl's nodes, in the general population (3.8-76%). One possible reason for this large range is the differences in definitions used by authors. Previous research has suggested that EPLs may be a primary disturbance of growth plates that leads to the onset of scoliosis. The aim of this study was to develop a technique to measure the size, prevalence and location of EPLs on Computed Tomography (CT) images of scoliosis patients in a consistent manner. METHODS A detection algorithm was developed and applied to measure EPLs in five adolescent females with idiopathic scoliosis (average age 15.1 years, average major Cobb angle 60°). In this algorithm, the EPL definition was based on the lesion depth, the distance from the edge of the vertebral body, and the gradient of the lesion edge. Existing low-dose CT scans of the patients' spines were segmented semi-automatically to extract 3D vertebral endplate morphology. Manual sectioning of any attachments between posterior elements of adjacent vertebrae and, if necessary, endplates was carried out before the automatic algorithm was used to determine the presence and position of EPLs. RESULTS EPLs were identified in 15 of the 170 (8.8%) endplates analysed, with an average depth of 3.1 mm. 73% of the EPLs (11/15) were seen in the lumbar spine. A sensitivity study demonstrated that the algorithm was most sensitive to changes in the minimum gradient required at the lesion edge. CONCLUSION An imaging analysis technique for consistent measurement of the prevalence, location and size of EPLs on CT images has been developed. Although the technique was tested on scoliosis patients, it can be used to analyse other populations without observer errors in EPL definitions.
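The three-part lesion definition (depth, distance from the vertebral-body edge, edge gradient) lends itself to a simple thresholding pipeline. Below is a minimal sketch operating on a 2-D endplate height map; the function name, thresholds, and reference-surface choice are illustrative assumptions, not the authors' implementation.

```python
# Sketch of a threshold-based endplate-lesion (EPL) detector in the
# spirit of the abstract: a candidate lesion must (1) be deep enough,
# (2) sit away from the vertebral-body edge, and (3) have a steep
# enough rim (minimum gradient). All thresholds are illustrative.
import numpy as np
from scipy import ndimage

def detect_epls(height_map: np.ndarray,
                body_mask: np.ndarray,
                min_depth_mm: float = 2.0,
                min_edge_dist_mm: float = 3.0,
                min_edge_gradient: float = 0.5,
                pixel_mm: float = 1.0):
    """Return labelled lesion regions on an endplate height map (mm)."""
    # Depth relative to a smoothed reference endplate surface.
    reference = ndimage.median_filter(height_map, size=15)
    depth = reference - height_map

    # Distance (mm) of each pixel from the edge of the vertebral body.
    edge_dist = ndimage.distance_transform_edt(body_mask) * pixel_mm

    # Gradient magnitude of the endplate surface (mm per mm).
    gy, gx = np.gradient(height_map, pixel_mm)
    grad = np.hypot(gx, gy)

    candidate = (depth > min_depth_mm) & (edge_dist > min_edge_dist_mm)
    labels, n = ndimage.label(candidate)

    # Keep only candidates whose rim reaches the minimum gradient.
    kept = [i for i in range(1, n + 1)
            if grad[labels == i].max() >= min_edge_gradient]
    return labels, kept

# Hypothetical usage on a synthetic 100x100 endplate with one 3 mm depression.
hm = np.zeros((100, 100))
hm[40:50, 40:50] = -3.0
mask = np.zeros((100, 100), dtype=bool)
mask[1:-1, 1:-1] = True
labels, lesions = detect_epls(hm, mask)
print(lesions)  # -> [1]: one lesion detected
```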