Abstract:
The question can no longer just be whether “art and social practice” or creative forms of activism are part of a larger neoliberal agenda, nor whether they are potentially radical in their conception, delivery or consumption. The question also becomes: what are the effects of social practice art and design for the artists, institutions, and the publics they elicit in public and private spaces; that is, how can we consider such artworks differently? I argue that the dilution of social practices’ potentially radical interventions into cultural processes, and their absorption into larger neoliberal agendas, limits how, as Jacques Rancière might argue, they can intervene in the “distribution of the sensible.” I will use a case study example from The Center for Tactical Magic, an artist group from the San Francisco Bay Area.
Abstract:
Metaphors are a common instrument of human cognition, activated when seeking to make sense of novel and abstract phenomena. In this article we assess some of the values and assumptions encoded in the framing of the term big data, drawing on the framework of conceptual metaphor. We first discuss the terms data and big data and the meanings historically attached to them by different usage communities and then proceed with a discourse analysis of Internet news items about big data. We conclude by characterizing two recurrent framings of the concept: as a natural force to be controlled and as a resource to be consumed.
Abstract:
The occurrence of extreme water level events along low-lying, highly populated and/or developed coastlines can lead to devastating impacts on coastal infrastructure. It is therefore very important that the probabilities of extreme water levels are accurately evaluated to inform flood and coastal management and future planning. The aim of this study was to provide estimates of present-day extreme total water level exceedance probabilities around the whole coastline of Australia, arising from combinations of mean sea level, astronomical tide and storm surges generated by both extra-tropical and tropical storms, but exclusive of surface gravity waves. The study was undertaken in two main stages. In the first stage, a high-resolution (~10 km along the coast) depth-averaged hydrodynamic model was configured for the whole coastline of Australia using the Danish Hydraulic Institute's MIKE 21 modelling suite. The model was forced with astronomical tidal levels, derived from the TPXO7.2 global tidal model, and meteorological fields, from the US National Centers for Environmental Prediction's global reanalysis, to generate a 61-year (1949 to 2009) hindcast of water levels. This model output was validated against measurements from 30 tide gauge sites around Australia with long records. At each of the model grid points located around the coast, time series of annual maxima and the several highest water levels for each year were derived from the multi-decadal water level hindcast and fitted to extreme value distributions to estimate exceedance probabilities. Stage 1 provided a reliable estimate of the present-day total water level exceedance probabilities around southern Australia, which is mainly impacted by extra-tropical storms. However, because the meteorological fields used to force the hydrodynamic model only weakly include the effects of tropical cyclones, the resultant water level exceedance probabilities were underestimated around western, northern and north-eastern Australia at higher return periods. Even if the resolution of the meteorological forcing were adequate to represent tropical cyclone-induced surges, a multi-decadal period yields too few tropical cyclones to enable the use of traditional extreme value extrapolation techniques. Therefore, in the second stage of the study, a statistical model of tropical cyclone tracks and central pressures was developed using historic observations. This model was then used to generate synthetic events representing 10,000 years of cyclone activity for the Australian region, with characteristics based on the observed tropical cyclones of the last ~40 years. Wind and pressure fields, derived from these synthetic events using analytical profile models, were used to drive the hydrodynamic model to predict the associated storm surge response. A random time period during the tropical cyclone season was chosen, and astronomical tidal forcing for this period was included to account for non-linear interactions between the tidal and surge components. For each model grid point around the coast, annual maximum total water levels for these synthetic events were calculated and used to estimate exceedance probabilities. The exceedance probabilities from stages 1 and 2 were then combined to provide a single estimate of present-day extreme water level probabilities around the whole coastline of Australia.
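As a rough illustration of the extreme value analysis step described above (fitting annual maxima and converting the fit to exceedance probabilities and return levels), the sketch below fits a generalised extreme value distribution to a synthetic series of annual maximum water levels. The data, the use of scipy's genextreme, and the chosen levels are illustrative assumptions, not the study's actual implementation.

```python
# Minimal sketch: fit a GEV distribution to annual maximum water levels
# and estimate exceedance probabilities / return levels (illustrative only).
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(0)
# Hypothetical 61 years of annual maximum water levels (metres) at one grid point
annual_maxima = 1.8 + 0.3 * rng.gumbel(size=61)

# Fit the GEV distribution (scipy's shape parameter c corresponds to -xi)
c, loc, scale = genextreme.fit(annual_maxima)

# Annual exceedance probability of a given level, e.g. 2.5 m
p_exceed = genextreme.sf(2.5, c, loc=loc, scale=scale)

# Return level for a chosen return period, e.g. 100 years
return_period = 100.0
level_100yr = genextreme.isf(1.0 / return_period, c, loc=loc, scale=scale)

print(f"P(annual max > 2.5 m) = {p_exceed:.4f}")
print(f"100-year return level = {level_100yr:.2f} m")
```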
Abstract:
language (such as C++ and Java). The model used allows watermarks to be inserted at three “orthogonal” levels. At the first level, watermarks are injected into objects. Second-level watermarking is used to select proper variants of the source code. The third level uses a transition function that can be used to generate copies with different functionalities. Generic watermarking schemes are presented and their security is discussed.
Abstract:
This paper presents a novel place recognition algorithm inspired by the recent discovery of overlapping and multi-scale spatial maps in the rodent brain. We mimic this hierarchical framework by training arrays of Support Vector Machines to recognize places at multiple spatial scales. Place match hypotheses are then cross-validated across all spatial scales, a process which combines the spatial specificity of the finest spatial map with the consensus provided by broader mapping scales. Experiments on three real-world datasets, including a large robotics benchmark, demonstrate that mapping over multiple scales uniformly improves place recognition performance over a single-scale approach without sacrificing localization accuracy. We present an analysis that illustrates how matching over multiple scales leads to better place recognition performance and discuss several promising areas for future investigation.
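A heavily simplified sketch of the multi-scale matching idea is given below: one classifier is trained per spatial scale, and a place hypothesis is cross-checked by combining per-scale scores. The synthetic descriptors, the use of scikit-learn's SVC, and the log-score combination rule are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch: one SVM per spatial scale, with place hypotheses
# cross-checked by combining scores across scales (illustrative only).
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n_places, dim, n_train = 5, 16, 30
scales = [1.0, 2.0, 4.0]  # hypothetical spatial scales (larger = coarser)

# Hypothetical place descriptors: each place has a "true" centre; coarser
# scales see noisier descriptors, so neighbouring places overlap more.
centres = rng.standard_normal((n_places, dim))

def sample(place, noise):
    return centres[place] + noise * rng.standard_normal(dim)

# Train one Support Vector Machine per spatial scale
classifiers = {}
for s in scales:
    X = np.array([sample(p, 0.3 * s) for p in range(n_places) for _ in range(n_train)])
    y = np.array([p for p in range(n_places) for _ in range(n_train)])
    classifiers[s] = SVC(probability=True).fit(X, y)

# Query: the fine scale is spatially specific, the coarse scales provide
# consensus; combining log-probabilities cross-validates the hypothesis.
query = sample(3, 0.3)
log_scores = np.zeros(n_places)
for s in scales:
    log_scores += np.log(classifiers[s].predict_proba(query[None, :])[0] + 1e-9)

print("best place hypothesis:", int(np.argmax(log_scores)))
```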
Abstract:
The formation of clearly separated vertical graphene nanosheets on a silicon nanograss support is demonstrated. The plasma-enabled, two-stage, mask-free process produced self-organized vertical graphenes of a few carbon layers (as confirmed by advanced microanalysis), prominently oriented in the substrate center–substrate edge direction. It is shown that the width of the alignment zone depends on the substrate conductivity, and thus that the electric field in the vicinity of the growth surface is responsible for the graphene alignment. This finding is confirmed by Monte Carlo simulations of the ion flux distribution in the silicon nanograss pattern.
Abstract:
Albumin binds low-molecular-weight molecules, including proteins and peptides, which then acquire its longer half-life, thereby protecting the bound species from kidney clearance. We developed an experimental method to isolate albumin in its native state and then to identify [by mass spectrometry (MS) sequencing] the corresponding bound low-molecular-weight molecules. We used this method to analyze pooled sera from a human disease study set (high-risk persons without cancer, n = 40; stage I ovarian cancer, n = 30; stage III ovarian cancer, n = 40) to demonstrate the feasibility of this approach as a discovery method. Methods: Albumin was isolated by solid-phase affinity capture under native binding and washing conditions. Captured albumin-associated proteins and peptides were separated by gel electrophoresis and subjected to iterative MS sequencing by microcapillary reversed-phase tandem MS. Selected albumin-bound protein fragments were confirmed in human sera by Western blotting and immunocompetition. Results: In total, 1208 individual protein sequences were predicted from all 3 pools. The predicted sequences were largely fragments derived from proteins with diverse biological functions. More than one third of these fragments were identified by multiple peptide sequences, and more than one half of the identified species were in vivo cleavage products of parent proteins. An estimated 700 serum peptides or proteins were predicted that had not been reported in previous serum databases. Several proteolytic fragments of larger molecules that may be cancer-related were confirmed immunologically in blood by Western blotting and peptide immunocompetition. BRCA2, a 390-kDa low-abundance nuclear protein linked to cancer susceptibility, was represented in sera as a series of specific fragments bound to albumin. Conclusion: Carrier-protein harvesting provides a rich source of candidate peptides and proteins with potential diverse tissue and cellular origins that may reflect important disease-related information.
Abstract:
INTRODUCTION Calculating segmental (vertebral level-by-level) torso masses in Adolescent Idiopathic Scoliosis (AIS) patients allows the gravitational loading on the scoliotic spine during relaxed standing to be estimated. METHODS Existing low-dose CT scans were used to calculate vertebral level-by-level torso masses and joint moments occurring in the spine for a group of female AIS patients with right-sided thoracic curves. The image processing software ImageJ (v1.45, NIH, USA) was used to reconstruct the torso segments and subsequently measure the torso volume and mass corresponding to each vertebral level. Body segment masses for the head, neck and arms were taken from published anthropometric data. The intervertebral joint moment at each vertebral level was found by summing, over all torso segments above the required joint, the segment weight multiplied by its perpendicular distance to the centre of the disc. RESULTS AND DISCUSSION Twenty patients were included in this study, with a mean age of 15.0±2.7 years and a mean Cobb angle of 52±5.9°. The mean total trunk mass, as a percentage of total body mass, was 27.8 (SD 0.5)%. Mean segmental torso mass increased inferiorly from 0.6 kg at T1 to 1.5 kg at L5. The coronal plane joint moments during relaxed standing were typically 5-7 Nm at the apex of the curve (Figure 1), with the highest apex joint moment being 7 Nm. The CT scans were performed in the supine position, and curve magnitudes are known to be 7-10° smaller than those measured in standing [1]. Therefore, joint moments produced by gravity will be greater than those calculated here. CONCLUSIONS Coronal plane joint moments as high as 7 Nm can occur during relaxed standing in scoliosis patients, which may help to explain the mechanics of AIS progression. The body mass distributions calculated in this study can be used to estimate joint moments derived using other imaging modalities such as MRI, and subsequently to determine whether a relationship exists between joint moments and progressive vertebral deformity.
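For illustration, the joint moment calculation described in the METHODS section reduces to summing segment weight times lever arm over the segments above a joint. The sketch below uses invented masses and lever arms, not the study's data, purely to show the arithmetic.

```python
# Minimal sketch of the joint-moment calculation: moment at a joint =
# sum over all body/torso segments above it of (segment mass * g *
# perpendicular distance to the disc centre). All values below are
# invented for illustration and are not the study's measurements.
G = 9.81  # gravitational acceleration, m/s^2

# (segment, mass in kg, coronal-plane lever arm to the disc centre in m)
segments_above_joint = [
    ("head + neck + arms", 8.0, 0.04),
    ("T1 torso slice",     0.6, 0.05),
    ("T2 torso slice",     0.7, 0.06),
    ("T3 torso slice",     0.7, 0.07),
]

moment = sum(mass * G * lever for _, mass, lever in segments_above_joint)
print(f"coronal-plane joint moment = {moment:.1f} Nm")
```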
Abstract:
Adolescent idiopathic scoliosis (AIS) is a spinal deformity, which may require surgical correction by attaching rods to the patient’s spine using screws inserted into the vertebrae. Complication rates for deformity correction surgery are unacceptably high. Determining an achievable correction without overloading the adjacent spinal tissues or implants requires an understanding of the mechanical interaction between these components. We have developed novel patient-specific modelling software to create individualized finite element models (FEMs) representing the thoracolumbar spine and ribcage of scoliosis patients. We are using these models to better understand the biomechanics of spinal deformity correction.
Abstract:
The Internet and its widespread use for multimedia document distribution have put the copyright issue in a completely new setting. Multimedia documents, specifically those installed on a web page, are no longer passive, as they typically include active applets. Copyright protection safeguards the intellectual property (IP) of multimedia documents, which are either sold or distributed free of charge. In this chapter, the basic tools for copyright protection are discussed. First, general concepts and the vocabulary used in the copyright protection of multimedia documents are introduced. Next, a taxonomy of watermarking and fingerprinting techniques is studied. This part is concluded by a review of the literature dealing with IP security. The main part of the chapter discusses the generic watermarking scheme and illustrates it on three specific examples: collusion-free watermarking, spread spectrum watermarking, and software fingerprinting. Future trends and conclusions close the chapter.
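To make the spread-spectrum idea mentioned above concrete, the sketch below embeds a pseudo-random watermark into the largest DCT coefficients of a signal and detects it by correlation, broadly in the spirit of classic spread-spectrum watermarking. The parameters, the multiplicative embedding rule, and the use of scipy's DCT are assumptions for illustration, not the chapter's scheme.

```python
# Minimal sketch of spread-spectrum watermarking (illustrative only):
# embed a pseudo-random sequence into the largest DCT coefficients of a
# signal, then detect it by correlating the extracted coefficients with
# the watermark.
import numpy as np
from scipy.fft import dct, idct

rng = np.random.default_rng(42)
signal = rng.standard_normal(1024)        # stand-in for image/audio data
alpha, n_coeffs = 0.1, 64                 # embedding strength, coefficients used

coeffs = dct(signal, norm="ortho")
idx = np.argsort(np.abs(coeffs))[-n_coeffs:]   # "significant" coefficients
watermark = rng.choice([-1.0, 1.0], size=n_coeffs)

# Multiplicative embedding: v' = v * (1 + alpha * w)
marked_coeffs = coeffs.copy()
marked_coeffs[idx] *= (1.0 + alpha * watermark)
marked_signal = idct(marked_coeffs, norm="ortho")

# Detection: correlate the re-extracted coefficients with the watermark
extracted = dct(marked_signal, norm="ortho")[idx]
response = np.dot(extracted / coeffs[idx] - 1.0, watermark) / alpha
print("detector response (close to n_coeffs if the mark is present):",
      round(response, 1))
```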
Abstract:
Cells are the fundamental building block of plant-based food materials, and many of the structural changes arising during food processing can fundamentally be described as a function of the deformations of the cellular structure. In food dehydration, bulk-level changes in porosity, density and shrinkage can be better explained using cellular-level deformations initiated by the removal of moisture from the cellular fluid. A novel approach is used in this research to model the cell fluid with Smoothed Particle Hydrodynamics (SPH) and the cell walls with the Discrete Element Method (DEM), techniques that are known to be robust in treating complex fluid and solid mechanics. High Performance Computing (HPC) is used to meet the computational demands of the simulations. In contrast to the deficiencies of state-of-the-art drying models, the current model is found to be robust in replicating the drying mechanics of plant-based food materials at the microscale.
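For a flavour of the SPH side of such a coupled SPH-DEM approach, the sketch below evaluates particle densities with a standard cubic-spline kernel. The kernel, particle layout, and parameters are generic SPH textbook assumptions, not the authors' drying model.

```python
# Minimal sketch of the SPH density summation rho_i = sum_j m_j * W(|r_i - r_j|, h)
# using a 2D cubic-spline kernel (generic SPH, not the paper's drying model).
import numpy as np

def cubic_spline_kernel_2d(r, h):
    """Standard 2D cubic spline kernel with support radius 2h."""
    sigma = 10.0 / (7.0 * np.pi * h**2)
    q = r / h
    w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
        np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))
    return sigma * w

rng = np.random.default_rng(3)
positions = rng.uniform(0.0, 1.0, size=(200, 2))   # hypothetical fluid particles
mass, h = 1.0 / 200, 0.08                          # particle mass, smoothing length

# Density at each particle: sum of kernel-weighted masses of all neighbours
diff = positions[:, None, :] - positions[None, :, :]
dist = np.linalg.norm(diff, axis=-1)
density = (mass * cubic_spline_kernel_2d(dist, h)).sum(axis=1)
print("mean SPH density estimate:", density.mean())
```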
Abstract:
Social Network (SN) users have various privacy requirements to protect their information. To address this issue, a six-stage thematic analysis of scholarly articles related to SN user privacy concerns was synthesized. This research then combines mixed methods research, employing the strengths of quantitative and qualitative research, to investigate general SN users and thus construct a new set of five primary and twenty-five secondary SN user privacy requirements. Such an approach has rarely been used to examine privacy requirements. Factor analysis results show superior agreement with theoretical predictions and significant improvement over previous alternative models of SN user privacy requirements. The research presented here has the potential to provide for the development of more sophisticated privacy controls which will increase the ability of SN users to specify their rights in SNs and to determine the protection of their own SN data.
Abstract:
Organisations are constantly seeking new ways to improve operational efficiency. This research study investigates a novel way to identify potential efficiency gains in business operations by observing how they were carried out in the past and then exploring better ways of executing them, taking into account trade-offs between time, cost and resource utilisation. This paper demonstrates how these trade-offs can be incorporated into the assessment of alternative process execution scenarios by making use of a cost environment. A genetic algorithm-based approach is proposed to explore and assess alternative process execution scenarios, where the objective function is represented by a comprehensive cost structure that captures different process dimensions. Experiments conducted with different variants of the genetic algorithm evaluate the feasibility of the approach. The findings demonstrate that a genetic algorithm-based approach is able to use cost reduction as a way to identify improved execution scenarios in terms of reduced case durations and increased resource utilisation. The ultimate aim is to use the cost-related insights gained from such improved scenarios to put forward recommendations for reducing process-related cost within organisations.
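The sketch below shows, in highly simplified form, what a genetic algorithm with a composite objective over duration, monetary cost, and resource utilisation might look like. The chromosome encoding (a task-to-resource assignment), the weights, and the operators are invented for illustration and do not reflect the paper's implementation.

```python
# Minimal genetic-algorithm sketch for exploring alternative execution scenarios
# under a composite cost function (makespan, monetary cost, idle-time penalty).
# Encoding, data, and weights are invented for illustration only.
import random

N_TASKS, N_RESOURCES, POP, GENS = 10, 3, 40, 60
DURATION = [[random.uniform(1, 5) for _ in range(N_RESOURCES)] for _ in range(N_TASKS)]
RATE = [10.0, 15.0, 25.0]  # hypothetical hourly cost per resource

def cost(assign):
    """Weighted sum of makespan, monetary cost, and an idle-time (utilisation) penalty."""
    busy = [0.0] * N_RESOURCES
    money = 0.0
    for task, res in enumerate(assign):
        busy[res] += DURATION[task][res]
        money += DURATION[task][res] * RATE[res]
    makespan = max(busy)
    idle = sum(makespan - b for b in busy)  # unused capacity = poor utilisation
    return 1.0 * makespan + 0.1 * money + 0.5 * idle

def evolve():
    pop = [[random.randrange(N_RESOURCES) for _ in range(N_TASKS)] for _ in range(POP)]
    for _ in range(GENS):
        pop.sort(key=cost)
        parents = pop[: POP // 2]            # keep the cheaper half (elitism)
        children = []
        while len(children) < POP - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, N_TASKS)
            child = a[:cut] + b[cut:]        # one-point crossover
            if random.random() < 0.2:        # mutation: reassign one task
                child[random.randrange(N_TASKS)] = random.randrange(N_RESOURCES)
            children.append(child)
        pop = parents + children
    return min(pop, key=cost)

best = evolve()
print("best assignment:", best, "cost:", round(cost(best), 2))
```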
Abstract:
This paper is concerned with how a localised and energy-constrained robot can maximise its time in the field by taking paths and tours that minimise its energy expenditure. A significant component of a robot's energy is expended on mobility and is a function of terrain traversability. We estimate traversability online from data sensed by the robot as it moves, and use this to generate maps, explore and ultimately converge on minimum energy tours of the environment. We provide results of detailed simulations and parameter studies that show the efficacy of this approach for a robot moving over terrain with unknown traversability as well as a number of a priori unknown hard obstacles.
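To illustrate the idea of minimum-energy paths over terrain of varying traversability, the sketch below runs Dijkstra's algorithm on a grid whose step costs are derived from a per-cell traversability estimate. The grid, the mapping from traversability to energy, and the parameter values are assumptions for illustration, not the paper's method.

```python
# Minimal sketch: Dijkstra over a grid whose step costs approximate the
# energy needed to enter terrain of varying traversability (illustrative only).
import heapq
import numpy as np

rng = np.random.default_rng(7)
H, W = 20, 20
traversability = rng.uniform(0.2, 1.0, size=(H, W))  # 1 = easy, 0.2 = hard terrain
energy = 1.0 / traversability                         # hypothetical energy per cell entered

def min_energy_path(start, goal):
    dist, prev = {start: 0.0}, {}
    pq = [(0.0, start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            break
        if d > dist[(r, c)]:
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < H and 0 <= nc < W:
                nd = d + energy[nr, nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(pq, (nd, (nr, nc)))
    # Reconstruct the path from goal back to start
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[goal]

path, e = min_energy_path((0, 0), (H - 1, W - 1))
print(f"path length {len(path)} cells, estimated energy {e:.1f}")
```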
Abstract:
We propose and evaluate a novel methodology to identify the rolling shutter parameters of a real camera. We also present a model for the geometric distortion introduced when a moving camera with a rolling shutter views a scene. Unlike previous work, this model allows for arbitrary camera motion, including accelerations; it is exact rather than a linearization; and it allows for arbitrary camera projection models, for example fisheye or panoramic. We show the significance of the errors introduced by a rolling shutter for typical robot vision problems such as structure from motion, visual odometry and pose estimation.
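For intuition about why a rolling shutter distorts geometry under camera motion, the sketch below projects a static point with a camera in which each image row is exposed at a slightly later time, so a moving camera sees each row from a different pose. The pinhole intrinsics, the constant-velocity translation, and the fixed-point iteration for the exposure row are simplifying assumptions for illustration; the paper's model is more general (arbitrary motion and projection models).

```python
# Minimal sketch: with a rolling shutter, each image row is exposed at a
# slightly different time, so a moving camera sees each row from a different
# pose. Pinhole model and constant lateral velocity are simplifying assumptions.
import numpy as np

f, cx, cy = 500.0, 320.0, 240.0            # hypothetical pinhole intrinsics (pixels)
line_delay = 30e-6                         # hypothetical read-out time per row (s)
cam_velocity = np.array([2.0, 0.0, 0.0])   # camera translation velocity (m/s)

def project(P_world, cam_position):
    """Project a world point for a camera translated to cam_position (no rotation)."""
    X, Y, Z = P_world - cam_position
    return np.array([f * X / Z + cx, f * Y / Z + cy])

def rolling_shutter_project(P_world, n_iter=5):
    """Iteratively solve for the row (hence exposure time) at which the point is imaged."""
    v = cy                                         # initial guess: image centre row
    for _ in range(n_iter):
        t = v * line_delay                         # exposure time of the current row guess
        u, v = project(P_world, cam_velocity * t)  # camera pose at that time
    return np.array([u, v])

P = np.array([0.5, 0.3, 4.0])                      # a static world point (m)
print("global-shutter projection: ", project(P, np.zeros(3)))
print("rolling-shutter projection:", rolling_shutter_project(P))
```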