88 results for "Aplicaciones web"
Abstract:
Previous studies suggest that marketing strategy is developed and used to mobilise and configure the actions of firm actors, creating a set of stabilising activities focused on the firm–customer dyad. Destabilising forces precipitated by the Internet and associated digital technologies involving contention and disruption by multiple actors are much less prevalent in the marketing literature. The central point we advance is that rather than marketing strategy being a controlled and stabilising force for firms in their relationships with customers, it can often lead to socially produced spaces where consumers and, importantly, other multiple actors form a social movement to actively attempt to destabilise it and contest its legitimacy. Using an innovative research approach, this study shows how social movements proactively enrol and mobilise a wide range of relevant actors into a network of influence. Critical to this are rhetorical strategies, acting as important levers in attempts to destabilise and delegitimise a dominant firm's marketing strategy.
Abstract:
Food webs are the complex networks of trophic interactions that stoke the metabolic fires of life. To understand what structures these interactions in natural communities, ecologists have developed simple models to capture their main architectural features. However, apparently realistic food webs can be generated by models invoking either predator-prey body-size hierarchies or evolutionary constraints as structuring mechanisms. As a result, this approach has not conclusively revealed which factors are the most important. Here we cut to the heart of this debate by directly comparing the influence of phylogeny and body size on food web architecture. Using data from 13 food webs compiled by direct observation, we confirm the importance of both factors. Nevertheless, phylogeny dominates in most networks. Moreover, path analysis reveals that the size-independent direct effect of phylogeny on trophic structure typically outweighs the indirect effect that could be captured by considering body size alone. Furthermore, the phylogenetic signal is asymmetric: closely related species overlap in their set of consumers far more than in their set of resources. This is at odds with several food web models, which take only the viewpoint of consumers when assigning interactions. The echo of evolutionary history clearly resonates through current food webs, with implications for our theoretical models and conservation priorities.
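The asymmetry described above can be made concrete with the Jaccard index: for a pair of closely related species, overlap in consumers (who eats them) is compared with overlap in resources (what they eat). The toy food web below is invented for illustration and is not taken from the 13 compiled webs.

```python
# Sketch of the asymmetric phylogenetic signal: consumer-set overlap vs
# resource-set overlap for two related species, using a hypothetical web.

def jaccard(a, b):
    """Jaccard similarity of two sets (|intersection| / |union|)."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

# Hypothetical trophic links: predator -> set of prey (its resources).
eats = {
    "perch": {"mayfly", "shrimp"},
    "trout": {"minnow", "shrimp", "snail"},
    "pike":  {"perch", "trout"},
    "heron": {"perch", "trout"},
}

def resources(sp):
    return eats.get(sp, set())

def consumers(sp):
    return {pred for pred, prey in eats.items() if sp in prey}

# Two related species share both their predators but little of their diet.
res_overlap = jaccard(resources("perch"), resources("trout"))
con_overlap = jaccard(consumers("perch"), consumers("trout"))
print(f"resource overlap: {res_overlap:.2f}")  # 0.25
print(f"consumer overlap: {con_overlap:.2f}")  # 1.00
```

In this toy example the consumer sets coincide while the resource sets barely overlap, mirroring the asymmetry the study reports.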
Abstract:
A software system, recently developed by the authors for the efficient capturing, editing, and delivery of audio-visual web lectures, was used to create a series of lectures for a first-year undergraduate course in Dynamics. These web lectures were developed to serve as an extra study resource for students attending lectures, not as a replacement. A questionnaire was produced to obtain feedback from students. The overall response was very favorable, and numerous requests were made for other lecturers to adopt this technology. Despite the students' approval of this added resource, there was no significant improvement in overall examination performance.
Abstract:
A combination of experiments and non-linear finite element analyses is used to investigate the effect of offset web holes on the web crippling strength of cold-formed steel channel sections under the end-two-flange (ETF) loading condition; the cases of both flanges fastened and unfastened to the support are considered. The web holes are located at the mid-depth of the sections, with a horizontal clear distance between the web holes and the near edge of the bearing plate. Finite element analysis results are compared against the laboratory test results; good agreement was obtained in terms of both strength and failure modes. A parametric study was then undertaken to investigate the effect of both the position of the holes in the web and the cross-section sizes on the web crippling strength of the channel sections. It was demonstrated that the main factors influencing the web crippling strength are the ratio of the hole depth to the depth of the web, and the ratio of the distance from the edge of the bearing plate to the flat depth of the web. Design recommendations in the form of web crippling strength reduction factors are proposed in this study.
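A reduction factor of the kind proposed above is applied multiplicatively to the strength of the section without a hole, as a function of the two governing ratios. The linear form and the coefficients below are placeholders for illustration only; the calibrated design expressions come from the paper's parametric study.

```python
# Minimal sketch of applying a web crippling strength reduction factor.
# The functional form and the constants alpha, beta, gamma are hypothetical.

def reduction_factor(a, h, x, alpha=0.9, beta=0.6, gamma=0.05):
    """Reduction factor R for a channel section with an offset web hole.

    a : hole depth
    h : flat depth of the web
    x : clear distance from the hole to the edge of the bearing plate
    Strength with hole = R * strength without hole, capped at 1.0.
    """
    r = alpha * (1.0 - beta * (a / h)) + gamma * (x / h)
    return min(r, 1.0)

# A deeper hole (larger a/h) lowers the factor; a hole further from the
# bearing plate (larger x/h) raises it, consistent with the trends reported.
print(reduction_factor(a=40, h=100, x=50))
```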
Abstract:
With the rapid expansion of the internet and the increasing demand on Web servers, many techniques have been developed to overcome servers' hardware performance limitations. Web server mirroring is one such technique, in which a number of servers carrying the same "mirrored" set of services are deployed. Client access requests are then distributed over the set of mirrored servers to even out the load. In this paper we present a generic reference software architecture for load balancing over mirrored web servers. The architecture was designed adopting the NaSr architectural style [1] and is described using the ADLARS [2] architecture description language. With minimal effort, different tailored product architectures can be generated from the reference architecture to serve different network protocols and server operating systems. An example product system is described and a sample Java implementation is presented.
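The load-distribution idea above can be sketched as a simple round-robin dispatcher over a set of mirrored servers. The server names are invented, and the paper's reference architecture (NaSr style, ADLARS description) covers far more variation points, such as protocols and operating systems, than this minimal sketch.

```python
# Minimal round-robin load balancer over mirrored servers (illustrative only).

from itertools import cycle

class RoundRobinBalancer:
    """Distributes client requests evenly over a fixed set of mirrors."""

    def __init__(self, servers):
        self._ring = cycle(servers)  # endless rotation through the mirrors

    def route(self, request):
        """Return (server, request): the mirror chosen for this request."""
        return next(self._ring), request

balancer = RoundRobinBalancer(["mirror-1", "mirror-2", "mirror-3"])
routed = [balancer.route(f"GET /page{i}")[0] for i in range(6)]
print(routed)
# ['mirror-1', 'mirror-2', 'mirror-3', 'mirror-1', 'mirror-2', 'mirror-3']
```

Round-robin is only the simplest policy; a product architecture generated from the reference architecture could substitute least-connections or weighted schemes behind the same `route` interface.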
Abstract:
A web-service is a remote computational facility which is made available for general use by means of the internet. An orchestration is a multi-threaded computation which invokes remote services. In this paper game theory is used to analyse the behaviour of orchestration evaluations when underlying web-services are unreliable. Uncertainty profiles are proposed as a means of defining bounds on the number of service failures that can be expected during an orchestration evaluation. An uncertainty profile describes a strategic situation that can be analysed using a zero-sum angel-daemon game with two competing players: an angel a, whose objective is to minimise damage to an orchestration, and a daemon d, who acts in a destructive fashion. An uncertainty profile is assessed using the value of its angel-daemon game. It is shown that uncertainty profiles form a partial order which is monotonic with respect to assessment.
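The assessment step above can be sketched for the pure-strategy case: rows are the angel's placements of its share of failures, columns the daemon's, and each entry is the resulting damage to the orchestration. The 2x2 damage matrix is invented for illustration; in general the value of a zero-sum game may require mixed strategies.

```python
# Minimal sketch of assessing an uncertainty profile via the value of its
# zero-sum angel-daemon game, assuming a pure saddle point exists.

def game_value(damage):
    """Pure-strategy value of the zero-sum angel-daemon game.

    damage[i][j] = damage when the angel plays row i and the daemon column j.
    The angel minimises its worst case; the daemon maximises its guarantee.
    """
    angel = min(max(row) for row in damage)  # angel's minimax security level
    daemon = max(min(damage[i][j] for i in range(len(damage)))
                 for j in range(len(damage[0])))  # daemon's maximin level
    if angel != daemon:
        raise ValueError("no pure saddle point; mixed strategies are needed")
    return angel

# Hypothetical profile: each player distributes failures over two zones.
damage = [[3, 1],
          [4, 2]]
value = game_value(damage)
print(value)  # 3 -- the assessment of this uncertainty profile
```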
Abstract:
We consider the behaviour of a set of services in a stressed web environment where performance patterns may be difficult to predict. In stressed environments the performances of some providers may degrade while the performances of others, with elastic resources, may improve. The allocation of web-based providers to users (brokering) is modelled by a strategic non-cooperative angel-daemon game with risk profiles. A risk profile specifies a bound on the number of unreliable service providers within an environment without identifying the names of these providers. Risk profiles offer a means of analysing the behaviour of broker agents which allocate service providers to users. A Nash equilibrium is a fixed point of such a game in which no user can locally improve their choice of provider – thus, a Nash equilibrium is a viable solution to the provider/user allocation problem. Angel-daemon games provide a means of reasoning about stressed environments and offer the possibility of designing brokers using risk profiles and Nash equilibria.
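The Nash-equilibrium condition above can be sketched as a unilateral-deviation check: an allocation of users to providers is stable when no user can lower its own cost by switching provider alone. The congestion-style cost model and the provider names are illustrative assumptions, not the paper's model.

```python
# Minimal sketch of checking whether a provider allocation is a Nash
# equilibrium, assuming latency grows linearly with a provider's load.

from collections import Counter

base_latency = {"fast": 1.0, "slow": 3.0}  # hypothetical providers

def cost(provider, load):
    """Latency a user sees when `load` users share `provider`."""
    return base_latency[provider] * load

def is_nash(allocation):
    """True if no user improves by unilaterally switching provider."""
    loads = Counter(allocation)
    for p in allocation:
        current = cost(p, loads[p])
        for q in base_latency:
            if q != p and cost(q, loads[q] + 1) < current:
                return False  # that user would deviate to q
    return True

print(is_nash(["fast", "fast", "slow"]))  # True: nobody gains by moving
print(is_nash(["fast", "slow", "slow"]))  # False: a "slow" user would switch
```

A broker could use such a check to steer allocations toward stable outcomes under a given risk profile.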
Abstract:
Potentially inappropriate prescribing in older people is common in primary care and can result in increased morbidity, adverse drug events, hospitalizations and mortality. In Ireland, 36% of those aged 70 years or over received at least one potentially inappropriate medication, with an associated expenditure of over €45 million. The main objective of this study is to determine the effectiveness and acceptability of a complex, multifaceted intervention in reducing the level of potentially inappropriate prescribing in primary care.
Abstract:
Matching query interfaces is a crucial step in data integration across multiple Web databases. The problem is closely related to schema matching, which typically exploits different features of schemas. Relying on any single schema feature is not sufficient. We propose an evidential approach to combining multiple matchers using the Dempster-Shafer theory of evidence. First, our approach views the match results of an individual matcher as a source of evidence that provides a level of confidence on the validity of each candidate attribute correspondence. Second, it combines multiple sources of evidence to obtain a combined mass function that represents the overall level of confidence, taking into account the match results of different matchers. Our combination mechanism requires no weighting parameters, so none need to be set or tuned. Third, it selects the top k attribute correspondences of each source attribute from the target schema based on the combined mass function. Finally, it uses heuristics to resolve any conflicts between the attribute correspondences of different source attributes. Our experimental results show that our approach is highly accurate and effective.
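The evidence-combination step above can be sketched with Dempster's rule: each matcher's output is read as a mass function over the hypotheses that a candidate attribute correspondence is valid (V) or invalid (I), with mass on the whole frame expressing ignorance, and the rule combines them without any weighting parameters. The two matchers and their mass values below are invented for illustration.

```python
# Minimal sketch of combining two matchers' evidence by Dempster's rule.

def dempster_combine(m1, m2):
    """Combine two mass functions (dict: frozenset -> mass) by Dempster's rule."""
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb  # mass landing on contradictory pairs
    # Normalise by the non-conflicting mass.
    return {s: mass / (1.0 - conflict) for s, mass in combined.items()}

V, I = frozenset({"valid"}), frozenset({"invalid"})
theta = V | I  # frame of discernment: mass here expresses ignorance

name_matcher = {V: 0.6, theta: 0.4}           # e.g. string-similarity evidence
type_matcher = {V: 0.5, I: 0.2, theta: 0.3}   # e.g. data-type evidence

m = dempster_combine(name_matcher, type_matcher)
print(f"combined belief that the correspondence is valid: {m[V]:.3f}")
```

Because both sources independently lean towards validity, the combined mass on V exceeds either source's alone, which is the behaviour the approach exploits when ranking the top k correspondences.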
Abstract:
Web sites that rely on databases for their content are now ubiquitous. Query result pages are dynamically generated from these databases in response to user-submitted queries. Automatically extracting structured data from query result pages is a challenging problem, as the structure of the data is not explicitly represented. While humans have shown good intuition in visually understanding data records on a query result page as displayed by a web browser, no existing approach to data record extraction has made full use of this intuition. We propose a novel approach, in which we make use of the common sources of evidence that humans use to understand data records on a displayed query result page. These include structural regularity, and visual and content similarity between data records displayed on a query result page. Based on these observations we propose new techniques that can identify each data record individually, while ignoring noise items, such as navigation bars and adverts. We have implemented these techniques in a software prototype, rExtractor, and tested it using two datasets. Our experimental results show that our approach achieves significantly higher accuracy than previous approaches. Furthermore, it establishes the case for the use of vision-based algorithms in the context of data extraction from web sites.
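The structural-regularity cue above can be sketched very simply: data records on a result page tend to repeat the same tag structure, so items whose structure repeats are kept as records while one-off items (a navigation bar, an advert) are discarded as noise. The page items and their tag paths below are invented; rExtractor itself also exploits visual and content similarity, which this sketch omits.

```python
# Minimal sketch of record extraction by structural regularity alone.

from collections import defaultdict

page_items = [
    {"text": "Blue Widget $9.99",   "tags": ("div", "h3", "span")},
    {"text": "Red Widget $12.50",   "tags": ("div", "h3", "span")},
    {"text": "Green Widget $7.25",  "tags": ("div", "h3", "span")},
    {"text": "Home | About | Cart", "tags": ("nav", "ul")},      # noise
    {"text": "Buy now! 50% off!",   "tags": ("aside", "img")},   # noise
]

def extract_records(items, min_repeats=2):
    """Keep items whose tag structure repeats; singletons are treated as noise."""
    groups = defaultdict(list)
    for item in items:
        groups[item["tags"]].append(item)
    return [it for grp in groups.values() if len(grp) >= min_repeats for it in grp]

records = extract_records(page_items)
print([r["text"] for r in records])
# ['Blue Widget $9.99', 'Red Widget $12.50', 'Green Widget $7.25']
```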