897 results for sicurezza, exploit, XSS, Beef, browser
Abstract:
Cost estimating is a key task within Quantity Surveyors' (QS) offices. Provision of an accurate estimate is vital to ensure that the objectives of the client are met by staying within the client's budget. Building Information Modelling (BIM) is an evolving technology that has gained attention in construction industries all over the world. Benefits from the use of BIM include cost and time savings if the processes used by the procurement team are adapted to maximise the benefits of BIM. BIM can be used by QSs to automate aspects of quantity take-off and the preparation of estimates, decreasing turnaround time and assisting in controlling errors and inaccuracies. The Malaysian government has decided to require the use of BIM for its projects beginning in 2016. However, slow uptake is reported in the use of BIM both within companies and to support collaboration within the Malaysian industry. It has been recommended that QSs start evaluating the impact of BIM on their practices. This paper reviews the perspectives of QSs in Malaysia towards the use of BIM to achieve more dependable results in their cost estimating practice. The objectives of this paper include identifying strategies for improving practice and potential adoption drivers that lead QSs to BIM usage in their construction projects. From the expert interviews, it was found that, despite still using traditional methods and not practising BIM, the interviewees have acquired some limited knowledge of BIM. There are some drivers that could potentially motivate them to employ BIM in their practices. These include client demands, innovation in traditional methods, speed in estimating costs, reduced time and costs, improvement in practices and self-awareness, efficiency in projects, and competition from other companies. The findings of this paper identify the potential drivers for encouraging Malaysian Quantity Surveyors to exploit BIM in their construction projects.
Abstract:
• EMT is important for embryonic development, wound healing, and placentation. • Some cancers appear to exploit this process for increased metastatic potential. • Therefore, this pathway is of great therapeutic interest in the treatment of cancer. The spread of cancer cells to distant organs represents a major clinical challenge in the treatment of cancer. Epithelial–mesenchymal transition (EMT) has emerged as a key regulator of metastasis in some cancers by conferring an invasive phenotype. As well as facilitating metastasis, EMT is thought to generate cancer stem cells and contribute to therapy resistance. Therefore, the EMT pathway is of great therapeutic interest in the treatment of cancer and could be targeted either to prevent tumor dissemination in patients at high risk of developing metastatic lesions or to eradicate existing metastatic cancer cells in patients with more advanced disease. In this review, we discuss approaches for the design of EMT-based therapies in cancer, summarize evidence for some of the proposed EMT targets, and review the potential advantages and pitfalls of each approach.
Abstract:
We present an algorithm for multiarmed bandits that achieves almost optimal performance in both stochastic and adversarial regimes without prior knowledge about the nature of the environment. Our algorithm is based on augmentation of the EXP3 algorithm with a new control lever in the form of exploration parameters that are tailored individually for each arm. The algorithm simultaneously applies the “old” control lever, the learning rate, to control the regret in the adversarial regime and the new control lever to detect and exploit gaps between the arm losses. This secures problem-dependent “logarithmic” regret when gaps are present without compromising on the worst-case performance guarantee in the adversarial regime. We show that the algorithm can exploit both the usual expected gaps between the arm losses in the stochastic regime and deterministic gaps between the arm losses in the adversarial regime. The algorithm retains “logarithmic” regret guarantee in the stochastic regime even when some observations are contaminated by an adversary, as long as on average the contamination does not reduce the gap by more than a half. Our results for the stochastic regime are supported by experimental validation.
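To make the two control levers concrete, here is a minimal Python sketch of an EXP3-style update in which the sampling distribution mixes exponential weights (governed by the learning rate) with a per-arm exploration term. This is an illustration of the structure described above, not the paper's algorithm: the function name, the uniform placeholder choice of the exploration parameters, and the `loss_fn` interface are assumptions, and the paper's gap-based tuning of the per-arm parameters is not reproduced here.

```python
import numpy as np

def exp3_per_arm_exploration(loss_fn, K, T, eta=None):
    """Sketch of an EXP3-style bandit with a per-arm exploration term.

    Illustrates the two 'control levers': a learning rate (eta) and
    per-arm exploration parameters (xi).  The tuning below is a simple
    placeholder, not the rule proposed in the paper.
    """
    if eta is None:
        eta = np.sqrt(np.log(K) / (K * T))   # standard worst-case learning rate
    cum_loss_est = np.zeros(K)               # importance-weighted loss estimates
    for t in range(1, T + 1):
        # Per-arm exploration parameters; a real implementation would derive
        # these from empirical gap estimates (uniform choice is a placeholder).
        xi = np.full(K, min(1.0 / (2 * K), np.sqrt(np.log(t + 1) / (t + 1))))
        # Exponential-weights distribution mixed with per-arm exploration.
        w = np.exp(-eta * (cum_loss_est - cum_loss_est.min()))
        p = (1 - xi.sum()) * w / w.sum() + xi
        arm = np.random.choice(K, p=p)
        loss = loss_fn(arm, t)                # observed loss in [0, 1]
        cum_loss_est[arm] += loss / p[arm]    # unbiased importance-weighted update
    return cum_loss_est
```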
Abstract:
Background: Bien Hoa and Da Nang airbases were bulk storages for Agent Orange during the Vietnam War and currently are the two most severe dioxin hot spots. Objectives: This study assesses the health risk of exposure to dioxin through foods for local residents living in seven wards surrounding these airbases. Methods: This study follows the Australian Environmental Health Risk Assessment Framework to assess the health risk of exposure to dioxin in foods. Forty-six pooled samples of commonly consumed local foods were collected and analyzed for dioxins/furans. A food frequency and Knowledge–Attitude–Practice survey was also undertaken at 1000 local households; various stakeholders were involved, and related publications were reviewed. Results: Total dioxin/furan concentrations in samples of local "high-risk" foods (e.g. free-range chicken meat and eggs, ducks, freshwater fish, snail and beef) ranged from 3.8 pg TEQ/g to 95 pg TEQ/g, while in "low-risk" foods (e.g. caged chicken meat and eggs, seafoods, pork, leafy vegetables, fruits, and rice) concentrations ranged from 0.03 pg TEQ/g to 6.1 pg TEQ/g. Estimated daily intake of dioxin for people who did not consume local high-risk foods ranged from 3.2 pg TEQ/kg bw/day to 6.2 pg TEQ/kg bw/day (Bien Hoa) and from 1.2 pg TEQ/kg bw/day to 4.3 pg TEQ/kg bw/day (Da Nang). Consumption of local high-risk foods resulted in extremely high dioxin daily intakes (60.4–102.8 pg TEQ/kg bw/day in Bien Hoa; 27.0–148.0 pg TEQ/kg bw/day in Da Nang). Conclusions: Consumption of local "high-risk" foods increases dioxin daily intakes to far above the WHO recommended TDI (1–4 pg TEQ/kg bw/day). Practicing appropriate preventive measures is necessary to significantly reduce exposure and health risk.
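As a rough illustration of how such daily-intake figures are obtained, the sketch below computes an estimated daily intake (EDI) from food concentrations and consumption amounts, normalised by body weight (EDI = Σ concentration × daily consumption ÷ body weight), which is the standard dietary-exposure calculation. The specific food items, consumption figures, and body weight in the example are hypothetical and are not taken from the study's survey data.

```python
def estimated_daily_intake(foods, body_weight_kg):
    """Generic estimated daily intake (EDI) in pg TEQ/kg bw/day.

    foods: iterable of (concentration_pg_teq_per_g, consumption_g_per_day) pairs.
    The items and amounts used below are illustrative, not the survey data.
    """
    total_pg_teq_per_day = sum(conc * grams for conc, grams in foods)
    return total_pg_teq_per_day / body_weight_kg

# Hypothetical diet in which one "high-risk" item dominates the intake.
example_diet = [
    (40.0, 50.0),   # free-range chicken: 40 pg TEQ/g * 50 g/day
    (0.5, 200.0),   # rice: 0.5 pg TEQ/g * 200 g/day
]
print(estimated_daily_intake(example_diet, body_weight_kg=55.0))  # ~38.2 pg TEQ/kg bw/day
```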
Abstract:
The overarching aim of biomimetic approaches to materials synthesis is to mimic simultaneously the structure and function of a natural material, in such a way that these functional properties can be systematically tailored and optimized. In the case of synthetic spider silk fibers, to date functionalities have largely focused on mechanical properties. A rapidly expanding body of literature documents this work, building on the emerging knowledge of structure–function relationships in native spider silks, and the spinning processes used to create them. Here, we describe some of the benchmark achievements reported until now, with a focus on the last five years. Progress in protein synthesis, notably the expression of full-size spidroins, has driven substantial improvements in synthetic spider silk performance. Spinning technology, however, lags behind and is a major limiting factor in biomimetic production. We also discuss applications for synthetic silk that primarily capitalize on its nonmechanical attributes, and that exploit the remarkable range of structures that can be formed from a synthetic silk feedstock.
Abstract:
Patents provide monopoly rights to patent holders. There are safeguards in the patent regime to ensure that the exclusive right of the patent holder is not misused. Compulsory licensing is one of the safeguards provided under TRIPS, under which the patent-granting state may allow a third party to exploit the invention without the patent holder's consent, upon terms and conditions decided by the government. This concept has existed since 1623 and was not introduced by TRIPS for the first time. But this mechanism has undergone significant changes, especially in the post-TRIPS era. The history of the evolution of compulsory licensing is one of the least explored areas of intellectual property law. This paper undertakes an analysis of different phases in the evolution of the compulsory licensing mechanism and sheds light on the reasons behind these developments, especially after TRIPS.
Abstract:
Many wireless applications demand a fast mechanism to detect the packet from a node with the highest priority ("best node") only, while packets from nodes with lower priority are irrelevant. In this paper, we introduce an extremely fast contention-based multiple access algorithm that selects the best node and requires only local information of the priorities of the nodes. The algorithm, which we call Variable Power Multiple Access Selection (VP-MAS), uses the local channel state information from the accessing nodes to the receiver, and maps the priorities onto the receive power. It is based on a key result that shows that mapping onto a set of discrete receive power levels is optimal, when the power levels are chosen to exploit packet capture that inherently occurs in a wireless physical layer. The VP-MAS algorithm adjusts the expected number of users that contend in each step and their respective transmission powers, depending on whether previous transmission attempts resulted in capture, idle channel, or collision. We also show how reliable information regarding the total received power at the receiver can be used to improve the algorithm by enhancing the feedback mechanism. The algorithm detects the packet from the best node in 1.5 to 2.1 slots, which is considerably lower than the 2.43 slot average achieved by the best algorithm known to date.
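The following Python sketch illustrates, in a much simplified form, the kind of mechanism the abstract describes: contenders map their priority metrics onto a small set of discrete receive-power levels, and idle/collision/capture feedback narrows the contended range until one packet is captured. This is not the VP-MAS algorithm itself; the splitting rule, capture model, power levels, and all parameter choices below are illustrative assumptions.

```python
import numpy as np

def best_node_selection_sketch(metrics, power_levels, capture_ratio=3.0,
                               max_slots=10):
    """Toy contention-based best-node selection with discrete power levels.

    metrics: per-node priority metrics in [0, 1]; the node with the highest
    metric is the "best node" the receiver wants to identify.
    """
    lo, hi = 0.5, 1.0                         # currently contended metric interval
    n_levels = len(power_levels)
    for slot in range(1, max_slots + 1):
        contenders = [i for i, m in enumerate(metrics) if lo <= m <= hi]
        if not contenders:                    # idle feedback: lower the interval
            lo, hi = max(0.0, lo - (hi - lo)), lo
            continue
        # Each contender maps its metric onto a discrete receive-power level,
        # higher metrics onto higher power, to make capture of the best likely.
        width = hi - lo + 1e-12
        rx = {i: power_levels[min(n_levels - 1,
                                  int((metrics[i] - lo) / width * n_levels))]
              for i in contenders}
        best = max(rx, key=rx.get)
        interference = sum(p for i, p in rx.items() if i != best)
        if len(contenders) == 1 or rx[best] >= capture_ratio * interference:
            return best, slot                 # capture: packet decoded
        lo = lo + (hi - lo) / 2               # collision feedback: keep upper half
    return None, max_slots

# Hypothetical usage: 20 nodes with random priorities and 4 power levels.
rng = np.random.default_rng(0)
node, slots = best_node_selection_sketch(rng.uniform(size=20).tolist(),
                                         power_levels=[1.0, 4.0, 16.0, 64.0])
print(node, slots)
```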
Abstract:
Effects of plant height on Fusarium crown rot (FCR) disease severity were investigated using 12 pairs of near-isogenic lines (NILs) for six different reduced height (Rht) genes in wheat. The dwarf isolines all gave better FCR resistance when compared with their respective tall counterparts, although the Rht genes involved in these NILs are located on several different chromosomes. Treating plants with exogenous gibberellin increased FCR severity as well as seedling lengths in all of the isolines tested. Analysis of the expression of several defense genes with known correlation with resistance to FCR pathogens among the Rht isolines following FCR inoculation indicated that the better resistance of the dwarf isolines was not due to enhanced defense gene induction. These results suggested that the difference in FCR severity between the tall and dwarf isolines is likely due to their height difference per se or to some physiological and structural consequences of reduced height. Thus, caution should be taken when considering exploiting any FCR locus located near a height gene.
Abstract:
The 2008 US election has been heralded as the first presidential election of the social media era, but took place at a time when social media were still in a state of comparative infancy; so much so that the most important platform was not Facebook or Twitter, but the purpose-built campaign site my.barackobama.com, which became the central vehicle for the most successful electoral fundraising campaign in American history. By 2012, the social media landscape had changed: Facebook and, to a somewhat lesser extent, Twitter are now well-established as the leading social media platforms in the United States, and were used extensively by the campaign organisations of both candidates. As third-party spaces controlled by independent commercial entities, however, their use necessarily differs from that of home-grown, party-controlled sites: from the point of view of the platform itself, a @BarackObama or @MittRomney is technically no different from any other account, except for the very high follower count and an exceptional volume of @mentions. In spite of the significant social media experience which Democrat and Republican campaign strategists had already accumulated during the 2008 campaign, therefore, the translation of such experience to the use of Facebook and Twitter in their 2012 incarnations still required a substantial amount of new work, experimentation, and evaluation. This chapter examines the Twitter strategies of the leading accounts operated by both campaign headquarters: the ‘personal’ candidate accounts @BarackObama and @MittRomney as well as @JoeBiden and @PaulRyanVP, and the campaign accounts @Obama2012 and @TeamRomney. Drawing on datasets which capture all tweets from and at these accounts during the final months of the campaign (from early September 2012 to the immediate aftermath of the election night), we reconstruct the campaigns’ approaches to using Twitter for electioneering from the quantitative and qualitative patterns of their activities, and explore the resonance which these accounts have found with the wider Twitter userbase. A particular focus of our investigation in this context will be on the tweeting styles of these accounts: the mixture of original messages, @replies, and retweets, and the level and nature of engagement with everyday Twitter followers. We will examine whether the accounts chose to respond (by @replying) to the messages of support or criticism which were directed at them, whether they retweeted any such messages (and whether there was any preferential retweeting of influential or – alternatively – demonstratively ordinary users), and/or whether they were used mainly to broadcast and disseminate prepared campaign messages. Our analysis will highlight any significant differences between the accounts we examine, trace changes in style over the course of the final campaign months, and correlate such stylistic differences with the respective electoral positioning of the candidates. Further, we examine the use of these accounts during moments of heightened attention (such as the presidential and vice-presidential debates, or in the context of controversies such as that caused by the publication of the Romney “47%” video; additional case studies may emerge over the remainder of the campaign) to explore how they were used to present or defend key talking points, and exploit or avert damage from campaign gaffes. 
A complementary analysis of the messages directed at the campaign accounts (in the form of @replies or retweets) will also provide further evidence for the extent to which these talking points were picked up and disseminated by the wider Twitter population. Finally, we also explore the use of external materials (links to articles, images, videos, and other content on the campaign sites themselves, in the mainstream media, or on other platforms) by the campaign accounts, and the resonance which these materials had with the wider follower base of these accounts. This provides an indication of the integration of Twitter into the overall campaigning process, by highlighting how the platform was used as a means of encouraging the viral spread of campaign propaganda (such as advertising materials) or of directing user attention towards favourable media coverage. By building on comprehensive, large datasets of Twitter activity (as of early October, our combined datasets comprise some 3.8 million tweets) which we process and analyse using custom-designed social media analytics tools, and by using our initial quantitative analysis to guide further qualitative evaluation of Twitter activity around these campaign accounts, we are able to provide an in-depth picture of the use of Twitter in political campaigning during the 2012 US election which will provide detailed new insights into social media use in contemporary elections. This analysis will then also be able to serve as a touchstone for the analysis of social media use in subsequent elections, in the USA as well as in other developed nations where Twitter and other social media platforms are utilised in electioneering.
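As a simple illustration of the kind of tweeting-style breakdown described above (the mixture of original messages, @replies, and retweets per account), the sketch below classifies raw tweet texts and tallies per-account proportions. The prefix-based classification rules and the toy data are assumptions for illustration only; the actual analysis would rely on the datasets' retweet and reply metadata and the custom analytics tools mentioned in the chapter.

```python
from collections import Counter

def classify_tweet(text):
    """Classify a tweet's style: retweet, @reply, or original broadcast.
    Simple prefix rules used here are an assumption; real datasets carry
    explicit retweet / in_reply_to metadata that should be preferred."""
    if text.startswith("RT @"):
        return "retweet"
    if text.startswith("@"):
        return "@reply"
    return "original"

def tweeting_style(tweets_by_account):
    """Per-account share of originals, @replies, and retweets."""
    styles = {}
    for account, tweets in tweets_by_account.items():
        counts = Counter(classify_tweet(t) for t in tweets)
        total = sum(counts.values()) or 1
        styles[account] = {k: counts[k] / total
                           for k in ("original", "@reply", "retweet")}
    return styles

# Hypothetical toy data, not drawn from the actual 2012 datasets.
sample = {"@Obama2012": ["Join us tonight: http://example.org",
                         "RT @BarackObama: Four more years."],
          "@TeamRomney": ["@voter Thanks for your support!",
                          "Watch the debate live tonight."]}
print(tweeting_style(sample))
```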
Abstract:
To detect errors in decision tables one needs to decide whether a given set of constraints is feasible or not. This paper describes an algorithm to do so when the constraints are linear in variables that take only integer values. Decision tables with such constraints occur frequently in business data processing and in nonnumeric applications. The aim of the algorithm is to exploit the abundance of very simple constraints that occur in typical decision table contexts. Essentially, the algorithm is a backtrack procedure in which the solution space is pruned using the set of simple constraints. After some simplifications, the simple constraints are captured in an acyclic directed graph with weighted edges. Further, only those partial vectors are considered for extension which can be extended to assignments that at least satisfy the simple constraints. This is how pruning of the solution space is achieved. For every partial assignment considered, the graph representation of the simple constraints provides a lower bound for each variable which is not yet assigned a value. These lower bounds play a vital role in the algorithm and they are obtained efficiently by updating older lower bounds. Our present algorithm also incorporates an idea by which it can be checked whether or not an (m - 2)-ary vector can be extended to a solution vector of m components, thereby reducing backtracking by one component.
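The pruning idea can be sketched as follows: simple constraints tighten each unassigned variable's lower bound before the backtrack procedure enumerates its values, while general constraints are checked only on complete assignments. The toy Python sketch below is not the paper's algorithm (it omits the graph representation and the incremental bound updates); representing the simple constraints as difference constraints, and the example problem itself, are assumptions for illustration.

```python
def feasible(n_vars, bounds, simple, general):
    """Backtracking feasibility check for integer constraints, pruning with
    'simple' constraints before testing general ones (illustrative toy only).

    bounds:  list of (lo, hi) integer ranges, one per variable.
    simple:  difference constraints (i, j, c) meaning  x_i >= x_j + c,
             used to tighten lower bounds as values are assigned.
    general: arbitrary predicates over a complete assignment (tuple -> bool).
    """
    def lower_bound(i, assignment):
        lb = bounds[i][0]
        for a, b, c in simple:
            if a == i and b in assignment:
                lb = max(lb, assignment[b] + c)   # tighten from assigned vars
        return lb

    def backtrack(assignment):
        if len(assignment) == n_vars:
            vec = tuple(assignment[i] for i in range(n_vars))
            return vec if all(g(vec) for g in general) else None
        i = len(assignment)
        # Enumerate only values at or above the pruned lower bound.
        for v in range(lower_bound(i, assignment), bounds[i][1] + 1):
            assignment[i] = v
            result = backtrack(assignment)
            if result is not None:
                return result
            del assignment[i]
        return None

    return backtrack({})

# Hypothetical example: x0, x1 in [0,3], x1 >= x0 + 2, and x0 + x1 == 4.
print(feasible(2, [(0, 3), (0, 3)], simple=[(1, 0, 2)],
               general=[lambda v: v[0] + v[1] == 4]))  # -> (1, 3)
```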
Abstract:
The north Australian beef industry is complex and dynamic. It is strategically positioned to access new and existing export markets. To prosper in a global economy, it will require strong processing and live cattle sectors, continued rationalisation of infrastructure, uptake of appropriate technology, and the synergy obtained when industry sectors unite and cooperate to maintain market advantage. Strategies to address food safety, animal welfare, the environment and other consumer concerns must be delivered. Strategic alliances with quality assurance systems will develop. These alliances will be based on economies of scale and on vertical cooperation, rather than vertical integration. Industry sectors will need to increase their contribution to Research, Development and Extension. These contributions need to be global in outlook. Industry sectors should also be aware that change (positive or negative) in one sector will impact on other sectors. Feedback along the food chain is essential to maximise productivity and market share.
Abstract:
A strong world demand and current firm prices for goat meat provide opportunities for some wool/beef production enterprises in western Queensland to increase farm viability through diversification. In particular, there is rising interest in the use of Boer goats to improve the productive performance of the Australian feral goat. Pastoral graziers have noted the high prolificacy of feral goats grazed in semi-arid areas, but there is no information on the breeding ability of feral does mated to Boer bucks. Animal production for a consuming world : proceedings of 9th Congress of the Asian-Australasian Association of Animal Production Societies [AAAP] and 23rd Biennial Conference of the Australian Society of Animal Production [ASAP] and 17th Annual Symposium of the University of Sydney, Dairy Research Foundation, [DRF]. 2-7 July 2000, Sydney, Australia.
Abstract:
The rise in demand by domestic and export markets for a high quality uniform beef carcase has led to more steers being finished in feedlots. However, the profitability of feedlotting is small and economic survival hinges on efficiency (Ryan 1990). Lack of published data prevents conclusions from being drawn about the level of efficiency of Australian feedlotting operations, but the few studies reported show considerable variation in liveweight performance and carcase characteristics such as fat depth and marbling (Baud et al.). 21st Biennial Conference. 8-12 July, University of Queensland, Brisbane.
Abstract:
Beef producers have expressed concern that cattle moved from one location to another do not always perform as well as comparable local cattle. Research station records and field trial data were examined to determine the effect of relocation on growth rate using data sets for animals of different age and liveweight at relocation and of different genotypes. 21st Biennial Conference. 8-12 July University of Queensland, Brisbane.
Abstract:
Like an Icebreaker: The Finnish Seamen's Union as collective bargaining maverick and champion of sailors' social safety 1944-1980. The Finnish Seamen's Union (FSU), which was established on a national basis in 1920, was one of the first Finnish trade unions to succeed in collective bargaining. In the early 1930s, the gains made in the late 1920s were lost, due to politically based internal rivalries, the Great Depression, and a disastrous strike. Unexpectedly the FSU survived and went on promoting the well-being of its members even during World War II. After the war the FSU was in an exceptionally favorable position to exploit the introduction of coordinated capitalism, which was based on social partnership between unions, employers and government. Torpedoes, mines and confiscations had caused severe losses to the Finnish merchant marine. Both ship-owners and government alike understood the crucial importance of using the remaining national shipping capacity effectively. The FSU could no longer be crushed, and so, in 1945, the union was allowed to turn all ocean-going Finnish ships into closed shops. The FSU also had another source of power. After the sailors of the Finnish icebreaker fleet also joined its ranks, the FSU could, in effect, block Finnish foreign trade in wintertime. From the late 1940s to the 1960s the union started and won numerous icebreaker strikes. Finnish seamen were thus granted special pension rights, reductions on income taxes and import duties, and other social privileges. The FSU could neither be controlled by union federations nor intimidated by employers or governments. The successful union and its tactically clever chairperson, Niilo Välläri, were continuously but erroneously accused of syndicalism. Välläri did not aim for socialism but wanted the Finnish seamen to get all the social benefits that capitalism could possibly offer. Välläri's policy was successfully followed by the FSU until the late 1980s when Finnish ship-owners were allowed to flag their vessels outside the national registry. Since then the FSU has been on the defensive and has yielded to pay cuts. The FSU members have not lost their social benefits, but they are under constant fear of losing their jobs to cheap foreign labor.