Soon after graphene sheets began to be produced routinely on a laboratory scale, researchers started making the hydrogenated version, graphane (with a hydrogen atom on each carbon). This is one of many approaches aimed at harnessing graphene’s exceptional conductivity, and graphane is also being explored for hydrogen storage and other potential applications (more info in this 2009 ScienceDaily article From Graphene to Graphane…). Despite the departure from planarity that naturally accompanies the shift from sp2 to sp3 hybridization, graphane is considered a 2D material.
Brought to our attention by Christine Peterson, a new addition to the family of 2D honeycomb-lattice materials has arrived: germanane. Structurally analogous to graphane, germanane comprises hydrogenated, hexagonally arranged germanium atoms in single (or few) layer sheets. Like silicane and silicene (see companion post Silicene: silicon’s answer to graphene), germanane should have a band gap, possibly allowing it to be implemented sooner than graphene.
While bulk germanium was used, with mixed success, to make the first transistors, its low resistivity at higher temperatures and high production costs limited its practical use, and silicon soon became the semiconductor of choice. But going nanoscale may be a game changer, if the right combination of performance, cost, and ease of manufacture can be found.
For most 2D materials, getting stable sheets is the first hurdle. In an important step toward production of germanane sheets, a research team led by Joshua Goldberger at Ohio State University has devised a method for chemical synthesis of germanane crystals, which can be exfoliated down to single layer sheets. The work is published in ACS Nano (abstract) and is described in a Gizmag article:
“…we’ve been searching for unique forms of silicon and germanium with advantageous properties, to get the benefits of a new material but with less cost and using existing technology.”
The resulting material has been shown to conduct electrons ten times faster than silicon (and five times faster than conventional germanium), meaning that it could carry a proportionately higher load if used in microchips. It’s also more chemically stable than silicon, not oxidizing in the presence of air or water, plus it’s much better at absorbing and emitting light – this means that it could prove particularly useful in solar cells.
Ordinarily, germanium takes the form of multilayered crystals. The single-atom-thick layers are bonded to one another, and each one is quite unstable on its own. The OSU researchers created their own germanium crystals, in which calcium atoms were inserted between the layers. That calcium was then dissolved using water, leaving empty chemical bonds in its absence. Those bonds were subsequently plugged with hydrogen, resulting in much more stable layers that could be peeled from the crystal while remaining intact.
A downside of germanium-based technologies may still be cost – germanium is far less abundant than silicon and carbon. From a Productive Nanosystems point of view, graphene technology may prevail in the long run due not only to performance metrics but to the abundance of carbon as well. For nearer-term, intermediate technologies, many hats remain in the ring. Although Group 14 elements are highlighted here, serious research into a broad range of 2D (especially honeycomb-structured) materials has been under way for a while and is growing fast.
-Posted by Stephanie C
I have a post up on the blog of the Sheffield Political Economy Research Institute – The failures of supply side innovation policy – discussing the connection between recent innovation policy in the UK and our current crisis of economic growth. Rather than cross-posting it here, I tell the same story in four graphs.
1. The UK’s current growth crisis follows a sustained period of national disinvestment in R&D
Red, left axis. The percentage deviation of real GDP per person from the 1948-1979 trend line, corresponding to 2.57% annual growth. Sources: solid line, 2012 National Accounts. Dotted line, March 2013 estimates from the Office for Budget Responsibility.
Blue, right axis. Total R&D intensity, all sectors, as percentage of GDP. Data: Eurostat.
No single measure can capture the overall performance of an economy, but the long term trajectory of real GDP per person in the UK tells an interesting story, with a clear discontinuity in 1979. From 1948 to 1979 this measure grew remarkably steadily; a best fit corresponds to 2.57% annual growth. Since 1979 we have seen two deep recessions, each followed by a period of faster growth that, in each case, didn’t quite make up the lost ground to the pre-1979 trend line, and proved unsustainable. The third recession, following the 2008 financial crisis, has been both deeper and more long lasting than previous recessions. Meanwhile, since 1980, there has been a substantial fall in the overall research intensity of the economy, measured by the fraction of GDP spent on research and development. Without claiming simple causality, my SPERI post looks at the relationship between our current growth crisis and this disinvestment in R&D.
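To make the red curve in the first graph concrete, here is a minimal sketch of the underlying arithmetic: percentage deviation of each year's value from an exponential trend growing at 2.57% per year. The function name and the illustrative inputs are my own; the actual series comes from the National Accounts data cited above.

```python
def pct_deviation_from_trend(gdp_per_person, years, base_year=1948,
                             base_value=100.0, growth=0.0257):
    """Percentage deviation of real GDP per person from an exponential
    trend line growing at `growth` per year from `base_value` in `base_year`."""
    deviations = []
    for year, value in zip(years, gdp_per_person):
        trend = base_value * (1 + growth) ** (year - base_year)
        deviations.append(100.0 * (value - trend) / trend)
    return deviations

# A value exactly on the trend gives 0% deviation; a value 10% above
# the base-year trend gives +10%.
```

A point on the fitted line thus reads as zero on the left axis, and the post-1979 shortfall appears as an increasingly negative deviation.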
2. Since 1980, the UK has moved from being one of the most R&D intensive economies in the developed world, to one of the least.
Total R&D expenditure as % of GDP. Data: Eurostat
This decline in R&D intensity in the UK has happened at a time when other countries have been increasing investment in research. These increases have been particularly marked in fast growing Asian countries like South Korea and China.
3. The overall decline in the UK’s R&D intensity has been driven primarily by a long-term decline in private sector R&D
Value of R&D performed by sector as % of GDP. Data: Eurostat
Most R&D is performed in the private sector, with other substantial contributions from government laboratories and universities. R&D in the combined government and higher education sectors in the UK dropped substantially in the 1980s and then stabilised, but the largest decline has taken place in private sector R&D. One might be tempted to argue that this reflects changes in the sectoral balance of the UK economy, but a more detailed analysis by Alan Hughes and Andrea Mina shows that, even after adjusting for structural differences between countries, the business enterprise component of R&D remains low by international standards.
4. The relative value of R&D performed in the business sector in the UK has been falling since the mid-1980s and has slipped far behind key rivals
Value of R&D performed by the business sector as % of GDP. Source: Eurostat
The contrast between the declining R&D intensity of the UK’s private sector and its growth in competitor nations is marked. My SPERI blogpost The failures of supply side innovation policy explores what might lie behind this.
Recently we pointed to a Forbes interview with Eric Drexler, in anticipation of his forthcoming book Radical Abundance.
The book has shipped, and Drexler’s tour schedule now includes a few stops on the coasts of the U.S.:
New York: May 6th
Los Angeles: May 8th & 9th
Seattle: May 9th
If you’ve been imagining an updated version of Nanosystems, you’re in for a surprise. The book invites us to take a remarkable journey through the personal and educational experiences that led Drexler to contemplate the global future and to develop the foundations and concepts of atomically precise manufacturing, through a surprisingly accessible tour of the nanoscale world, and through a deeply thoughtful discussion of not only crucial realities of revolutionary new technology, but of crucial uncertainties as well.
-Posted by Stephanie C
Next week – on the 26th March – I’m participating in a discussion event sponsored by the think tank Policy Exchange at NESTA, in London. Also on the panel is K. Eric Drexler, the originator of the idea of nanotechnology in its most expansive form, as an emerging technology which, when fully developed, will have truly transformational effects. In this view, it will allow us to make pretty much any material, device or artefact for little or no cost, we will be able to extend human lifespans almost indefinitely using cell-by-cell surgery, and we will create computers so powerful that they will host artificial intelligences greatly superior to those of humans. Drexler has a new book coming out in May – Radical Abundance: How a Revolution in Nanotechnology Will Change Civilization. I think this view overstates the potential of the technology, and (it shocks me to realise) I have been arguing this in some technical detail for nearly ten years. Although I have met Drexler, and corresponded with him, this is the first time I will have shared a platform with him. To mark the occasion I have gone through my blog’s archives to make this anthology of my writings about Drexler’s vision of nanotechnology and my arguments with some of its adherents (who should not, of course, automatically be assumed to speak for Drexler himself).
To begin with, one should understand Drexler’s position by reading his own words. His first publication on the subject was a short paper in the journal Proceedings of the National Academy of Sciences, in 1981, Molecular engineering: An approach to the development of general capabilities for molecular manipulation. This paper demonstrated the possibility of artificial molecular machines by analogy with the protein-based molecular machines of biology, and argued that protein engineering is the natural route by which a second generation of artificial molecular machines, more powerful than their natural precursors, could be made.
Drexler’s next publication was perhaps his most influential; this was his 1986 popular science book Engines of Creation: the coming era of nanotechnology. This explored the consequences of the molecular assemblers that he argued could be made from the second generation molecular machines, able to make virtually anything consistent with the basic laws of physics, atom-by-atom, with atomic precision. One consequence would be cell repair machines able to halt and reverse the effects of ageing and disease, leading to indefinite human lifespans.
Engines of Creation was not a technical book, so it did not include much more in the way of detail of how these universal assemblers would be made. This detail was provided in Drexler’s 1992 book, Nanosystems: Molecular Machinery, Manufacturing, and Computation. It’s difficult to imagine a book more different to Engines of Creation than Nanosystems. It’s almost gratuitously dry and technical, a textbook for a yet-to-be-developed technology, based on the principle that “molecular manufacturing applies the principles of mechanical engineering to chemistry”.
My own thinking on nanotechnology – summarised initially in my 2004 book Soft machines: nanotechnology and life – was at the same time inspired by Drexler’s work and a reaction against it. Like Drexler, I was fascinated by the example that cell biology provided of intricate, molecular scale machines. But I was also struck by the insights that the new single molecule biophysics was providing (using the new tools of nanoscience) – insights that stressed that the principles used by the molecular machines were not the principles of mechanical engineering, but a quite alien set of design principles optimised for the peculiar physics of the warm, wet, nanoscale world – the principles of soft nanotechnology.
I dealt with the question of what nanotechnology should learn from biology in this blog post – What biology does and doesn’t prove about nanotechnology – which was a riposte to some heated discussions on the blogs of the time. I came back to this question with a more reflective discussion of the same themes in my column in Nature Nanotechnology – Right and wrong lessons from biology.
Moving on to my criticisms of the vision of nanotechnology presented in Nanosystems, the context can be found in this piece: Molecular nanotechnology, Drexler and Nanosystems – where I stand. In Making and doing I argued that matter is not digital, and responded to quite extensive discussion of that post in Bits and atoms. I outlined some specific technical issues in Six challenges for molecular nanotechnology.
My most widely circulated critique was published in the US magazine IEEE Spectrum – Rupturing the nanotech rapture. By this time Drexler’s vision of radical nanotechnology had become a central part of the belief package of transhumanists and proponents of the secular eschatology of the technological singularity, as most notably and influentially popularised by Ray Kurzweil in his book The Singularity is Near. My article was part of a special issue exploring, mostly from a critical perspective, this idea (misguided as it is, in my opinion). Nanobots, nanomedicine, Kurzweil, Freitas and Merkle was a response to criticisms of the IEEE Spectrum article.
Lately, Drexler has been writing on his own blog Metamodern. From there it is clear that we agree about some things – the importance of the “soft” route to radical nanotechnology in the near future, the achievements and potential of DNA nanotechnology, for example – and remain in disagreement about others. I look forward to discussing these issues with him on Tuesday.
“As liberals, we tell a one-sided story about the complex causes of America's political paralysis. We blame the conservative movement, Fox News, libertarian billionaires, and the "do nothing" Republicans in Congress. Much of this story is true. … But there is plenty of blame to go around. Over the past decade, liberals have become more like conservatives, adopting a win-at-all-costs commitment to policy debates and elections. …
The strategy has been dangerously misguided. Extreme polarization has served conservatives very well, driving moderate leaders from politics, promoting feelings of cynicism, inefficacy, and distrust among the public, and forcing Democrats to spend huge sums of money on canvassing, texting, social media, and celebrity appeals in order to turn out moderates, young people, and minorities on election day. Less clear is how America's escalating ideological arms race will conceivably serve liberals. Instead of going to war against the Right, liberals will better serve their social and political objectives by waging a war on polarization.”
[Excerpt from: Scheufele, D. A., & Nisbet, M. C. (forthcoming). Online news and the demise of political debate. In C. T. Salmon (Ed.), Communication Yearbook (Vol. 36). Newbury Park, CA: Sage. | PDF]
[W]ith more Americans saying that they get their news on a daily basis from online sources than from local newspapers (Purcell, Rainie, Mitchell, Rosenstiel, & Olmstead, 2010), the presentation, selection, and availability of news is no longer chiefly controlled by journalists. Nor is the primary goal to attract diverse audiences to a hierarchically organized portfolio of coverage defined by an entire broadcast or newspaper edition. Instead, the objective is to lure a combination of habitual and incidental news consumers to specific online stories by way of search engines, aggregators, and social networks. This strategy allows news organizations to maximize page views while also tracking and selling personal information about consumers via third party partners such as Facebook. At least three related trends enable this goal.
a. Opinionated news and niche audiences: The proliferation of niche cable channels such as MSNBC and Fox News and highly specialized online information environments such as Huffington Post or The Daily Caller have led to an increasing fractionalization of news choices and audiences. Driven by commercial concerns, much of this fractionalization has occurred along partisan fault lines. Or as Rachel Maddow put it: “Opinion-driven media makes the money that politically neutral media loses.” (Maddow, 2010, p. 22). And as more recent research shows, these fragmented news environments have the potential to produce more apathy among some segments of the electorate and more partisan polarization across the population overall (Prior, 2007).
b. Algorithms as editors: The increasing shift toward online presentation of news, even among traditional news outlets, has also provided media organizations with new real-time metrics of audience preference and the ability to make decisions about news selection and placement based on these metrics. This use of “algorithms as editors” (Peters, 2010) is not without pitfalls. Increasing the influence that reader preferences have on story selection and placement also increases the likelihood of a spiral of mutual reinforcement. In other words, stories that readers selectively attend to will be placed more prominently on news(paper) web sites, which – in turn – increases the odds of readers finding them in the first place. This makes it easy for readers to select content based on popularity, interest, or political identity, opting out of the professional hierarchy of front-page headlines and lead stories that might appear in a printed newspaper or broadcast.
c. Self-reinforcing search and tagging spirals: This notion of reinforcing spirals is exacerbated in online search environments, where search engine rankings and search suggestions can heavily influence the overall information infrastructure. The process depends not only on the algorithms used by search engines but also on the tagging and optimization strategies pursued by news content providers, aggregators, bloggers, and interest groups (Hindman, 2009). Examining the presentation of scientific information online, Ladwig and colleagues (Ladwig, Anderson, Brossard, Scheufele, & Shaw, 2010), for example, found that the “suggest” function in Google’s search results often did not correspond to the online information environment that was available to audiences (based on systematic analyses of the complete population of web sites and blogs). As a result, the guidance provided by Google search suggestions is likely to disproportionately drive traffic, regardless of the content available, and create a self-reinforcing spiral that reduces the complexity and diversity of the information that citizens encounter online (Ladwig et al., 2010).
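The reinforcing spiral described in (b) and (c) is a rich-get-richer feedback loop, and its dynamics can be illustrated with a toy simulation (my own illustration, not any news site's actual ranking algorithm): each click makes a story more prominent, which makes the next click on it more likely, so stories that start identical end up with very unequal attention.

```python
import random

def popularity_spiral(n_stories=5, n_readers=10000, seed=0):
    """Toy model of the 'algorithms as editors' feedback loop: each click
    makes a story more prominent, which makes the next click on it more
    likely. All stories start equal; chance differences get amplified."""
    rng = random.Random(seed)
    clicks = [1] * n_stories  # one-click head start each, to seed the weights
    for _ in range(n_readers):
        total = sum(clicks)
        # Probability of clicking a story is proportional to its click count,
        # standing in for its prominence on the page.
        r = rng.uniform(0, total)
        cum = 0
        for i, c in enumerate(clicks):
            cum += c
            if r <= cum:
                clicks[i] += 1
                break
    return sorted(clicks, reverse=True)
```

Running this repeatedly with different seeds shows that which story "wins" is arbitrary, but a highly skewed distribution of attention is almost guaranteed – the structural point the excerpt is making about popularity-driven placement.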
... Many of these more media-centric filters work in tandem with individual-level behaviors and choices. Prior’s (2007) hypotheses about the polarizing effects of increasing channel diversity, for instance, are based heavily on the assumption that individuals actively make choices about the content (news vs. entertainment) that they attend to. But the social texture that is developing in web 2.0 information environments produces a communication landscape with at least two new modes of audience-centric selectivity that are likely to influence news choices.
a. Automated selectivity: In online environments, news portals and aggregator sites allow for highly effective individual pre-selection of the information that reaches us. iGoogle, myYahoo and other news aggregators allow audiences to selectively receive and attend to news items, based on a set of fine-grained filters that can include medium, outlet, content, author and a host of other pre-defined criteria. In contrast, visitors to the landing page for online newspapers may be able to skim or skip stories that they disagree with or find boring, but they will have a hard time making a selective choice without at least briefly glancing at the lead or headline. Portals and other news aggregators – in contrast – will make sure that some stories never even reach our computer screen. Smart phones, tablets and other portable devices make it easier to skim and select when consuming news, creating further incentives for news organizations to cater to this selectivity in their design of mobile applications.
b. Networks as filters: This individual-level set of filters, however, is being complemented by maybe even more effective social filters. Based on a series of experiments about online information use patterns in various social settings, Messing and colleagues (2011), for example, predict that “social information, and especially personal recommendations, will emerge as the most important explanatory factor shaping both the media environment to which an individual is exposed, and the content that the individual chooses to view” (p. 29).
And the notion of networks as selective filters may be more prevalent than we think. Seventy-five percent of online news consumers now say they get news forwarded through email or posts on social networking sites (Purcell et al., 2010), i.e., information that is passed along and preselected by people who are strongly likely to share their worldviews and preferences. And much of this information is not presented in an isolated news environment, similar to traditional newspapers or television broadcasts, but instead is socially contextualized almost immediately by a host of reader comments, Facebook “like” buttons, and indicators of how often a story has been re-tweeted.
The potential effects of such social-level contextualization on individual news selection are less clear, and two competing hypotheses can be put forth ... The first hypothesis suggests that we may be moving toward a society where we are less and less exposed to (and less and less used to) disagreement and viewpoints that are different from our own. Highly like-minded and homophilic networks, in other words, may exacerbate the effects of individual-level selectivity and produce an even more fine-grained filter for incoming information. The result would be a very pronounced spiral of self-reinforcing attitude polarization ... Journalists and other professional groups such as scientists are likely to be part of this attitude polarization, since these groups tend to be disproportionately like-minded in their political outlook, are heavier users of online news sources and social media, and face greater demands on their time in managing and using information (Besley & Nisbet, forthcoming; Donsbach, 2004).
A number of recent studies, however, provide some preliminary evidence for a more optimistic hypothesis. It is based on the assumption that friendship networks may often be more politically diverse than the individuals in these networks perceive them to be. In other words, “friends disagree more than they think they do” (Goel, Mason, & Watts, 2010, p. 611). This also means that socially homophilic networks may be characterized by more political diversity than we often assume. Messing et al (2011), in fact, infer that socially-networked information environments can “create at least marginally more cross-cutting exposure to political information” (p. 30) than situations where individuals select news items without additional social cues.
It remains to be seen if these findings are replicated in future work and socially-networked information environments can in fact increase exposure to non-likeminded views. If they do, they could produce some of the same beneficial outcomes that we outlined in our work on heterogeneous face-to-face networks (Scheufele et al., 2006; Scheufele et al., 2004) ... It is clear that communication researchers have only begun to fill in parts of a large grid of research questions which will have to be answered in the near future. … Whatever the answers may be that we as a discipline provide, they will have important implications for how we conceptualize and measure communication effects, effectively design online media, educate professionals and the public, and regulate media content and platforms. But more importantly, they will raise normative questions about the future of a media system that – driven by media-centric or audience-centric shifts – no longer provides a commonly shared and professionally defined hierarchy of stories and ideas.
Abramowitz, A. (2009). The disappearing center. New Haven, CT: Yale University Press.
Bennett, W. L., & Iyengar, S. (2008). A new era of minimal effects? The changing foundations of political communication. Journal of Communication, 58(4), 707-731. doi: 10.1111/j.1460-2466.2008.00410.x
Besley, J., & Nisbet, M. (forthcoming). How scientists view the public, the media and the political process. Public Understanding of Science. First published online August 30, 2011 as doi:10.1177/0963662511418743.
Bishop, B., & Cushing, R. (2008). The big sort: Why the clustering of like-minded America is tearing us apart. New York: Houghton Mifflin.
Brossard, D., Scheufele, D. A., Kim, E., & Lewenstein, B. V. (2009). Religiosity as a perceptual filter: Examining processes of opinion formation about nanotechnology. Public Understanding of Science, 18(5), 546–558. doi: 10.1177/0963662507087304
Donsbach, W. (1991). Medienwirkung trotz Selektion: Einflussfaktoren auf die Zuwendung zu Zeitungsinhalten [Media effects despite selection: Influences on attention to newspaper content]. Köln, Germany: Böhlau.
Donsbach, W. (2004). Psychology of news decisions. Journalism, 5(2), 131.
Downie, L. & Schudson, M. (2009, Oct. 19). The reconstruction of American journalism. Columbia Journalism Review. Retrieved November 29, 2011, from http://www.cjr.org/reconstruction/the_reconstruction_of_american.php?page=all.
Galtung, J., & Ruge, M. H. (1965). The structure of foreign news. Journal of Peace Research, 2(1), 64-91.
Gans, H. (1979). Deciding what’s news. New York: Pantheon Books.
Goel, S., Mason, W., & Watts, D. J. (2010). Real and perceived attitude agreement in social networks. Journal of Personality and Social Psychology, 99(4), 611-621. doi: 10.1037/a0020697
Hindman, M. (2009). The myth of digital democracy. Princeton, NJ: Princeton University Press.
Ho, S. S., Brossard, D., & Scheufele, D. A. (2008). Effects of value predispositions, mass media use, and knowledge on public attitudes toward embryonic stem cell research. International Journal of Public Opinion Research, 20(2), 171-192.
Kim, E., Scheufele, D. A., & Han, J. Y. (2011). Structure or predisposition? Exploring the interaction effect of discussion orientation and discussion heterogeneity on political participation. Mass Communication & Society, 14(4), 502-526. doi: 10.1080/15205436.2010.51346
Kunda, Z. (1990). The case for motivated reasoning. Psychological Bulletin, 108(3), 480-498. doi: 10.1037/0033-2909.108.3.480
Ladwig, P., Anderson, A. A., Brossard, D., Scheufele, D. A., & Shaw, B. (2010). Narrowing the nano discourse? Materials Today, 13(5), 52-54. doi: 10.1016/s1369-7021(10)70084-5
Maddow, R. (2010). Theodore H. White Lecture on Press and Politics [transcript]. Joan Shorenstein Center on the Press, Politics and Public Policy, Harvard University. Retrieved November 21, 2011, from http://www.hks.harvard.edu/presspol/prizes_lectures/th_white_lecture/transcripts/th_white_2010_maddow.pdf
McLeod, J. M., Scheufele, D. A., & Moy, P. (1999). Community, communication, and participation: The role of mass media and interpersonal discussion in local political participation. Political Communication, 16(3), 315-336.
McPherson, M., Smith-Lovin, L., & Cook, J. M. (2001). Birds of a feather: Homophily in social networks. Annual Review of Sociology, 27(1), 415-444. doi: 10.1146/annurev.soc.27.1.415
Messing, S., Westwood, S. J., & Lelkes, Y. (2011). Online media effects: Social, not political, reinforcement. Unpublished manuscript. Stanford University. Palo Alto, CA. Retrieved from http://www.stanford.edu/~messing/PopRecSrcNews2.pdf
Mutz, D. C. (2002a). The consequences of cross-cutting networks for political participation. American Journal of Political Science, 46(4), 838-855.
Mutz, D. C. (2002b). Cross-cutting social networks: Testing democratic theory in practice. American Political Science Review, 96(1), 111-126.
Nisbet, M. C. (2005). The competition for worldviews: Values, information, and public support for stem cell research. International Journal of Public Opinion Research, 17(1), 90-112.
Nisbet, M. C. & Scheufele, D. A. (2004). Political talk as a catalyst for online citizenship. Journalism & Mass Communication Quarterly, 81(4), 877-896.
Peters, J. W. (2010, July 5). At Yahoo, using searches to steer news coverage. The New York Times, p. B1. Retrieved from http://www.nytimes.com/2010/07/05/business/media/05yahoo.html
Prior, M. (2007). Post-broadcast democracy: How media choice increases inequality in political involvement and polarizes elections. New York: Cambridge University Press.
Purcell, K., Rainie, L., Mitchell, A., Rosenstiel, T., & Olmstead, K. (2010). Understanding the participatory news consumer. Pew Internet & American Life Project. Retrieved August 10, 2010, from http://www.pewinternet.org/Reports/2010/Online-News.aspx.
Scheufele, D. A. (2011). Modern citizenship or policy dead end? Evaluating the need for public participation in science policy making, and why public meetings may not be the answer. Paper #R-34, Joan Shorenstein Center on the Press, Politics and Public Policy Research Paper Series. Harvard University. Cambridge, MA. Retrieved from http://www.hks.harvard.edu/presspol/publications/papers/research_papers/r34_scheufele.pdf
Scheufele, D. A., Hardy, B. W., Brossard, D., Waismel-Manor, I. S., & Nisbet, E. (2006). Democracy based on difference: Examining the links between structural heterogeneity, heterogeneity of discussion networks, and democratic citizenship. Journal of Communication, 56(4), 728-753.
Scheufele, D. A., & Nisbet, M. C. (2002). Being a citizen online - New opportunities and dead ends. Harvard International Journal of Press-Politics, 7(3), 55-75.
Scheufele, D. A., Nisbet, M. C., & Brossard, D. (2003). Pathways to participation? Religion, communication contexts, and mass media. International Journal of Public Opinion Research, 15(3), 300-324.
Scheufele, D. A., Nisbet, M. C., Brossard, D., & Nisbet, E. C. (2004). Social structure and citizenship: Examining the impacts of social setting, network heterogeneity, and informational variables on political participation. Political Communication, 21(3), 315-338.
Tuchman, G. (1978). Making news: A study in the construction of reality. New York, NY: The Free Press.
Vallone, R. P., Ross, L., & Lepper, M. R. (1985). The hostile media phenomenon: Biased perception and perceptions of media bias in coverage of the Beirut massacre. Journal of Personality and Social Psychology, 49(3), 577-585.
White, D. M. (1950). The 'gatekeeper': A case study in the selection of news. Journalism Quarterly, 27(3), 383-390.
Molecules can be delivered through a tiny channel templated by one strand of DNA.
The developers are using this to deliver precise amounts of chemicals through the membrane of individual cells. This is highly cool, with all sorts of research implications. And eventually, perhaps therapeutic implications - they're talking about scaling it up to process 100,000 cells at a time.
So I got to wondering: If someone loaded up these reservoirs with two kinds of molecules, that would stick to each other but not to themselves, could this be used as an ink-jet printer at the nanoscale?
For starters, use one kind of molecule that will stick to a surface. Squirt it on and see if it works. Then scan the tip while you squirt.
Once you start using multiple kinds of molecules, you can perhaps build 3D structures. And with a patterned surface, it might be possible to get atomic precision.
With a million addressable reservoirs, and 10 ms per 1-nm voxel, it would be possible to build the volume of a human cell in a few hours.
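The "few hours" estimate checks out as back-of-the-envelope arithmetic. Here is a sketch of the calculation, assuming a (10 µm)³ cell, 1 nm³ voxels, 10 ms per voxel, and a million reservoirs working in parallel (the cell size is my assumed figure; the other numbers come from the estimate above):

```python
def build_time_hours(cell_side_um=10.0, voxel_nm=1.0,
                     seconds_per_voxel=0.010, reservoirs=1_000_000):
    """Time to deposit a cube of side `cell_side_um` micrometres, one
    voxel at a time, split evenly across `reservoirs` parallel channels."""
    side_nm = cell_side_um * 1000.0
    voxels = (side_nm / voxel_nm) ** 3          # 1e12 voxels for a 10 µm cube
    voxels_per_reservoir = voxels / reservoirs  # 1e6 each, in parallel
    return voxels_per_reservoir * seconds_per_voxel / 3600.0

# build_time_hours() → about 2.8 hours, consistent with "a few hours"
```

Note the cubic scaling: doubling the cell's linear size multiplies the build time by eight, so the parallelism of the reservoir array is doing essentially all the work here.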
Hat tip to Next Big Future.
"Recent scientific breakthroughs, such as nanotechnology, are changing the world as we know it. Gold nanoshells, for both imaging and targeting tumors, have the potential to revolutionize cancer treatments. At the same time, nanotechnology has raised concerns about what it means to create and manipulate materials at the molecular scale that do not occur in nature. With over 1,000 nano-based consumer end products entering the market in the past few years, consumer advocates, academics, and policy makers are scrambling to weigh the risks and benefits of this new technology and its applications. How do we form opinions even though most of us lack a comprehensive scientific understanding of emerging scientific fields? How do we use our personal values and moral standards to make sense of scientific facts? And why does all of this matter for the global leadership role of the U.S.--both economically and technologically--in a rapidly changing post 9-11world? Join Dietram Scheufele at the 2011 Wisconsin Science Festival for a crash course on making sense of breakthrough technologies that have the potential to transform virtually all aspects of our everyday lives."