Top Speakers Visiting Microsoft Research
Podcast

By Microsoft

Watch the latest talks by guest speakers visiting Microsoft Research. These popular talks are by remarkable people who change the world every day: to name but a few, past speakers include Malcolm Gladwell, Baratunde Thurston, Susan Cain, Greg Bear, Rukmini Banerji, and William Gibson.
Abstracts: September 13, 2023
Episode 148 | September 13, 2023

Members of the research community at Microsoft work continuously to advance their respective fields. Abstracts brings its audience to the cutting edge with them through short, compelling conversations about new and noteworthy achievements.

In the inaugural episode of the series, Dr. Ava Amini and Dr. Kevin K. Yang, both Senior Researchers with Microsoft Health Futures, join host Dr. Gretchen Huizinga to discuss "Protein generation with evolutionary diffusion: Sequence is all you need." The paper introduces EvoDiff, a suite of models that leverages evolutionary-scale protein data to help design novel proteins more efficiently. Improved protein engineering has the potential to help create new vaccines to prevent disease and new ways to recycle plastics. View the preprint paper.

Transcript

[MUSIC PLAYS]

GRETCHEN HUIZINGA: Welcome to Abstracts, a Microsoft Research Podcast that puts the spotlight on world-class research in brief. I'm Dr. Gretchen Huizinga. In this series, members of the research community at Microsoft give us a quick snapshot—or a podcast abstract!—of their new and noteworthy papers.

[MUSIC FADES]

Today, I'm talking to Dr. Ava Amini and Dr. Kevin Yang, both senior researchers at Microsoft Health Futures. Ava and Kevin are co-authors of a paper titled "Protein generation with evolutionary diffusion: Sequence is all you need," and a preprint of the paper is available now on bioRxiv. Ava and Kevin, thanks for joining us on Abstracts!

KEVIN YANG: Thanks for having us.

AVA AMINI: Thank you so much.

HUIZINGA: So, Kevin, in just a couple sentences, tell us what problem this research addresses and why people should care.

YANG: Yeah, so proteins are this really big, important family of biomolecules, and they're responsible for a lot of cellular processes. For example, hemoglobin carries oxygen in your blood, and insulin regulates your blood sugar levels. And people are interested in generating new proteins to do things that people care about—not necessarily in our bodies, but we're interested in proteins as industrial enzymes, so for catalysis and to make new chemicals, or for therapeutics to make new drugs. And as a step towards this goal, we train a suite of models that we call EvoDiff that learns to generate realistic but novel proteins. So proteins do a lot of useful things in nature, but we can really expand their repertoire to do things that people care about but that nature may not really care about. One really good historical example of this is that most of our modern laundry detergents contain enzymes that break down things that stain your clothes. And these enzymes were based on natural proteins, but natural proteins don't work under high heat. They don't work in detergent. So somebody engineered those to work in the conditions of our washing machine. And they work really well nowadays. Looking forward, we look at some of the challenges facing our world, such as sustainability. So some really big things people are working on now are things like enzymes that can break down plastic and help us recycle plastic or enzymes that can perform photosynthesis more efficiently. And then on the other side, there's therapeutics, and an obvious example there is vaccine design. So designing vaccines quickly and safely for new diseases as they emerge.

HUIZINGA: Ava, how does your approach build on or differ from what's been done previously in this field?
AMINI: Yeah, so we call our approach EvoDiff, and EvoDiff has two components. The first, Evo, refers to evolutionary, and the second, Diff, refers to this notion of diffusion. And the two things that make our approach cool and powerful is the fact that we are leveraging data about proteins that is at an evolutionary scale in terms of the size and the diversity of the datasets about natural proteins that we use. And specifically, we use that data to build a type of AI model that is called a diffusion model. Now, for a little backstory on this, a few years ago, we in the AI community learned that we can do really well in generating brand-new images by taking natural images, adding small amounts of noise to them, corrupting them, and then training an AI model called a diffusion model to remove that noise. And so what we've done in this paper is that we have constructed and trained these diffusion models to do the same kind of process on protein data at evolutionary scale.

HUIZINGA: Kevin, back to you, let's go a little deeper on methodology. How did you do this?

YANG: Yeah, so we really wanted to do this in a protein sequence space. So in protein biology, you have sequences of amino acids. So that's a series of amino acid monomers that form a chain, and then that chain folds oftentimes into a 3D structure. And function is usually mediated by that 3D structure. Unfortunately, it's difficult and can be slow and expensive to obtain experimental structures for all these proteins. And so previous diffusion models of proteins have really focused on generating a three-dimensional structure. And then you can use some other method to find a sequence that will fold to that structure. But what we really wanted to do was generate proteins directly as sequences because it's much easier to get sequences than it is to get structure. So there's many, many more sequences out there than there are structures. And we know that deep learning methods scale really well as you increase the size and quality of the datasets they're trained on. And so we … and by we, it's me and Ava but also Nitya Thakkar, who was an undergraduate intern last summer with me and Ava, and then Sarah Alamdari, our data scientist, who also did a lot of the hands-on programming for this. And then we also got a lot of help from Rianne van den Berg, who is at AI4Science, and then Alex Lu and Nicolò Fusi, also here in New England. So we went and got these large, diverse, evolutionary datasets of protein sequences, and then we used a deep learning framework called PyTorch to train these diffusion models. And then we do a lot of computational experiments to see whether they do the things we want them to do, which Ava, I think, will talk about next.

HUIZINGA: Right. Right. So, Ava, yes, what were your major findings?

AMINI: Yeah, the first question we really asked was, can our method, EvoDiff, generate proteins that are new, that are realistic, and that are diverse, meaning they're not similar to proteins that exist in nature but still are realistic? And so what we found was that indeed, we can do this, and we can do this really well. In fact, the generated proteins from our method show a better coverage of the whole landscape of structural features, functional features, and features in sequence space that exist amongst proteins in nature. And so that was our first really exciting result, that we could generate proteins that were really of high quality using our method.
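To make the corruption-and-denoising idea described above concrete, here is a toy PyTorch sketch of discrete diffusion over amino-acid sequences. It is an illustration only, not the EvoDiff code: the model, alphabet handling, and masking schedule are simplified assumptions.

```python
# Toy sketch of discrete diffusion over protein sequences (NOT the EvoDiff
# code): corrupt residues with a mask token at a random rate, then train a
# model to undo the corruption. Model and hyperparameters are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F

AMINO_ACIDS = 20          # standard amino-acid alphabet
MASK = AMINO_ACIDS        # extra token index used as the "noise" state
VOCAB = AMINO_ACIDS + 1

class Denoiser(nn.Module):
    """A small transformer that predicts the original residue at each position."""
    def __init__(self, dim=128, heads=4, layers=4, max_len=512):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, dim)
        self.pos = nn.Embedding(max_len, dim)
        block = nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True)
        self.encoder = nn.TransformerEncoder(block, layers)
        self.head = nn.Linear(dim, AMINO_ACIDS)

    def forward(self, tokens):
        pos = torch.arange(tokens.size(1), device=tokens.device)
        h = self.embed(tokens) + self.pos(pos)
        return self.head(self.encoder(h))   # (batch, length, AMINO_ACIDS)

def diffusion_loss(model, seqs):
    """seqs: (batch, length) integer residues. Mask a random fraction and
    score the model only at the corrupted positions it must denoise."""
    t = torch.rand(seqs.size(0), 1, device=seqs.device)       # corruption level
    corrupt = torch.rand_like(seqs, dtype=torch.float) < t    # positions to mask
    noised = seqs.masked_fill(corrupt, MASK)
    logits = model(noised)
    return F.cross_entropy(logits[corrupt], seqs[corrupt])
```

At generation time, the same kind of model can start from an all-mask sequence and iteratively fill in residues, reversing the corruption process.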
The second thing we asked was, OK, now if we give some context to the model, a little bit of information, can we guide the generation to fulfill particular properties that we want to see in that protein? And so specifically here, we experimented with two types of experiments where first, we can give a part of the protein to the model, let's say, a part of the protein that binds to another protein. And we hold that part constant and ask the model to generate the sequence around that. And we see that we can do really well on this task, as well. And why that's important is because it means we can now design new proteins that meet some criteria that we, the users, want the protein to have. For example, the ability to bind to something else. And finally, the last really exciting result was … one point that we've talked about is why we want to do this generation in sequence space rather than structure—because structure is difficult, it's expensive, and there are particular types of proteins that don't actually end up folding into a final 3D structure. They're what we call disordered. And these types of disordered proteins have really, really important roles in biology and in disease. And so what we show is that because we do our generation and design in protein sequence space, we can actually generate these types of disordered proteins that are completely inaccessible to methods that rely on using information about the protein's 3D shape.

HUIZINGA: So, Kevin, building on Ava's description there of the structure and sequence space, how is your work significant in terms of real-world impact?

YANG: Right, so there's a lot of interest in designing or generating new proteins that do useful things as therapeutics or as industrial catalysts and for a lot of other things, as well. And what our work really does is it gives us a method that can reliably generate high-quality proteins directly in sequence space. And this is good because now we can leverage evolutionary-scale data to do this on any downstream protein engineering problem without relying on a structure-based design or structure-based data. And we're hoping that this opens up a lot of possibilities for protein engineering, protein design, and we're really excited about some new experimental work that we—and we hope others—will use to build on this method.

HUIZINGA: Are you guys the first to move into the evolutionary scale in this? Is that a differentiator for your work?

YANG: So there have been a few other preprints or papers that talk about applying diffusion to protein sequences. The difference here is that, yes, like I said, we're the first ones to do this at evolutionary scale. So people will also train these models on small sets of related protein sequences. For example, you might go look for an enzyme family and find all the sequences in nature of that family and train a model to generate new examples of that enzyme. But what we're doing is we're looking at data that's from all different species and all different functional classes of proteins and giving us a model that is hopefully universal or as close to universal as we can get for protein sequence space.

HUIZINGA: Wow. Ava, if there was one thing you want listeners to take away from this work, what would it be?
AMINI: If there's one thing to take away, I think it would be this idea that we can and should do protein generation over sequence because of the generality we're able to achieve, the scale that we're able to achieve, and the modularity, and that our diffusion framework gives us the ability to do that and also to control how we design these proteins to meet specific functional goals.

HUIZINGA: So, Kevin, to kind of wrap it up, I wonder if you could address what unanswered questions still remain, or unsolved problems in this area, and what's next on your research agenda.

YANG: So there's kind of two directions we want to see here. One is, we want to test better ideas for conditioner models. And what I mean there is we want to feed in text or a desired chemical reaction or some other function directly and have it generate those things that will then go work in the lab. And that's a really big step up from just generating sequences that work and are novel. And two is, in biology and in protein engineering, models are really good, but what really matters is, do things work in the lab? So we are actually looking to do some of our own experiments to see if the proteins we generate from EvoDiff work as desired in the lab.

[MUSIC PLAYS]

HUIZINGA: Ava Amini and Kevin Yang, thanks so much for joining us today, and to our listeners, thanks for tuning in. If you're interested in learning more about the paper, you can find a link at aka.ms/abstracts or you can find a preprint of the paper on bioRxiv. See you next time on Abstracts!

The post Abstracts: September 13, 2023 appeared first on Microsoft Research.
AI Explainer: Foundation models and the next era of AI
The release of OpenAI's GPT-4 is a significant advance that builds on several years of rapid innovation in foundation models. GPT-4, which was trained on the Microsoft Azure AI supercomputer, has exhibited significantly improved abilities across many dimensions—from summarizing lengthy documents, to answering complex questions about a wide range of topics and explaining the reasoning behind those answers, to telling jokes and writing code and poetry.

Microsoft Senior Principal Research Manager Ahmed H. Awadallah was among a group of researchers across the company who have worked in partnership with OpenAI over several months to evaluate this new model's capabilities. In this video, recapped below, he tells the story of the technical innovations in recent years that have brought us to this moment: the surprising progress of GPT-4's predecessor models, leading up to the capabilities demonstrated in ChatGPT, and the integration of the latest models into Bing.

In this article:
- Introduction to foundation models [00:00-11:01]
- From GPT-3 to ChatGPT – a jump in generative capabilities [11:02-19:07]
- Everyday impact: Integrating foundation models and products [19:09-27:20]
- Transcript

Introduction to foundation models [00:00-11:01]

Over the last decade, AI has made significant progress on perception tasks like image recognition and language processing. More recently, the field is witnessing new advances in the form of generative AI, underpinned by a class of large-scale models known as foundation models. Foundation models are trained on massive amounts of data and are capable of performing a wide range of tasks. With a simple natural language prompt like "describe a scene of the sun rising over the beach," generative AI models can output a detailed description or produce an image based on the generated description, which can then be animated or even turned into video. Many recent language models are not only good at generating text but also generating, explaining, and debugging code.

Three components have been driving these advances:

- The transformer architecture: A popular choice across modalities, the transformer architecture is efficient, easy to scale and parallelize, and can model interdependence between different components in input and output data.
- Scale: Growing model size and the use of increasingly large amounts of data have resulted in what is being termed "emerging capabilities." When models reach a critical size, they begin displaying capabilities not previously present.
- In-context learning: Showing potential on a range of applications, from text classification to translation and summarization, this new training paradigm provides pre-trained models with instructions for new tasks or just a few examples instead of training or fine-tuning models on labeled data. Because no additional data or training is needed and prompts are provided in natural language, models can be applied right out of the box and aren't limited to those with developer experience.

From GPT-3 to ChatGPT – a jump in generative capabilities [11:02-19:07]

With the November 2022 release of ChatGPT, a language model optimized for dialogue, we saw exciting developments in text generation.
Compared with GPT-3, an earlier language model in the GPT family, ChatGPT not only provides longer, more thorough, and more structured responses to questions and instructions but can also produce answers in different styles, or tones, and tailor explanations to different audiences, like a child, a first-year college student, or someone with a PhD.

Earlier language models such as GPT-3 were trained to predict the next word in a sentence using large amounts of text from the web with no direct human supervision. Several additional training approaches have helped fuel the improved performance of later models such as ChatGPT. These models are being trained on code in addition to text, which seems to be providing another opportunity to identify the relationship between different parts of speech. This is resulting in models that are better at following instructions and reasoning than models trained on text alone. Human-generated data is also contributing to better outputs. Instruction tuning adds the step of training models on prompts and responses created by a human, while model-generated responses ranked by a human are being employed to train a reward model that can be used to train the main model with reinforcement learning.

The fast-paced advancements demonstrated by these models have challenged one of the traditional methods used to measure progress: benchmarks. Improvements are happening so fast that benchmarks are becoming obsolete, with many solved or saturated as quickly as they come out.

Everyday impact: Integrating foundation models and products [19:09-27:20]

Foundation models are already appearing in products available today. For example, GitHub Copilot leverages OpenAI Codex to assist in writing code. The AI pair programmer has been shown to not only make developers feel more productive but to support them in actually getting more done. A GitHub study found participants using Copilot were 55 percent more productive than participants without access to Copilot.

Combining language models optimized for dialogue with external knowledge sources and tools is another avenue for improved experiences. The new Bing, for instance, brings together these models and search. Years of research have yielded insight into the web search experience; much of it involves reviewing and synthesizing information across a variety of resources identified via multiple queries, which is time-consuming. The new Bing can do the heavy lifting for the searcher, working behind the scenes to make the necessary queries, collect results, synthesize the information, and present a single complete answer.

Large language models and foundation models more broadly are not without their limitations, however. There are issues such as reliability, accuracy, staleness, and provenance that need to be explored. Additionally, each specific application of one of these models comes with its own challenges and opportunities. For example, in applying foundation models to web search, we need to rethink the overall experience, including how people interact with search and how we improve, measure, and personalize the experience over time.

Transcript

Introduction to foundation models [00:00–11:01]

Hello, everyone. My name is Ahmed Awadallah. I am a researcher here at Microsoft Research. Today, I am going to be talking about foundation models and the impact they are having on the current era of AI.
If we look back at the last five to 10 years, AI has been making significant impact on many perception tasks like image and object recognition, speech recognition, and most recently on language understanding tasks, where we have been seeing different AI models achieving superior performance and in many cases reaching performance equal to what a human annotator would do on the same task. Over the last couple of years, though, the frontier of AI has changed toward generative AI.

We have had quite good text generation models for some time. You could actually prompt a model asking it to describe an imaginary scene, and it will produce a very good description of what you have asked it to do. And then we started making a lot of progress on image generation, as well. With models like DALL-E 2 and Imagen and even models coming out from startups like Midjourney and Stability AI, we have been getting to a level of quality of image generation that we have never seen before. Inspired by that, there has been also a lot of work on animating the generated images or even generating videos from scratch. Another frontier for generative models has been code, and not only generating code based on a text prompt but also explaining the code or in some cases even debugging the code. I was listening to this episode of the Morning Edition on NPR when it aired at the beginning of February where they were attempting to use a bunch of AI models for producing a schematic design of a rocket and also for coming up with some equations for the rocket design. And, of course, the hypothetical design would have crashed and burned, but I couldn't help but think how exciting it is that AI has become so good that we are even attempting to measure its proficiency on a field as complex as rocket science.

[2:11] If we look back, we will find that there are three main components that led to the current performance we are seeing from AI models: the transformer architecture, scale, and in-context learning. The transformer in particular has been dominating the field of AI for the previous years. At the beginning, we started with natural language processing, and the architecture was so efficient that it took over the field of natural language processing within a very short amount of time. The transformer is a very efficient architecture that's easy to scale, easy to parallelize, and relies at its heart on the attention mechanism, a technique that allows us to model interdependence between different components or different tokens in our input and output data. Transformers started off mostly in natural language processing, but slowly but surely, they made their way to pretty much any modality. So now we are seeing that models that are operating on images, on videos, on audio, and many other modalities are also using transformers. Five years since their inception, transformers have surprisingly changed little compared to when they started, despite so many attempts at producing better and more efficient variants, perhaps because the gains were limited to certain use cases or perhaps because the gains did not persist at scale. Another potential reason is that maybe they made the architecture less universal, which has been one of its biggest advantages.
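As a minimal illustration of the attention mechanism described above, here is scaled dot-product attention in PyTorch. This is a bare-bones sketch, not any production implementation: every position scores its similarity to every other position, and the output is a weighted mix of values.

```python
import torch
import torch.nn.functional as F

def scaled_dot_product_attention(q, k, v):
    # q, k, v: (batch, seq_len, dim)
    scores = q @ k.transpose(-2, -1) / (q.size(-1) ** 0.5)  # pairwise similarities
    weights = F.softmax(scores, dim=-1)                     # attention distribution per position
    return weights @ v                                      # weighted combination of values
```

Because every token can attend to every other token in one step, this single operation is what lets the model capture the global interdependence mentioned above, and it parallelizes naturally across the sequence.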
[03:53] The next point is scale, and when we talk about scale, we really mean the amount of compute that's being used to train the model. That can be translated into training bigger and bigger models with a larger and larger number of parameters—and we have been seeing a steady increase of that over the previous years—but scale could also mean more data, using more data to train the model. And we have seen different models over the previous few years taking different approaches in deciding how much data to use and how large the model should be. But the consistent trend is that we have been scaling larger and larger and using more and more compute. Scale has also led to what is being called "emerging capabilities." And that's one of the most interesting properties of scale that has been described over the previous year or so. By emerging capability, we mean that the model starts to show a certain ability that appears only when it reaches a critical size. Before that, the model does not demonstrate the ability at all. For example, let's look at the figures here, and on the left-hand side, we see arithmetic. If we try to use language models to solve arithmetic word problems, up until a certain scale, they absolutely cannot solve the problem in any way, and they do not perform any better than random. But then at a certain critical point, we start seeing improved performance, and that performance just keeps getting better and better. And we have seen that at so many other tasks, as well, ranging from arithmetic to transliteration to multitask learning.

[05:38] And perhaps one of the most exciting emerging capabilities of language models recently is their ability to learn in context, which has been introducing a new paradigm for using these models. If we take a look back at how we have been practicing machine learning in general, with deep learning, you would start by choosing an architecture, a transformer or before that an RNN or CNN, and then you train your model with full supervision. You have a lot of labeled data, and you train your model based on that data. When we started getting into pre-trained models, instead of training models from scratch, we actually start off with a pre-trained model and then fine-tune it, still on a lot of fully supervised labeled data for the task at hand. But then with in-context learning, suddenly we can actually use the models out of the box. We can just use a pre-trained model and use a prompt in order to perform a new task without actually doing any learning. We can do that in zero-shot settings, meaning we do not provide any examples at all, just instructions or a description of what the task is, or in a few-shot setting, where we just provide a small handful of examples to the model. For example, if we are interested in trying to do text classification—in this case sentiment analysis—we can just provide the text to the model and ask it to classify the text as either positive or negative. If the task is a little bit harder, we can provide few-shot samples, just a few examples of how we want the model to classify things into, say, positive, negative, or neutral, and then ask the model to reason about a new piece of text, and it actually does pretty well at it. And it's not only simple tasks like text classification. We can do translation or summarization and much more complex tasks with that paradigm.
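The zero-shot and few-shot patterns just described amount to nothing more than how the prompt string is constructed. A hypothetical example for the sentiment task mentioned above (the wording is illustrative, not from the talk):

```python
# Illustrative prompt strings only: the zero-shot pattern gives just an
# instruction; the few-shot pattern prepends a handful of worked examples
# before the new input the model should complete.
zero_shot = (
    "Classify the sentiment of the text as positive or negative.\n"
    "Text: The battery life on this laptop is fantastic.\n"
    "Sentiment:"
)

few_shot = (
    "Text: I loved every minute of it. Sentiment: positive\n"
    "Text: The service was slow and rude. Sentiment: negative\n"
    "Text: It was fine, nothing special. Sentiment: neutral\n"
    "Text: The battery life on this laptop is fantastic.\n"
    "Sentiment:"
)
```

Either string is sent to the model as-is; no gradient updates or labeled training sets are involved, which is exactly why the task is said to be adapted to the model rather than the model to the task.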
We can even try to do things like arithmetic, where we try to give the model a word problem and ask it to come up with the answer. In the example we are showing right now, we gave the model just one sample to show it how we would solve a problem and then asked it to solve another problem. But in that particular case, the model actually failed. It did produce an answer, but it was not the correct answer. But then came the idea of chain-of-thought prompts, where instead of just showing the model the input and the output, we can actually also show it the steps it can take in order to get to that output from that particular input. In that case, we are just solving the arithmetic word problem step by step and showing an example of that to the model. When we do that, the models are not only able to produce the correct answer, but they are also able to walk us step by step through how they produced that answer. That mechanism is referred to as chain-of-thought prompting, and it has been very prominently used in so many tasks, showing very superior performance on multiple tasks. It has been also used in many different ways, including in fine-tuning and training some of the models. The "pre-train and then fine-tune" paradigm has been the established paradigm for years, since maybe the inception of BERT and similar pre-trained language models. But now you would see that there's an increased shift into using the models by prompting them instead of having to fine-tune them. That's evident in a lot of practical usage of the models but even in publications in the machine learning areas that have been using natural language processing tasks and switching into using prompting instead of fine-tuning. In-context learning and prompting matter a lot because they are actually changing the way we apply the models to new tasks. The ability to apply the models to new tasks out of the box, without collecting additional data and without doing any additional training, is an amazing ability that increases the number of tasks the models can be applied to and also reduces the amount of effort needed to build models for these tasks.

[09:57] The performance has been also amazing by just providing only a few examples, and the tasks in this setting are being adapted to the model rather than the models being adapted to the tasks. If you think about the fine-tuning paradigm, what we did is that we already had the pre-trained model and we were fine-tuning it to adapt to the task. Now we are trying to frame the task in a way that's more friendly to how the model is being trained so that the model can perform well on the task even without any fine-tuning. Finally, this allows humans to interact with the models in their normal form of communication, in natural language. We can just give instructions describing the task that we want, and the model would perform the task. And that blurs the line between who is an ML user and who is an ML developer because now anyone can just prompt and describe different tasks to the language model and get the language model to do a large number of tasks without having to have any training or any development involved.
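To make the chain-of-thought idea above concrete, here is a hedged illustration in the style of published examples (the wording is hypothetical): the second prompt spells out the intermediate steps in its exemplar instead of jumping to the final answer.

```python
# Standard one-shot prompt: the exemplar shows only input and final answer.
standard_prompt = (
    "Q: Roger has 5 tennis balls. He buys 2 cans with 3 balls each. "
    "How many balls does he have now?\n"
    "A: 11\n"
    "Q: The cafeteria had 23 apples. It used 20 and bought 6 more. "
    "How many apples are there now?\n"
    "A:"
)

# Chain-of-thought prompt: the exemplar also shows the reasoning steps,
# encouraging the model to work the new problem step by step.
chain_of_thought_prompt = (
    "Q: Roger has 5 tennis balls. He buys 2 cans with 3 balls each. "
    "How many balls does he have now?\n"
    "A: Roger started with 5 balls. 2 cans of 3 balls is 6 balls. "
    "5 + 6 = 11. The answer is 11.\n"
    "Q: The cafeteria had 23 apples. It used 20 and bought 6 more. "
    "How many apples are there now?\n"
    "A:"
)
```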
From GPT-3 to ChatGPT—a jump in generative capabilities [11:02–19:07]

[11:02] Now looking back at the last three months or so, we have been seeing the field changing quite a bit and a tremendous amount of excitement happening around the release of the ChatGPT model. And if we think about the ChatGPT model as a generative model, we would see that there have been other generative models out there from the GPT family and other models, as well, that have been doing a decent job at text generation. So you can take one of these models, in this case GPT-3, and prompt it with a question asking it to explain what a foundational language model means, and it would give you a pretty decent answer. You can ask the same question to ChatGPT and you'll find that it's able to provide a much better answer. It's longer; it's more thorough; it's more structured. You can ask it to style it in different ways. You can ask it to simplify it in different ways. And all of these are capabilities that the previous generation of models could not really do. If we look at how ChatGPT is described, the description lists different things, but it's mostly optimized for dialogue, allowing humans to interact in natural language. It's much better at following instructions and so on and so forth. If we look step by step at how this actually was manifested in the training, we will see from the description, looking at the base models that ChatGPT was built on and other models before ChatGPT, that language model training was following a self-supervised pre-training approach, where we have a lot of unsupervised language, web-scale language, that we are training the models on, and the models in this particular case are trained with an autoregressive next-word prediction approach. So we are looking at an input context, which is a sentence or a part of a sentence, and trying to predict the next word. But then over the last year or so, we have been seeing a shift where models are being trained not just on text but also on code. For example, GPT-3.5 models are trained on both text and code, and surprisingly, training the models on both text and code improves their performance on many tasks that have nothing to do with code. In the figure we see right now, we see different models being compared—models that were trained with code and models that were not—and we are seeing that the models that were trained with both text and code show better performance at following task instructions and better performance at reasoning, compared to similar models that were trained on text only. So the training on code seems to be grounding the models in different ways, allowing them to learn a little bit more about how to reason and how to look at structured relations between different parts of the text.

[13:59] The second main difference is the idea of instruction tuning, which you have been seeing become more and more popular across different models over the last year, maybe starting with InstructGPT, which introduced the idea of training the models on human-generated data. And this is a departure from the traditional self-supervised approach, where we have been only training the model on unsupervised, free, unstructured text. Now there's an additional step in the training process that actually trains the models on human-generated data. The human-generated data takes the format of a prompt and a response, and it's trying to teach the model to respond in a particular way given a prompt, and this step of instruction tuning has been actually helping the models get a lot better, especially in zero-shot performance.
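A minimal sketch of the instruction-tuning step just described, under assumed details (this is not OpenAI's training code): each record pairs a human-written prompt with a human-written response, and the next-token loss is computed only on the response tokens.

```python
# `model` is any callable returning next-token logits for a token sequence,
# e.g. a decoder-only transformer; it is a hypothetical stand-in here.
import torch
import torch.nn.functional as F

def instruction_tuning_loss(model, prompt_ids, response_ids):
    """prompt_ids, response_ids: (batch, len) token tensors from one record."""
    tokens = torch.cat([prompt_ids, response_ids], dim=1)
    logits = model(tokens)                   # (batch, len, vocab)
    targets = tokens[:, 1:]                  # predict token t+1 from prefix up to t
    preds = logits[:, :-1]
    start = prompt_ids.size(1) - 1           # first position whose target is a response token
    return F.cross_entropy(
        preds[:, start:].reshape(-1, preds.size(-1)),
        targets[:, start:].reshape(-1),
    )
```

Masking the prompt positions out of the loss is what teaches the model to *respond* to instructions rather than merely continue them.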
And we see here that the instruction-tuned models tend to perform a lot better than their non-instruction-tuned counterparts, especially in zero-shot settings. And the last step of the training process introduces yet more human-generated data. In this case, we actually have different responses generated by the model, and we have a human providing preferences over these responses, in a sense ranking the responses and choosing which response is better than the others. This data is used to train a reward model that can then be used to actually train the main model with reinforcement learning. And this approach further aligns the model into responding in certain ways that correspond to the way the human has been providing the data. This notion of training the model with human data is very interesting, and it's creating a lot of traction, with many people thinking about the best technique to train on human data, the best form of human feedback to collect to train the model on, and it would probably help us improve the models even further in the near future.

[16:02] Now with all these advances we have been seeing, the pace of innovation and the acceleration of the advances have been moving so fast that it has been very challenging in so many ways, but perhaps one of the most profound ways it has been challenging is the notion of benchmarking. Traditionally, research in machine learning has been very dependent on using very solid benchmarks to measure the progress of different approaches. But the pace of innovation has been really challenging that recently. To understand how fast the progress has been, let's look at this data coming from Hypermind, a forecasting company that uses crowd forecasting and has been tracking some of the AI benchmarks recently. The first benchmark is the Massive Multitask Language Understanding benchmark, a large collection of language understanding tasks. In June of 2021, a forecast was made that in a year, by June 2022, we would get to around 57 percent performance on this task. But in reality, what happened is that by June 2022, we were at around 67 percent, and a couple of months later, we were at 75 percent, and we keep seeing more and more fast improvements after that. A second task is the MATH task, which is a collection of middle and high school math problems, and here the prediction was that in a year, we would get to around 13 percent. But in reality, we ended up going much beyond that within one year, and we still see more and more advances happening at a faster-than-expected pace. That rate of improvement is actually resulting in a lot of the benchmarks being saturated really fast.

[17:51] If we look back at benchmarks like MNIST and Switchboard, it took the community 20-plus years to fully saturate these benchmarks. And that has been accelerating to the point where now we see benchmarks being saturated in a year or less. In fact, many of the benchmarks are becoming obsolete, to the point that only 66 percent of machine learning benchmarks have received more than three results at different time points, and many of them are solved or saturated soon after they are released. And that actually motivated the community to come together with very large efforts to try to design benchmarks that are designed specifically to challenge large language models. In that particular case, with BIG-bench, more than 400 authors from over 100 institutions came together to create it.
But even with such an elaborate effort, we are seeing very fast progress, and with large language models and the chain-of-thought prompting that we discussed earlier, we are making very fast progress against the hardest tasks in BIG-bench, and on many of them, models are already performing better than humans right now.

Everyday impact: Integrating foundation models and products [19:09–27:20]

[19:09] The foundation models are not only getting better and better at benchmarks, but they are actually changing many products that we use every day. We mentioned code generation earlier, so let's talk a little bit about Copilot. GitHub Copilot is a new experience that helps developers write code, and Copilot is very interesting from many perspectives. One is how fast it went from the model being created in research to the point it became a product generally available in GitHub Copilot, but also in how much value it has been generating. This study that was done by the GitHub Copilot team was looking at quantifying the value these models were providing to developers. In the first part of the study, they asked different questions to the developers, trying to assess how useful the models are, and we see that 88 percent of the participants reported that they feel much more productive when using Copilot than before, and they reported many other positive implications for their productivity, as well. But perhaps even more interesting, the study ran a controlled experiment where two groups of developers tried to solve the same set of tasks. One group had access to Copilot, and the other group did not, and interestingly, the group that had access to Copilot not only finished the tasks at a higher success rate but also at a much more efficient rate. Overall, they were 55 percent more productive. Fifty-five percent more productivity in a coding scenario is an amazing result that a lot of people would have been very surprised to see a model like Copilot deliver so fast with such value.

[21:10] Now beyond code generation and text generation, another frontier where these models are starting to shine is when we start connecting them with external knowledge sources and external tools. Language models that have been optimized for dialogue have amazing language capabilities; they do really well at understanding language and at following instructions. They also do really well at synthesizing and generating answers. They are also conversational in nature and do store knowledge from the training data that they were trained on. But they do have a lot of limitations around reliability, factualness, staleness, access to more recent information that was not part of the training data, provenance, and so on. And that's why connecting these models to external knowledge sources and tools could be super exciting. Let's talk about, for example, connecting language models to search, as we have seen recently with the new Bing.

[22:14] If we take a look back years ago, there were many, many studies studying web search, studying tasks that people try to complete in web search scenarios. And many of these tasks were deemed complex search tasks, tasks that are not navigational, as in trying to go to a particular website, or that are not simple informational tasks where you are trying to look up a fact that you can quickly get with one query, but more complex tasks that involve multiple queries.
Maybe you are planning travel, maybe you are trying to buy a product, and as part of your research process, there are multifaceted queries that you would like to look at. There has been a lot of research understanding behavior with such tasks, how prevalent they are, and how much time and effort people spend in order to perform them. And they typically involve spending a significant amount of time with the search engine, reading and synthesizing information from different sources with different queries. But with a new experience like the experience Bing is providing, we can actually take one of these queries and provide much more complex, long queries to the search engine. And the search engine uses both search and the power of the language model to generate multiple queries, get the results of all of these queries, and synthesize a detailed answer back to the searcher. Not only that, but it can recommend additional searches and additional ways you could interact with the search engine in order to learn more. That has the potential of saving a lot of time and a lot of effort for many searchers by supporting these complex search tasks in a much better way. Not only that, but some of these complex search tasks are multistep in nature, where I would start with one query and then follow up with another query based on the information I get from the first query. Imagine that I am doing this search before the Super Bowl where I am trying to understand some comparisons, stats, between the two quarterbacks that are going to face each other, and I start with that query. What the search engine did in that particular case is that it actually started with a query where it was trying to identify who are the two quarterbacks that are going to be playing in the Super Bowl. If I had done that as a human, I would have done the same. I would have identified the teams and the two quarterbacks, and then maybe I would follow up with another query where I would actually search for the stats of the two quarterbacks I am asking about, get that, actually synthesize the information maybe from different results, and then get to the answer I am looking for. But with the new Bing experience, I can just issue the query and all of that happens in the background. Different search queries are being generated and submitted to the search engine, recent results are getting collected, and a single answer is being synthesized and displayed, making me as a searcher much more productive and much more efficient.

[25:21] The potential of large language models integrated with search and other tools is very large and can add much, much value to so many scenarios. But there are also a lot of challenges, a lot of opportunities, and a lot of limitations that need to be addressed. Reliability and safety are one of them; making the models more accurate; thinking about trust, provenance, and bias. User experience and behavior, and how the new experience would affect how users interact with the search engine, is another one, with new and different tasks or different interfaces or even different behavior models. Search has been a very well-studied experience, and we have a very good understanding of how users interact with the search engine and very reliable behavior models to predict that. Changing this experience will require a lot of additional study there.
Personalization and managing preferences and search history and so on and so forth has also been a very well-studied field in web search, and with new experiences like that, we have so many opportunities in thinking about things like personalization and user experience again, but also evaluation and what metrics mean. How do we measure satisfaction? How do we understand good and bad abandonment? Good abandonment as in when people are satisfied with the result but don't have to click on anything on the search result page, and bad abandonment being the opposite of that. Thinking about feedback loops, which have been playing a large part in improving search engines, and how we can apply them to new experiences and new scenarios. So while integrating language models with an experience like search and other tools and experiences is very exciting, it's actually also creating so many opportunities for new research problems or for revisiting previous search problems that we had a very good understanding of.

Conclusion [27:21–28:37]

[27:21] To conclude, we have been seeing incredible advances with AI over the past couple of years. The progress has been accelerating and outpacing expectations in so many ways, and the advances are not only in terms of academic benchmarks and publications, but we are also seeing an explosion of applications that are changing the products that we use every day. However, we are really much closer to the beginning of a new era with AI than we are to the end state of AI capabilities. There are so many opportunities, and we will probably see a lot more advances and even more accelerated progress over the coming months and years. And there are so many challenges that remain and many new opportunities that are arising because of the state of where these models are. It's a very exciting time for AI, and we are really looking forward to seeing the advances that will happen moving forward and to the applications that will result from these advances and how they will affect every one of us with the products we use every day. Thank you so much.

[END]

The post AI Explainer: Foundation models and the next era of AI appeared first on Microsoft Research.
Azure AI milestone: New Neural Text-to-Speech models more closely mirror natural speech
Neural Text-to-Speech—along with recent milestones in computer vision and question answering—is part of a larger Azure AI mission to provide relevant, meaningful AI solutions and services that work better for people because they better capture how people learn and work—with improved vision, knowledge understanding, and speech capabilities. At the center of these efforts is XYZ-code, a joint representation of three cognitive attributes: monolingual text (X), audio or visual sensory signals (Y), and multilingual (Z). For more information about these efforts, read the XYZ-code blog post.

Neural Text-to-Speech (Neural TTS), a powerful speech synthesis capability of Azure Cognitive Services, enables developers to convert text to lifelike speech. It is used in voice assistant scenarios, content read aloud capabilities, accessibility tools, and more. Neural TTS has now reached a significant milestone in Azure, with a new generation of Neural TTS model called Uni-TTSv4, whose quality shows no significant difference from sentence-level natural speech recordings.

Microsoft debuted the original technology three years ago, with close to human-parity quality. This resulted in TTS audio that was more fluid, natural sounding, and better articulated. Since then, Neural TTS has been incorporated into Microsoft flagship products such as Edge Read Aloud, Immersive Reader, and Word Read Aloud. It's also been adopted by many customers such as AT&T, Duolingo, Progressive, and more. Users can choose from multiple pre-set voices or record and upload their own sample to create custom voices instead. Over 110 languages are supported, including a wide array of language variants, also known as locales.

The latest version of the model, Uni-TTSv4, is now shipping into production on a first set of eight voices (shown in the table below). We will continue to roll out the new model architecture to the remaining 110-plus languages and Custom Neural Voice in the coming milestone. Our users will automatically get significantly better-quality TTS through the Azure TTS API, Microsoft Office, and the Edge browser.

Measuring TTS quality

Text-to-speech quality is measured by the Mean Opinion Score (MOS), a widely recognized scoring method for speech quality evaluation. For MOS studies, participants rate speech characteristics for both recordings of people's voices and TTS voices on a five-point scale. These characteristics include sound quality, pronunciation, speaking rate, and articulation. For any model improvement, we first conduct a side-by-side comparative MOS test (CMOS) with production models. Then, we do a blind MOS test on the held-out recording set (recordings not used in training) and the TTS-synthesized audio and measure the difference between the two MOS scores.

During research of the new model, Microsoft submitted the Uni-TTSv4 system to Blizzard Challenge 2021 under its code name, DelightfulTTS. Our paper, "DelightfulTTS: The Microsoft Speech Synthesis System for Blizzard Challenge 2021," provides in-depth detail of our research and the results. The Blizzard Challenge is a well-known TTS benchmark organized by world-class experts in TTS fields, and it conducts large-scale MOS tests on multiple TTS systems with hundreds of listeners. Results from Blizzard Challenge 2021 demonstrate that the voice built with the new model shows no significant difference from natural speech on the common dataset.
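As a rough illustration of this kind of evaluation, the snippet below compares paired MOS ratings for held-out human recordings and the corresponding TTS samples using a Wilcoxon signed-rank test, the test reported in the results that follow. The scores here are made-up placeholders, not the study's data.

```python
from scipy.stats import wilcoxon

# Paired per-utterance MOS ratings (placeholder values for illustration).
human_mos = [4.4, 4.1, 4.5, 4.2, 4.6, 4.3, 4.4, 4.5]
tts_mos = [4.3, 4.2, 4.4, 4.2, 4.5, 4.2, 4.4, 4.4]

stat, p_value = wilcoxon(human_mos, tts_mos)

# If p > 0.05, the difference between human and TTS scores is not
# statistically significant at the 5% level, which is the criterion the
# post uses to call the two "not significantly different."
print(f"statistic={stat:.1f}, p-value={p_value:.3f}")
```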
Measurement results for Uni-TTSv4 and comparison

The MOS scores below are based on samples produced by the Uni-TTSv4 model under the constraints of real-time performance requirements. A Wilcoxon signed-rank test was used to determine whether the MOS scores differed significantly between the held-out recordings and TTS. A p-value of 0.05 or less (≤ 0.05) is statistically significant, and a p-value higher than 0.05 (> 0.05) is not statistically significant. A positive CMOS number shows gain over production, meaning the new voice is more highly preferred by people judging it in terms of naturalness.

| Locale (voice) | Human recording (MOS) | Uni-TTSv4 (MOS) | Wilcoxon p-value | CMOS vs. production |
| En-US (Jenny) | 4.33 (±0.04) | 4.29 (±0.04) | 0.266 | +0.116 |
| En-US (Sara) | 4.16 (±0.05) | 4.12 (±0.05) | 0.41 | +0.129 |
| Zh-CN (Xiaoxiao) | 4.54 (±0.05) | 4.51 (±0.05) | 0.44 | +0.181 |
| It-IT (Elsa) | 4.59 (±0.04) | 4.58 (±0.03) | 0.34 | +0.25 |
| Ja-JP (Nanami) | 4.44 (±0.04) | 4.37 (±0.05) | 0.053 | +0.19 |
| Ko-KR (Sun-hi) | 4.24 (±0.06) | 4.15 (±0.06) | 0.11 | +0.097 |
| Es-ES (Alvaro) | 4.36 (±0.05) | 4.33 (±0.04) | 0.312 | +0.18 |
| Es-MX (Dalia) | 4.45 (±0.05) | 4.39 (±0.05) | 0.103 | +0.076 |

A comparison of human and Uni-TTSv4 audio samples

Listen to the recording and TTS samples below to hear the quality of the new model. Note that the recordings are not part of the training set. These voices are updated to the new model in the Azure TTS online service. You can also try the demo with your own text. More voices will be upgraded to Uni-TTSv4 later. For each locale, a human recording and a Uni-TTSv4 sample of the same sentence are available:

- En-US (Jenny): The visualizations of the vocal quality continue in a quartet and octet.
- En-US (Sara): Like other visitors, he is a believer.
- Zh-CN (Xiaoxiao): 另外，也要规避当前的地缘局势风险，等待合适的时机介入。
- It-IT (Elsa): La riunione del Consiglio di Federazione era prevista per ieri.
- Ja-JP (Nanami): 責任はどうなるのでしょうか？
- Ko-KR (Sun-hi): 그는 마지막으로 이번 앨범 활동 각오를 밝히며 인터뷰를 마쳤다
- Es-ES (Alvaro): Al parecer, se trata de una operación vinculada con el tráfico de drogas.
- Es-MX (Dalia): Haber desempeñado el papel de Primera Dama no es una tarea sencilla.

How Uni-TTSv4 works to better represent human speech

Over the past three years, Microsoft has been improving its engine to make TTS that more closely aligns with human speech. While the typical Neural TTS quality of synthesized speech has been impressive, the perceived quality and naturalness still have space to improve compared to human speech recordings. We found this is particularly the case when people listen to TTS for a while. It is in the very subtle nuances, such as variations in tone or pitch, that people are able to tell whether speech is generated by AI.

Why is it so hard for a TTS voice to reflect human vocal expression more closely? Human speech is usually rich and dynamic. With different emotions and in different contexts, a word is spoken differently. And in many languages this difference can be very subtle. The expressions of a TTS voice are modeled with various acoustic parameters. Currently it is not very efficient for those parameters to model all the coarse-grained and fine-grained details on the acoustic spectrum of human speech. TTS is also a typical one-to-many mapping problem, where there could be multiple varying speech outputs (for example, pitch, duration, speaker, prosody, style, and others) for a given text input.
Thus, modeling such variation information is important to improve the expressiveness and naturalness of synthesized speech.

To achieve these improvements in quality and naturalness, Uni-TTSv4 introduces two significant updates in acoustic modeling. In general, transformer models learn the global interaction while convolutions efficiently capture local correlations. First, there's a new architecture with transformer and convolution blocks, which better model the local and global dependencies in the acoustic model. Second, we model variation information systematically from both explicit perspectives (speaker ID, language ID, pitch, and duration) and implicit perspectives (utterance-level and phoneme-level prosody). These perspectives use supervised and unsupervised learning respectively, which ensures end-to-end audio naturalness and expressiveness. This method achieves a good balance between model performance and controllability, as illustrated below:

Figure 1: Acoustic model and vocoder diagram of Uni-TTSv4. First, text input is encoded with the text encoder, and then implicit and explicit information is added to the hidden embeddings from the text encoder, which are then used to predict the mel-spectrogram with a spectrum decoder. Lastly, the vocoder is used to convert the mel-spectrogram into audio samples.

To achieve better voice quality, the basic modeling block needs fundamental improvement. The global and local interactions are especially important for non-autoregressive TTS, considering it has a longer output sequence in the decoder than machine translation or speech recognition, and each frame in the decoder cannot see its history as the autoregressive model does. So, we designed a new modeling block which combines the best of transformer and convolution, where self-attention learns the global interaction while the convolutions efficiently capture the local correlations.

Figure 2: The improved conformer module. The first layer is a convolutional feed-forward layer; the second layer is a depth-wise convolutional layer; the third layer is a self-attention layer; and the last layer is also a convolutional feed-forward layer. Every sub-layer is followed by a layer norm.

The new variance adaptor, based on FastSpeech2, introduces a hierarchical implicit information modeling pipeline from utterance-level prosody and phoneme-level prosody perspectives, together with explicit information like duration, pitch, speaker ID, and language ID. Modeling these variations can effectively mitigate the one-to-many mapping problem and improve the expressiveness and fidelity of synthesized speech.

Figure 3: Variance adaptor with explicit and implicit variation information modeling. First, explicit speaker ID and language ID, along with pitch information, are added to the hidden embeddings from the text encoder with a lookup table. Then, implicit utterance-level and phoneme-level prosody vectors are predicted from the text hidden states. Finally, the hidden representation is expanded with the predicted duration.

We use our previously proposed HiFiNet—a new generation of Neural TTS vocoder—to convert the spectrum into audio samples.

Publication: DelightfulTTS: The Microsoft Speech Synthesis System for Blizzard Challenge 2021. For more details of the above system, refer to the paper.

Working to advance AI with XYZ-code in a responsible way

We are excited about the future of Neural TTS with human-centric and natural-sounding quality under the XYZ-code AI framework.
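For readers who want a feel for the improved conformer-style block described above (Figure 2), here is a rough PyTorch sketch: a convolutional feed-forward layer, a depth-wise convolution, self-attention, and another convolutional feed-forward layer, each followed by a layer norm. Dimensions, kernel sizes, and residual wiring are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class ConvFeedForward(nn.Module):
    """Position-wise feed-forward implemented with 1D convolutions."""
    def __init__(self, dim, kernel=3):
        super().__init__()
        self.conv1 = nn.Conv1d(dim, dim * 4, kernel, padding=kernel // 2)
        self.conv2 = nn.Conv1d(dim * 4, dim, kernel, padding=kernel // 2)
        self.act = nn.ReLU()

    def forward(self, x):                        # x: (batch, time, dim)
        h = x.transpose(1, 2)                    # Conv1d expects (batch, dim, time)
        h = self.conv2(self.act(self.conv1(h)))
        return h.transpose(1, 2)

class ConformerBlock(nn.Module):
    def __init__(self, dim=256, heads=4, kernel=3):
        super().__init__()
        self.ff1 = ConvFeedForward(dim, kernel)
        self.depthwise = nn.Conv1d(dim, dim, kernel, padding=kernel // 2, groups=dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.ff2 = ConvFeedForward(dim, kernel)
        self.norms = nn.ModuleList([nn.LayerNorm(dim) for _ in range(4)])

    def forward(self, x):                        # x: (batch, time, dim)
        x = self.norms[0](x + self.ff1(x))       # local patterns via conv feed-forward
        d = self.depthwise(x.transpose(1, 2)).transpose(1, 2)
        x = self.norms[1](x + d)                 # per-channel local correlations
        a, _ = self.attn(x, x, x)
        x = self.norms[2](x + a)                 # global interactions via self-attention
        return self.norms[3](x + self.ff2(x))
```

The design choice the text describes is visible here: convolutions handle the fine local structure of the spectrogram sequence, while the self-attention sub-layer models long-range dependencies across the whole utterance.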
Working to advance AI with XYZ-code in a responsible way

We are excited about the future of neural TTS with human-centric, natural-sounding quality under the XYZ-code AI framework. Microsoft is committed to the advancement and use of AI grounded in principles that put people first and benefit society. We are putting these Microsoft AI principles into practice throughout the company and strongly encourage developers to do the same. For guidance on deploying AI responsibly, visit Responsible use of AI with Cognitive Services.

Get started with Neural TTS in Azure

Neural TTS in Azure offers over 270 neural voices across over 110 languages and locales. In addition, the capability enables organizations to create a unique brand voice in multiple languages and styles. To explore the capabilities of Neural TTS with some of its different voice offerings, try the demo. For more information:

Read our documentation.
Check out our sample code.
Check out the code of conduct for integrating Neural TTS into your apps.

Acknowledgments

The research behind Uni-TTSv4 was conducted by a team of researchers from across Microsoft, including Yanqing Liu, Zhihang Xu, Xu Tan, Bohan Li, Xiaoqiang Wang, Songze Wu, Jie Ding, Peter Pan, Cheng Wen, Gang Wang, Runnan Li, Jin Wu, Jinzhu Li, Xi Wang, Yan Deng, Jingzhou Yang, Lei He, Sheng Zhao, Tao Qin, Tie-Yan Liu, Frank Soong, Li Jiang, and Xuedong Huang, with support from the Azure Speech and Cognitive Services, Integrated Training Platform, and ONNX Runtime teams, who made this accomplishment possible.
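To round out the getting-started pointers above, a minimal call to Neural TTS through the Azure Speech SDK for Python might look like the sketch below. The key, region, and voice name are placeholders, and the SDK surface may evolve, so treat this as orientation rather than reference documentation.

```python
# Minimal sketch: synthesize one sentence with an Azure neural voice.
# Subscription key, region, and voice name below are placeholders.
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="YOUR_KEY", region="YOUR_REGION")
speech_config.speech_synthesis_voice_name = "en-US-JennyNeural"  # example voice

synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)
result = synthesizer.speak_text_async("Hello from Azure Neural TTS!").get()

if result.reason == speechsdk.ResultReason.SynthesizingAudioCompleted:
    print("Synthesis complete; audio was played on the default output device.")
```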
Internet and technology · 3 years ago
0
0
5
00:04
Research at Microsoft 2021: Collaborating for real-world change
Over the past 30 years, Microsoft Research has undergone a shift in how it approaches innovation, broadening its mission to include not only advancing the state of computing but also using technology to tackle some of the world's most pressing challenges. That evolution has never been more prominent than it was during this past year. Recent events underscore the urgent need to address planet-scale problems. Fundamental advancements in science and technology have a crucial role to play in addressing ongoing societal challenges such as climate change, healthcare equity and access, supply chain logistics, sustainability, security and privacy, and the digital divide. Microsoft Research is increasing its focus on these areas and others to help accelerate transformational change and build trust in technology as it evolves. However, these challenges are too large for any single organization to meet alone. They require broader and more diverse coalitions across the global science and technology community, including businesses, scholars, governments, nongovernmental organizations, and local communities. This year, Microsoft Research hosted the first-ever Microsoft Research Summit, a virtual event that embodied our aspiration to catalyze collaboration and innovation across traditional boundaries. The summit brought together experts from around the world, a mix of speakers from Microsoft and external organizations, to critically examine the ways technology can increase understanding and drive advancement, support creativity and achievement, build a resilient and sustainable society, and open healthcare advances to all while maintaining ethical practices that put people first. This post explores just some of the work that's been done this year by Microsoft Research, alongside its partners and collaborators, to drive real-world impact in critical areas, and our aspirations for further impact in the years to come.

Leading the way for real-world impact

Advancing human knowledge and foundational technologies

Fundamental insights into technology and computing can inspire breakthroughs and new computing paradigms while helping to drive scientific discovery forward. In his plenary talk at the Research Summit, Peter Lee, Corporate Vice President, Microsoft Research & Incubations, cited "The Usefulness of Useless Knowledge," an essay published in Harper's Magazine in 1939 by pioneering educator Abraham Flexner. Among other things, the essay stresses the role that curiosity and exploration play in game-changing technological leaps. It is at this root of invention and innovation, Flexner argues, where patience and belief in shared knowledge are key. LAMBDA, one of this year's first big announcements, shows how the Microsoft research community can make significant contributions to products and customers when given the time and freedom to follow their curiosities. In this case, the product was Microsoft Excel, a program that has benefitted from the efforts of research teams over time. The feature, which resulted from collaboration between members of the Calc Intelligence and Excel teams, gives users the ability to define custom worksheet functions in Excel's formula language, making the formula language Turing-complete, that is, allowing any computation to be written in it. (For instance, the formula =LAMBDA(x, x*x)(4) defines a squaring function and immediately applies it to 4.)
Podcast: Advancing Excel as a programming language with Andy Gordon and Simon Peyton Jones

To make networks in data centers more scalable to future needs, researchers in Optics for the Cloud explored how optical circuit switches could replace resource-heavy electrical switches at a network's core. They demonstrated the system's potential to switch between wavelengths at nanosecond speeds, a necessity for supporting low-latency networks at the required scale, using a microcomb and semiconductor optical amplifiers. A research team also raised the bar for DNA storage, introducing a proof-of-concept molecular controller in the form of a tiny DNA storage writing mechanism on a chip. The chip packs DNA-synthesis spots three orders of magnitude more tightly than before, enabling much higher DNA writing throughput than current systems.

AI at Scale continued to gain momentum in 2021. Large artificial intelligence (AI) models trained using deep learning grew exponentially this year and are one example of fundamental science whose real-world applications are becoming ever more ubiquitous. Microsoft Research teams were recognized for advancing the state of the art and developing new multilingual capabilities to build more inclusive language technologies using AI, as well as for pushing the boundaries of natural language processing (NLP) and computer vision. In June 2021, Microsoft Research's LReasoner system set a new standard for logical reasoning ability among pretrained language models, reaching the top of the official leaderboard for ReClor, a dataset built using questions from the LSAT and GMAT, two standardized admissions tests. Microsoft Turing's T-ULRv5 achieved breakthrough performance on the XTREME leaderboard in September. A few weeks later, Microsoft Turing's model T-NLRv5 reached the top of the SuperGLUE and GLUE leaderboards. Ultimately, these benchmarks and their leaderboards help measure progress toward creating AI that better understands language and better converses with people within and across language boundaries. To understand how quickly these advances are happening, one need only look to Megatron-Turing NLG, the language generation model with 530 billion parameters trained to convergence, a collaboration between Microsoft and NVIDIA.

Figure: Trend of sizes of state-of-the-art NLP models over time.

To train the Megatron-Turing model, DeepSpeed and NVIDIA Megatron-LM paired up to create an efficient and scalable 3D-parallel system harnessing data parallelism, pipeline parallelism, and tensor slicing-based parallelism. Beyond this achievement, the DeepSpeed optimization library added a number of features and tools this year, including DeepSpeed Inference, its first foray into improving inference latency and cost using multiple graphics processing units (GPUs). The team also introduced DeepSpeed MoE, supporting five types of parallelism and training 8x larger models than existing systems, and ZeRO-Infinity, which allowed large model training to scale from one to thousands of GPUs, furthering the effort to democratize model training for everyone.
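To give a flavor of what driving DeepSpeed looks like in code, here is a deliberately tiny sketch. The configuration values and the stand-in model are hypothetical; the real Megatron-Turing recipe combines this kind of setup with Megatron-LM's tensor and pipeline parallelism at vastly larger scale.

```python
# Hypothetical, minimal DeepSpeed setup; not the Megatron-Turing recipe.
import deepspeed
import torch.nn as nn

ds_config = {
    "train_batch_size": 32,
    "gradient_accumulation_steps": 4,
    "fp16": {"enabled": True},
    # ZeRO stage 2 partitions optimizer state and gradients across ranks.
    "zero_optimization": {"stage": 2},
}

model = nn.Linear(1024, 1024)  # stand-in for a real transformer

engine, optimizer, _, _ = deepspeed.initialize(
    model=model,
    model_parameters=model.parameters(),
    config=ds_config,
)
# Training then uses engine.backward(loss) and engine.step() in place of
# the usual PyTorch optimizer calls.
```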
These large AI models are impressive in their own right, but it is what they are able to do to support people and democratize innovation that makes them especially valuable. Advances in language technologies resulted in the expansion of Microsoft translation and spelling-correction technologies into over 100 languages, breaking down language barriers in products like Microsoft Bing and Microsoft Translator. The Microsoft Turing team also introduced Turing Bletchley, a 2.5-billion-parameter Universal Image Language Representation model (T-UILR) that can perform image-language tasks in 94 languages. Meanwhile, researchers from Microsoft Research Asia worked on bridging the gap between computer-language and computer-vision modeling. In October, members of the Visual Computing group won the International Conference on Computer Vision (ICCV) 2021 best paper award for their paper on the Swin Transformer. This vision transformer surpasses the state of the art with its high performance, flexibility, and linear complexity, making it compatible with a broad range of vision tasks. With this work, the research team hopes to inspire additional research in this area that will ultimately enable joint modeling between the computer vision and language domains. Research teams in the same lab examined the potential for transformers to find success beyond language and vision, demonstrating that the neural network architecture can be applied to graph representation learning. With their standard transformer architecture Graphormer, the researchers achieved state-of-the-art performance in the KDD Cup 2021 graph-level prediction track and topped popular graph-level prediction leaderboards.

Amplifying human creativity and achievement

People are multidimensional, pursuing goals and tasks across different areas of their lives, under a variety of circumstances. Microsoft researchers are dedicated not only to helping individuals accomplish more in their personal, professional, and creative lives, but also to helping them feel more confident doing so. Over the past year and a half, researchers and product teams throughout Microsoft have responded swiftly to workplace challenges and opportunities arising from the pandemic. Supporting organizations in executing hybrid work models, they explored technology as an intermediary between people who are physically in the room and those who are not, with some researchers presenting their findings in a hybrid meeting prototype during the Research Summit. Researchers investigated remote and hybrid work from a variety of angles, from longitudinal studies on multitasking behavior to workplace communication insights gleaned using network machine learning, to understand where technology needs to grow to help people thrive under these fluid working conditions. Previous and ongoing work in this area is captured by the New Future of Work Initiative and the annual Work Trend Index. As ML techniques and approaches advance, so does the potential for applications that empower individuals in the workplace and beyond. Research teams are leveraging few-shot learning to help build AI that is truly more customizable to the individual with the ORBIT dataset and benchmark. The dataset and benchmark are inspired by a real-world application for people who are blind or have low vision called teachable object recognizers. The dataset strives to reflect the variance in object types and input quality that recognition systems will encounter day to day, while the benchmark challenges models to identify objects for individual users from a few high-variation examples. Earlier in the year, at the CHI 2021 Conference on Human Factors in Computing Systems, researchers presented tools and learnings guided by a changing definition of accessibility, one that focuses on helping individuals rise above limitations imposed by a world built to accommodate the majority so they can realize their full capabilities.
Also, members of the Enable Group explored the continuing evolution of Soundscape, an app that uses 3D spatial audio to elevate users' perception of an environment they are navigating. Technology has the ability to empower not only at an individual level but also at a systemic level, emboldening people with tools and support to pursue and achieve large-scale positive change in their communities and society as a whole. Microsoft Research India's Center for Societal Impact through Cloud and Artificial Intelligence (SCAI) was established to extend the lab's research and technologies to create impact across domains such as healthcare and sustainability. SCAI collaborates with social enterprises to augment their ability to make a difference, as has been the case with Respirer Living Sciences. This year saw SCAI and the climate science startup integrate Microsoft Research India's Dependable IoT solution with Respirer's PM2.5 sensors to create monitoring kits that provide real-time air-quality data that is easier to access and understand. Project Amplify, a collaboration between the India lab, Microsoft for Startups, and Accenture, serves as another channel for SCAI to share its resources, helping startups committed to addressing societal and sustainability challenges. In its first year (2020–2021), the initiative supported work in aquaculture, agriculture, and mental health. Meanwhile, the Research for Industry (RFI) initiative connects researchers and industry partners to help a variety of domains, from retail and financial services to energy and entertainment, operate in a dynamic world. The value of such collaboration can be seen in the work already being done in agriculture, where individual farmers are able to preview a suite of technologies that brings together low-bandwidth wireless technology, micro-climate prediction using deep learning, and data analysis in Microsoft Azure to improve crop returns and sustainability.

Fostering a resilient and sustainable society

In May 2021, Microsoft Research introduced a new societal resilience research agenda. Inspired in part by the rapid development of COVID-19 vaccines and a rising tide of global challenges, it advocates a "reset" between science and society: as we continue to pursue foundational academic advances, we must also accelerate research that addresses societal challenges. Societal Resilience deploys open, adaptable technologies to enable community-oriented, collective problem solving. It drives new tools to help domain experts translate real-world data into evidence (see figure below). This means collaborating across traditional boundaries and engaging people who live where the challenges exist. A new video series looks more closely at the changing nature of innovation and discovering new ways to build resilience. The Synthetic Data Showcase tool helps nontechnical domain experts use data to respond to human trafficking and exploitation. This tool, also demoed at the Research Summit, uses Power BI to explore the CTDC global dataset on victims of trafficking while safeguarding victims' privacy. Microsoft is a founding member of Tech Against Trafficking (TAT), a coalition fighting human trafficking with technology. In this example, we use Power BI to support privacy-preserving exploration of the anonymous datasets generated by our Synthetic Data Showcase tool.
Having selected the records of victims in the age range 9–17, we can see the distributions of multiple additional attributes contained in these records: the year the victim was registered, gender, country of citizenship and exploitation, and type of labor or sexual exploitation. All of the counts in these distributions are dynamically generated by Power BI filtering and aggregating records of the synthetic dataset. These "estimated" counts are compared on the right with "actual" counts precomputed over the sensitive data, showing that the synthetic dataset accurately captures the structure of the sensitive data for the selected age range. For these victims aged 9–17, the association with "typeOfLabourOther" indicates a potential need to expand the data schema to support more targeted policy design tackling forced labor of children. Environmental sustainability is a key focus area for Microsoft Research. We support our customers' efforts to reduce carbon emissions, including our work with Project Zerix, which combines biotechnology, chemistry, and materials science with computer science and engineering to develop more environmentally sustainable materials for the IT industry. Project Eclipse is an example of local collaboration toward building sustainable and resilient cities. It is the largest real-time, hyperlocal air-quality sensing network in a North American city: the Urban Innovation team worked with local partners in Chicago to deploy over 100 low-cost air pollution sensors across the city. The team provided several updates this year, including a new video showing how the system works, plus a demo and related presentation at the Research Summit.

Supporting a healthy global society

Technology is driving amazing progress in human health, exemplified by the unprecedented development of testing, vaccines, and treatments for COVID-19, and it is important to make those treatments and therapies available as broadly as possible. Microsoft Research supports inclusive and equitable technologies and studies that improve scientific discovery, along with better, more equitable delivery of health care for people everywhere. In 2021, as the pandemic raged, it hit some populations harder than others, including people with limited access to or experience with technology. To lower barriers to health information, Microsoft Research developed the COVID-19 Vaccine Eligibility Bot to help people understand their eligibility to receive COVID vaccinations. The bot is accessible across a range of communication channels and in local languages to serve non-English speakers. Separately, a new partnership including the Broad Institute of MIT and Harvard, Verily, and Microsoft will provide cloud, data, and AI technology, plus access to a global network of more than 168,000 health and life science partners, to help researchers interpret an unprecedented amount of biomedical data and advance the treatment of human diseases through the open-source platform Terra. As with any health-related technology, it is important that new medical AI applications adhere to privacy regulations that safeguard sensitive data. Microsoft Research India developed a framework for secure, privacy-preserving, and AI-enabled medical imaging inference using CrypTFlow2, a state-of-the-art end-to-end compiler enabling cryptographically secure two-party computation (2PC) protocols.
CrypTFlow2 may allow developers without cryptography experience to build efficient and scalable multi-party computation (MPC) protocols for inference tasks, dramatically improving health providers' ability to process and analyze sensitive data while respecting privacy. Deep learning and open-source strategies can improve cancer radiotherapy workflows and care, but the learning models are not easily accessible to researchers and care providers. A webinar provides an update on Project InnerEye, which aims to democratize AI for medical image analysis and empower health professionals to build medical imaging AI models. Microsoft Research continued to research and develop new technologies to improve healthcare and access for all. Two examples from 2021 include a new paper, "Exploiting structured data for learning contagious diseases under incomplete testing," published at the 2021 International Conference on Machine Learning (ICML), which explores using algorithms to anticipate the spread of infectious diseases, and a study exploring the symptoms of Parkinson's disease to better understand their progression and improve patient management and clinical trial design. The resulting predictive model discovered non-sequential, overlapping disease progression trajectories, supporting the use of non-deterministic disease progression models and suggesting that static subtype assignment might be ineffective at capturing the full spectrum of Parkinson's disease progression.

Ensuring that technology is trustworthy and beneficial to everyone

Fully realizing the value of technology requires the trust of those it is intended to help. And trust is earned, which is why developing AI responsibly is a key tenet of the Microsoft mission. Researchers at the company are guided by the principles of fairness, reliability and safety, inclusiveness, and transparency, among others, in their pursuit of advancement, and they build and share resources and tools to incorporate those principles into research and development. Those tools include the Responsible AI (RAI) Toolbox and the Human AI eXperience (HAX) Toolkit. Combining error analysis, model interpretability, fairness, counterfactual example analysis, and causal analysis tools, the RAI Toolbox provides practitioners with the means to understand model behavior, identify and address issues, and support real-world decision-making, while HAX provides actionable resources for prioritizing the safety and needs of people throughout the development of human-AI experiences. Understanding the benefits and harms of language models has been of particular interest to the broader research community. In discussing the topic at the Research Summit, researchers explored limitations of current task framing in building tech that meets real needs and called for interdisciplinary methods to more effectively study real-world impact. Microsoft researchers also covered a variety of other topics related to cultivating trust in AI systems, including executing responsible AI and identifying, assessing, and mitigating harms in AI systems, as part of the Microsoft Research Webinar series. Microsoft Research also launched a monthly lecture series examining the relationship between technology and the perception of race and its ramifications (for more information, see the section below). RESTler, the first stateful REST API fuzzer, can help efficiently find security and reliability bugs in cloud services.
RESTler analyzes a Swagger/OpenAPI specification and produces a fuzzing grammar that contains information about requests and their dependencies. RESTler only fuzzes a request once all of its dependent resources have been successfully created, which enables it to achieve deeper coverage out of the box. RESTler also offers a pluggable model for checking security properties, and it is open source and available at its GitHub repository.

New complexity in AI systems and an increased reliance on data to develop and train those systems bring increased requirements for keeping those systems secure. This year, researchers at Microsoft started the Privacy Preserving Machine Learning (PPML) initiative to address the need for preserving individual data privacy throughout the ML pipeline. Differential privacy (DP) is one technique that plays an important role in this initiative (a textbook sketch of its core mechanism follows below). Microsoft Research is pushing the boundaries in DP research with the overarching goal of providing Microsoft customers with the best possible productivity experiences through improved ML models for NLP while providing highly robust privacy protections.
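As referenced above, here is a minimal textbook sketch of differential privacy's core idea: adding noise calibrated to a query's sensitivity and a privacy budget epsilon. This is a classroom illustration of the Laplace mechanism, not Microsoft's PPML implementation.

```python
# Classic Laplace mechanism: release a count with epsilon-differential privacy.
# Illustrative only; production DP (e.g., privately trained NLP models) is
# far more involved than a single noisy count.
import numpy as np

def private_count(true_count: int, epsilon: float = 0.5, sensitivity: float = 1.0) -> float:
    # Noise scaled to sensitivity/epsilon bounds the influence any one
    # person's record can have on the released value.
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

print(private_count(1000))  # e.g. 1001.7: close to the true value, but not exact
```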
Cryptography and privacy researchers are committed to protecting the confidentiality of people's data, and in January, they collaborated with the Microsoft Edge product team to introduce Password Monitor for Microsoft Edge, a security feature that notifies users if any of their saved passwords has been found in a third-party breach. The underlying technology uses homomorphic encryption, a technique pioneered at Microsoft Research, which ensures the privacy and security of users' passwords. Watch For, a technology incubated in Microsoft Research, has been helping to create safe and inclusive digital spaces for Xbox and other platforms. The real-time media content analytics platform traces its beginnings back to a 2017 internal hackathon hosted by Microsoft and has garnered attention as the engine behind HypeZone, gaming's version of NFL RedZone. Watch For is now officially a part of Xbox Family, Trust, and Safety and will continue to support content moderation and online safety throughout Microsoft.

Engaging with the broader research community and looking to the future

Microsoft Research values its ties to the academic community, and it continued to support research and learning through its fellowship and grant opportunities in 2021. The Microsoft Research PhD Fellowship was awarded to 40+ recipients around the world in 2021; the fellowship seeks to empower the next generation of exceptional computing talent and build a stronger, more inclusive computing-related research community. The Microsoft Research Dissertation Grant provides research funding for doctoral students who are underrepresented in the field of computing, with the goal of increasing the pipeline of diverse talent receiving advanced degrees in computing. In their work, this year's grant recipients explore technology applications for accessibility, healthcare, entrepreneurship, digital literacy, and other areas. The Microsoft Research Faculty Fellowship recognizes innovative, promising new faculty whose work and talent identify them as emerging leaders in their fields. The 2021 Faculty Fellows' work ranges from investigating new methods in cryptography to applications of signal processing and ML in biomedicine.

This year, Microsoft Research also made changes that enabled our growth and created new opportunities. In January, Ashley Llorens joined Microsoft Research as VP, Distinguished Scientist, and Managing Director of Microsoft Research Outreach. He is helping Microsoft Research achieve its mission of amplifying the impact of research at Microsoft and advancing the cause of science and technology research worldwide. Before joining Microsoft, Llorens served as founding chief of the Intelligent Systems Center at the Johns Hopkins Applied Physics Laboratory. In July, Microsoft Research announced the addition of a new satellite research lab in Amsterdam. Building on work being pursued at Microsoft Research Cambridge and Microsoft Research Asia as part of a larger research effort at Microsoft, the Amsterdam lab will focus on molecular simulation using ML. Distinguished scientist and renowned ML researcher Max Welling will lead the lab, bringing a deep background in physics and quantum computing to the role. By using compute power to run physical simulations, he and the lab's growing team hope to help Microsoft further explore the application of ML to molecular science and uncover its tremendous potential in tackling some of the most important challenges facing society, including climate change, drug discovery, and understanding biology to help treat disease. To confront challenges through research and technology, we are sometimes required to engage in new ways. This year, Microsoft Research started the Race and Technology lecture series, a virtual speaker series designed to foster understanding of, and inspire continued research into, how the perception of race influences technology and vice versa through the work of distinguished academics and domain experts. The series continues through June 2022.

Figure: Race and Technology: A Research Lecture Series features 14 distinguished scholars and domain experts from a diverse range of research areas and disciplines. From top left: Dr. Sareeta Amrute, Dr. Kim TallBear, Dr. Charlton McIlwain, Dr. Ruha Benjamin, Dr. Lisa Nakamura, Dr. Simone Browne, and Dr. André Brock. From bottom left: Dr. Sohini Ramachandran, Dr. C. Brandon Ogbunu, Dr. Kishonna L. Gray, Dr. Desmond Upton Patton, Merisa Heu-Weller, J.D., Dr. Denae Ford Robinson, and Dr. A. Nicki Washington.

In 2021, Microsoft Research broadened its engagement with the larger global research community, and the events we hosted and attended this year provided new and enriching opportunities to engage with our community of researchers. Highlights include the ACM Special Interest Group on Data Communication (SIGCOMM) 2021 conference in August, where Microsoft was a gold sponsor, and the 35th Annual Conference on Neural Information Processing Systems (NeurIPS 2021) in December. Visit our event page for the full list of events and conferences in which Microsoft participated.

2021 Awards

Over the years, the scientific community has recognized the outstanding and pioneering work done by Microsoft researchers. Here are some highlights of the awards received in 2021: Susan Dumais was elected to the ACM SIGIR Academy; Abi Sellen was elected a Fellow of the Royal Society; Ranveer Chandra was included in Newsweek's inaugural list of America's Greatest Disruptors; Sébastien Bubeck received an Outstanding Paper Award at NeurIPS 2021; and Neeraj Kayal was awarded the Infosys 2021 Prize for Mathematical Sciences. Explore the index of awards recognizing Microsoft researchers' contributions in 2021 on our News and Awards page. For 30 years, Microsoft Research has invested in rigorous scientific research and ambitious long-term thinking.
We have made a lot of progress on both foundational and real-world challenges, but our work is not done. In the coming year, we will continue to build on the foundation we have developed and focus on creating solutions that drive long-term real-world impact and ultimately help create a more resilient, sustainable, and healthy global society. We look forward to the breakthroughs that can make that happen. Hear from generations of Microsoft researchers as they reflect on the past and look ahead to the future at Microsoft Research: explore the 30th Anniversary "Generations of Inspirational and Impactful Research" series to learn more. To stay up to date on all things research at Microsoft, follow our blog and subscribe to our newsletter and the Microsoft Research Podcast. You can also follow us on Facebook, Twitter, YouTube, and Instagram.
Internet and technology · 3 years ago
0
0
5
40:14
The Science of Successful Organizational Change: How Leaders Set Strategy, Change Behavior, and Create an Agile Culture
Why most of what you read about change management is nonsense... Is it time to euthanize Change Management and replace the concept with Change-Agile Businesses and Change Leadership? Why? More importantly, how? Turbulent environments demand constant change, but the mindset, skills, and behaviors taught to business leaders are unhelpful and sometimes flatly misleading. What is more, many high-profile approaches to change do not help: they are based on untested belief systems, unreliable methods, and psychological myth. In The Science of Successful Organizational Change, Paul Gibbons offers the first blueprint for change that fully reflects the newest advances in mindfulness, behavioral economics, sociology, and complexity theory. The book identifies dozens of change management myths, bad models, and unhelpful metaphors, replacing some with twenty-first-century research and revealing gaps where research still needs to be done. Gibbons links the origins of theories about change to the history of ideas and suggests that the human sciences will provide real breakthroughs in our understanding of people in the twenty-first century. For example, change fundamentally entails risk, yet little is written for business people about how breakthroughs in the psychology of risk can help change leaders. Change fundamentally involves changing people's minds, yet the most recent research shows that the provision of facts may strengthen resistance. Starting with a rigorous, evidence-based understanding of what makes people in organizations tick, he presents a complete framework for organizing your company around successful change. With case studies from Google, IBM, Shell, British Airways, British Petroleum, HSBC, and Morgan Stanley, Gibbons goes deeper and broader than any previous discussion of the subject.
Internet and technology · 9 years ago
4
0
49
01:02:19
Negotiating the Nonnegotiable: How to Resolve Your Most Emotionally Charged Conflicts
Before you get into your next conflict, read Negotiating the Nonnegotiable. It is not just "another book on conflict resolution," but a crucial step-by-step guide to resolve life's most emotionally challenging conflicts - whether between spouses, a parent and child, a boss and an employee, or rival communities or nations. These conflicts can feel nonnegotiable because they threaten your identity and trigger what Shapiro calls the Tribes Effect, a divisive mind-set that pits you against the other side. Once you fall prey to this mind-set, even a trivial argument with a family member or colleague can mushroom into an emotional uproar. Shapiro offers a powerful way out, drawing on his pioneering research and global fieldwork in consulting for everyone from heads of state to business leaders, embattled marital couples to families in crisis. And he also shares his insights from negotiating with three of the world's toughest negotiators - his three young sons. This is a must read to improve your professional and personal relationships.
Internet and technology · 9 years ago
0
0
8
01:02:57
Disrupt Yourself: Putting the Power of Disruptive Innovation to Work
Companies don't disrupt. People do. Pursuing a disruptive course isn't just a nice thing to do; there's a compelling business case: the odds of success are 6x higher and the revenue opportunity is 20x greater. The most fundamental unit of disruption is the individual, and the best way to drive corporate innovation is through personal disruption. Because the cycle of disruption is non-linear, the S-curve typically used to gauge how quickly a new product or service will be adopted is reimagined for personal change. There are seven accelerants that can speed this progress; these include embracing constraints and battling entitlement. By harnessing the power of personal disruption, we can move from stuck to unstuck and create value for our companies where it didn't exist before.
Internet and technology · 9 years ago
0
0
5
48:47
Women in Tech: Take Your Career to the Next Level with Practical Advice and Inspiring Stories
Women who are considering getting into tech or taking their tech careers to the next level are looking for practical advice and inspiring stories, and they often have similar questions: What are the secrets of salary negotiation? What is the best format for tech resumes? How do you ace the interview? How does contracting compare with salaried tech work? The secrets of mentorship will be discussed, along with pointers on starting your own company.
Internet and technology · 9 years ago
0
0
5
52:17
X: The Experience When Business Meets Design
In an always-on world where everyone is connected to information and to one another, customer experience is your brand. Without defining experiences, brands fall victim to whatever people feel and share. Why are great products no longer good enough to win with customers, and why are creative marketing and delightful customer service not enough to succeed? In X, we learn why the future of business is experiential and how to create and cultivate meaningful experiences. This isn't your ordinary business book. Its aesthetic is meant to evoke emotion while also providing new perspective and insights to help you win the hearts and minds of your customers. And the design of this book, along with what fills its pages, was done using the principles shared within.
Internet and technology · 9 years ago
0
0
7
58:28
The New ABCs of Research: Achieving Breakthrough Collaborations
The problems we face in the 21st century require innovative thinking from all of us, be it students, academics, business researchers, or government policy makers. Hopes for improving our healthcare, food supply, community safety, and environmental sustainability depend on the pervasive application of research solutions. The research heroes who take on the immense problems of our time face bigger-than-ever challenges, but if they adopt potent guiding principles and effective research lifecycle strategies, they can produce the advances that will enhance the lives of many people. These inspirational research leaders will break free from traditional thinking, disciplinary boundaries, and narrow aspirations. They will be bold innovators and engaged collaborators, ready to lead yet open to new ideas, self-confident yet empathetic to others. In this book, Ben Shneiderman recognizes the unbounded nature of human creativity, the multiplicative power of teamwork, and the catalytic effects of innovation. He reports on the growing number of initiatives to promote more integrated approaches to research and to promote the expansion of these efforts. The book is meant as a guide to students and junior researchers, as well as a manifesto for senior researchers and policy makers, challenging widely held beliefs about how applied innovations evolve and how basic breakthroughs are made, and helping plot the course toward tomorrow's great advancements.
Internet and technology · 9 years ago
0
0
7
01:17:22
Simple Sabotage: A Modern Field Manual for Detecting and Rooting Out Everyday Behaviors That Undermine Your Workplace
Inspired by the Simple Sabotage Field Manual released by the Office of Strategic Services in 1944 to train European resisters, this is the essential handbook to help stamp out unintentional sabotage in any working group, from major corporations to volunteer PTA committees. In 1944, the Office of Strategic Services (OSS), the predecessor of today's CIA, issued the Simple Sabotage Field Manual, which detailed sabotage techniques designed to demoralize the enemy. One section focused on eight incredibly subtle, and devastatingly destructive, tactics for sabotaging the decision-making processes of organizations. While the manual was written decades ago, these sabotage tactics thrive undetected in organizations today:
- Insist on doing everything through channels.
- Make speeches. Talk as frequently as possible and at great length.
- Refer all matters to committees.
- Bring up irrelevant issues as frequently as possible.
- Haggle over precise wordings of communications.
- Refer back to matters already decided upon and attempt to question the advisability of that decision.
- Advocate caution and urge fellow-conferees to avoid haste that might result in embarrassments or difficulties later on.
- Be worried about the propriety of any decision.
Everyone has been faced with someone who has used these tactics, even when they have meant well. Filled with proven strategies and techniques, this brief, clever book outlines the counter-sabotage measures to detect and reduce the impact of these eight classic sabotage tactics to improve productivity, spur creativity, and engender better collegial relationships.
Internet and technology · 9 years ago
0
0
5
59:05
Almost: 12 Electric Months Chasing a Silicon Valley Dream
In Silicon Valley, people routinely dream of changing the world. Some do so. Many more almost do. Almost. It is such a Silicon Valley word. This is the story and lessons learned from 12 electric months in the life of a San Francisco startup that seemed on the verge of becoming a household name and being bought by Apple. Neither happened. And we all learned some hard lessons as a result. Almost... the word hurts the soul. So much time and effort and money into the void. And it keeps happening again and again, this amazing effort that seems part of the DNA of Silicon Valley. Why? What is it about the place that inspires people to swing for the fences? What is Silicon Valley really like? Almost is a fascinating 12-month snapshot inside of one company that almost changed the world.
Internet and technology · 9 years ago
0
0
12
50:09
Stretch: How to Future-Proof Yourself for Tomorrow's Workplace
If you are like other professionals, your biggest worry is becoming obsolete at work. Shifting technologies, fierce competition among corporations, and recruitment occurring on a global level would give anyone concern. To remain relevant in spite of change, you need to know how to:
- Learn in any situation
- Open your thinking to a world beyond where you are now
- Connect to the people who can help you make your future happen
- Seek experiences that will prepare you for tomorrow
- Stay motivated through the ups and downs of a career so you can bounce forward
Stretch: How to Future-Proof Yourself for Tomorrow's Workplace offers five practices to help you start, enhance, and lengthen your career by anticipating the needs of tomorrow's work environment. Don't become obsolete. Instead, stretch to achieve your potential.
Internet and technology · 9 years ago
0
0
7
01:08:07
Persuadable: How Great Leaders Change Their Minds to Change the World
As a leader, changing your mind has always been perceived as a weakness. Not anymore. In a world that's changing faster than ever, successful leaders realize that a genuine willingness to change their own minds is the ultimate competitive advantage. Drawing on evidence from social science, history, politics, and more, business consultant Al Pittampalli reveals why confidence, consistency, and conviction are increasingly becoming liabilities, while humility, inconsistency, and radical open-mindedness are powerful leadership assets.
Internet and technology · 9 years ago
1
0
13
56:15
Theo Chocolate: Recipes & Sweet Secrets from Seattle's Favorite Chocolate Maker Featuring 75 Recipes Both Sweet &...
Who doesn't love chocolate? Hear the fascinating story of how North America's first organic and Fair Trade chocolate factory came to be in Seattle, as well as the passion behind Theo Chocolate's mission and how the chocolate is made.
Internet and technology · 9 years ago
0
0
5
42:53
All the Birds in the Sky: A Science Fiction Novel
In this deeply magical tale of life, love, apocalypse, and time travel, childhood friends Patricia Delfine and Laurence Armstead don't expect to see each other again after parting ways under mysterious circumstances during high school. Now they're adults, living in San Francisco, and the planet is falling apart. Laurence is an engineering genius working with a tech group that aims to avert a global climate disaster. Patricia is a graduate of a hidden academy for the world's magically gifted and works with other magicians to secretly repair the world's ever-growing ailments. Little do they realize that something bigger than either of them, something begun years ago in their youth, is determined to bring them together, to either save the world or plunge it into a new dark age.
Internet and technology · 9 years ago
0
0
7
47:24
The Geography of Genius: A Search for the World's Most Creative Places, from Ancient Athens to Silicon Valley
Where does genius come from? And what is the connection between our surroundings and our most innovative ideas? In this exploration of the history of places like Vienna of 1900, Renaissance Florence, ancient Athens, Song Dynasty Hangzhou, and Silicon Valley, we learn how certain urban settings are conducive to ingenuity. Walk in the footsteps of Socrates, Michelangelo, and Leonardo da Vinci by visiting their stomping grounds.
Internet and technology · 9 years ago
0
0
6
01:01:37
Better Than Before
Think of habits as the invisible architecture of everyday life. It takes work to make a habit, but once that habit is set, we can harness the energy of habits to build happier, stronger, more productive lives. So if habits are a key to change, then what we really need to know is: How do we change our habits?
Internet and technology · 9 years ago
1
0
7
55:47
One Second Ahead: Enhance Your Performance at Work with Mindfulness
Researchers have found that the harried pace of modern office life is taking its toll on productivity, employee engagement, creativity, and well-being. Faced with a relentless flood of information and distractions, our brains try to process everything at once, increasing our stress, decreasing our effectiveness, and negatively impacting our performance. What can we do to break the cycle of being constantly under pressure, always on, overloaded with information, and surrounded by distractions? Is it possible to train the brain to respond differently to today's constant pressures using mindfulness practices that take only a brief moment out of the day? It is, and Jacqueline will tell us how.
Internet and technology · 9 years ago
0
0
16
58:52
The Game Believes In You: How Digital Play Can Make Our Kids Smarter
What if schools, from the wealthiest suburban preschool to the grittiest urban high school, thrummed with the sounds of deep immersion? More and more people believe that can happen, with the aid of video games. A small group of visionaries who, for the past 40 years, have been pushing to get game controllers into the hands of learners argue that games truly do "believe in you." Games focus, inspire, and reassure people in ways that many teachers can't. Games give people a chance to learn at their own pace, take risks, cultivate deeper understanding, fail and want to try again right away, and ultimately succeed in ways that too often elude them in school.
Internet and technology · 9 years ago
0
0
5
01:02:27