[Facial recognition] For Bernard Ourghanlian, Microsoft must step out of a role that is not its own

The Digital Factory: Microsoft puts forward the concept of "hybrid cloud 2.0". What characterizes these new services compared to the hybrid cloud as we know it?

Bernard Ourghanlian: Hybrid cloud has been around for a long time. We have always promoted this notion of hybridization between what is traditionally called the "private cloud" and the "public cloud". What is new in this strategy is that it embraces not only the cloud but also the periphery, known as "edge computing". This periphery is everything that makes it possible both to connect smart objects and to ensure that the data from all these objects does not have to be shipped back to a central point, namely the cloud.

The example of autonomous cars is very telling. Toyota estimates that by 2025 the volume of data generated by autonomous cars will be around two exabytes per month. It is therefore illusory to think that we could not only store all of it centrally but, above all, have enough time to transfer it over potentially very long distances, knowing that we cannot go faster than the speed of light.
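To make the order of magnitude concrete, here is a quick back-of-the-envelope calculation (purely illustrative, using the two-exabytes-per-month figure cited above) of the sustained throughput that centralizing this data would require:

```python
# Back-of-the-envelope: sustained throughput needed to ship 2 EB/month to a central cloud
exabytes_per_month = 2
bytes_total = exabytes_per_month * 10**18            # 2 EB in bytes (decimal units)
seconds_per_month = 30 * 24 * 3600                   # ~2.6 million seconds

bytes_per_second = bytes_total / seconds_per_month
print(f"{bytes_per_second / 1e9:,.0f} GB/s sustained")         # ~770 GB/s
print(f"{bytes_per_second * 8 / 1e12:,.1f} Tbit/s sustained")  # ~6.2 Tbit/s
```

Roughly six terabits per second of sustained uplink, around the clock, before even considering latency, which is why processing close to the vehicles is the more realistic option.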

This limit is a constraint, but at the same time it pushes us to think about this notion of hybridity from a much broader perspective, one that provides answers to problems we otherwise would not know how to solve. With the autonomous car, for example, we cannot imagine exchanging data with the cloud to recognize a child running after a ball before deciding to stop the car.

Is this a return to decentralization, in a way?

In fact, there is a need to bring artificial intelligence models down into the objects themselves, but also into environments that are somewhat richer in computing and processing capacity, which are not the "center" of the cloud but sit on its periphery. This is the model we call "hybrid 2.0", and it aims to capture this dimension. For us, it means making hardware capacity available to our customers so that they can process these quantities of information at the periphery. These are the environments called "Azure Stack Hub" and "Azure Stack Edge", which, thanks to specialized processors, contain very significant AI processing capacity. They can run models that, very often, have been trained in the cloud with potentially enormous computing capacity.
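As a rough illustration of this train-in-the-cloud, run-at-the-edge pattern, the sketch below exports a model to a portable format and scores data locally, with no round trip to a data center. It assumes PyTorch and ONNX Runtime; the tiny model and file name are hypothetical, and this is not a description of Microsoft's actual tooling.

```python
import numpy as np
import torch
import torch.nn as nn
import onnxruntime as ort

# Hypothetical tiny vision model standing in for one trained in the cloud
# on large compute; in practice the weights would come from that training job.
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 2),
)
model.eval()

# Export once to a portable format so the same model can be shipped to edge hardware.
dummy = torch.randn(1, 3, 224, 224)
torch.onnx.export(model, dummy, "edge_model.onnx",
                  input_names=["image"], output_names=["scores"])

# On the edge device: load the exported model and score a camera frame locally.
session = ort.InferenceSession("edge_model.onnx")
frame = np.random.rand(1, 3, 224, 224).astype(np.float32)  # stand-in camera frame
scores = session.run(["scores"], {"image": frame})[0]
print(scores)
```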

It also involves a number of software building blocks that make it possible to move away from the very centralized calculations of a pure cloud vision towards much more distributed calculations, with the possibility of sharing them across all the compute units at the periphery. In this context, there are scenarios that will be permanently connected. But there are also scenarios that will not be permanently connected, whether for reasons of digital sovereignty, security, politics or technical constraints. The idea behind this is to support both technically very diverse processing methods and extremely varied use cases.
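For the scenarios that are not permanently connected, a common pattern is to buffer data locally and forward it whenever a link becomes available. A minimal sketch follows; the names, the connectivity check and the upload call are invented for illustration and stand in for real platform services.

```python
import queue
import random

local_buffer = queue.Queue()   # survives offline periods (durable storage in practice)

def is_connected() -> bool:
    # Stand-in for a real connectivity or policy check (sovereignty, security...).
    return random.random() > 0.5

def upload(batch) -> None:
    # Stand-in for the call that forwards buffered data to the central platform.
    print(f"forwarded {len(batch)} readings")

def record(reading) -> None:
    local_buffer.put(reading)            # always succeeds, even offline
    if is_connected():
        batch = []
        while not local_buffer.empty():  # drain the backlog when a link exists
            batch.append(local_buffer.get())
        upload(batch)

for i in range(5):
    record({"unit": "edge-01", "value": i})
```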

All of this implies that customers must review their infrastructure in depth to comply with this new approach, which is unique to Microsoft…

A few months ago, we announced application development environments that are all available as open source. They can therefore be used on the Microsoft cloud, but also on our competitors' clouds. Our aim is to cover the entire spectrum, the private and public cloud, the periphery, and even the objects themselves, so that applications do not have to be rewritten when the environment changes. In parallel, we also announced a system administration environment called "Azure Arc", which is deliberately multi-cloud. Its objective is to cover not only Microsoft but also the cloud offerings of Amazon and Google. We are well aware that most of our customers today do not want to put "all their eggs in one basket".

Behind this hybrid 2.0 approach, there is a desire to be open in order to meet the needs of increasingly diverse customers. The idea is not to force people to adopt a centralized or a decentralized model. Each model has its virtues, and some models make a lot of sense for a given scenario. Typically, to train models on huge amounts of data, you need a rather centralized environment. However, there are many environments where that is not desirable, such as connected cars. A purely centralized vision of the world would be consistent with neither the reality of usage nor the technical reality.

Launched in December 2019, the "Health Data Hub" (a catalog of health data for healthcare professionals and researchers, Editor's note) has caused much controversy, notably over the fact that Microsoft is in charge of hosting it. How do you respond to these fears?

Today there is a need for a way to store health data in large volumes. When France sought to host this data, those responsible naturally asked themselves which law was applicable. On this subject, there is a regulation defined by ASIP Santé that precisely defines what a health data host (HDS) is. It turns out that Microsoft was the first company to be officially certified. When a choice had to be made, those responsible naturally turned to the only solution available on the market at the time.

Regarding the issue of sovereignty, it is a very broad question. As a citizen, I wonder what this sovereignty means. To be very frank, I can readily admit that data concerning national defense or nuclear power plants must be managed in a very specific way because it is dangerous; for health data, it is less clear… I am not saying that I want my data to be made available to everyone, but is it really something that falls under the sovereignty of a country? In my opinion, the real question concerns the sovereignty of individuals, and in any case everyone must consent to the use of their health data.

Isn't there a problem of trust today with regard to the technology giants?

On the subject of trust, I do not have a magic wand I can wave to say "trust us" and reassure people. The data is stored in HDS-certified data centers. Today, in reality and beyond any controversy, this is the only applicable legal framework. But is complying with the legal framework enough for people to trust you? I don't have the audacity to answer "yes". Honestly, we did everything we needed to do. In the design of the "Health Data Hub", a great deal of work went into encryption. This ensures that, on our side, even if we gave anyone access to this data, it would be encrypted and we would not hold the key. There is no known way today to break the encryption keys we use.
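The principle he describes, where the host stores only ciphertext and the key stays with the data controller, can be sketched in a few lines. The example below uses Python's cryptography library and a made-up record purely for illustration; it is not a description of the Health Data Hub's actual design.

```python
from cryptography.fernet import Fernet

# Key generated and held by the data controller, never handed to the hosting provider.
key = Fernet.generate_key()
cipher = Fernet(key)

record = b'{"patient": "pseudonym-123", "result": 5.4}'  # made-up sample record
blob = cipher.encrypt(record)      # only this ciphertext is stored by the host

# Without `key`, the host (or anyone it granted access to) cannot read the record.
print(cipher.decrypt(blob))        # decryption requires the controller-held key
```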

And there is one last point that we really need to get across. Each company has its business model. At Microsoft, we do not do business with customer data. Never. We are not there to make money from people's data. We are already fortunate to have been chosen to host the Health Data Hub; it is certainly not in order to then trade that data. This is really fundamental. When you look at our sources of income, that is not where we make our money.

The lack of regulation of artificial intelligence is generating more and more concerns, especially for the use of facial recognition technologies. Do you think it is time to act? Aren't you afraid that regulations will slow innovation down?

We fully support the regulation of facial recognition. This is a subject that concerns us greatly. I happen to be Microsoft France's representative on our ethics committee. We are constantly debating this subject, which is more complex than people think. We cannot reason solely from our own culture.

I sometimes have discussions with our Chinese colleagues on this subject, and they do not have the same culture as us at all. Are they right and we wrong, or the opposite? It is not for us to judge. I think that ethics is a reflection of the culture and values of individuals and companies. I do not allow myself to criticize the vision of others. The Chinese, for example, have lived in an environment where it was considered normal, for "the common good", to set up large-scale collective surveillance, with a whole series of abuses that are extremely problematic.

Isn't it also the responsibility of the companies that develop these technologies to ensure that they are not misused?

Today, rules are needed. We contextualize the use of facial recognition according to usage scenarios, but also according to the geographic area and the political regime. From the UN's point of view, there are a number of countries that are known not to respect human rights. By default, our position is that in those countries it is prohibited. But it is not our responsibility as a private company to go around handing out human rights certificates to states. Our call for regulation is mainly about getting us out of this kind of role, which is not ours: we are substituting ourselves for states. Microsoft is not a state; we have never been elected and we have no legitimacy to interfere in these matters. Each country must regulate this subject properly. Then we will follow the law in those countries.

Is the solution to temporarily ban facial recognition? I do not think so. The example of India is quite striking. The country has deployed facial recognition systems in major cities, which are used to identify abandoned children, and there have been many cases where children have been found. That is the positive side of the deployment. A ban has advantages, of course, but also a number of disadvantages. When we prohibit, we stop learning, and yet we must continue to learn. We should not think that we have the answers to all the questions.
