I work in IT with end users who average 45-50 years old. I can tell you where that message came from.
We've got users working with sensitive private information who are starting to use tools like ChatGPT and Gemini because their college kids told them they're helpful for checking writing, or work better than search engines. Our users work remotely, and if they decide to take a picture of what they're working on and feed it into an OCR tool, there's not much we can do to stop it. So we need to provide a sanctioned tool that at least gives us some control over how data is handled and stored (not that Microsoft's data transit and handling in Copilot is anything vaguely resembling perfect), so we can try to protect sensitive information and our end users as much as possible.

Are we happy about having to deploy AI tools? Not even a little bit. I'd be happier if we all just collectively rolled back a few years. But our options are a sanctioned tool and policy, or failing audits, and here we are.
My take is a little different. If people really want AI, they should pay an additional cost, and AI should be an add-on feature on your PC.
Additional cost -> more powerful AI-centered chips (with as little power consumption as possible) that can use much, much more fast local memory (imho 256 GB should be the minimum in the long term).
That would make local AIs the solution for privacy and for control of long-term costs, and I'd guess local AIs will do the job well enough in 99% of cases.
Sadly, no one will be on our side, because the vendors want to put AI usage (and PC usage overall) behind a monthly subscription in the long term.
Right now we are, as always, in the phase of making people dependent on a technology.
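For what it's worth, the local route is already workable. A minimal sketch, assuming a stock local Ollama install (the endpoint URL is Ollama's documented default; the model name and prompt are illustrative, not a recommendation of any particular stack). Nothing here leaves the machine:

```python
import json
from urllib import request

# Ollama's default local endpoint; assumes the server is running on this box.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(model: str, prompt: str) -> bytes:
    # "stream": False asks for one complete JSON response instead of chunks.
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()

def ask_local_model(model: str, prompt: str) -> str:
    # Send the prompt to the locally running model and return its text reply.
    req = request.Request(
        OLLAMA_URL,
        data=build_payload(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example usage (requires `ollama pull llama3` and the server running):
# print(ask_local_model("llama3", "Proofread: Their going to the meeting tommorow."))
```

The point being: the writing-check use case people are reaching for ChatGPT for is exactly the kind of job a modest local model handles fine.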
Remote workers have webcams. Set things up so that if the webcam sees a camera phone aimed at the screen, it takes a shot and sends the event to management; they review it and decide whether to fire people for violating policy.
That directive came from them, but they didn't want to issue it. I guarantee you they were told by higher-ups to say that.
They claimed it's for "safety," since "Microsoft is to be trusted more than OpenAI." Oh, the irony!
Also, it's not really the higher-ups' place to dictate to our IT what to use, which makes it even better...
@Septian @pet1t
Yeah, I don't know about the ethics of that webcam-surveillance idea...
MS reps tightening the screws.
Management most likely read their horoscopes in some business blog.