Magna Concursos

60 questions were found.

4063680 Year: 2026
Subject: Mathematics
Board: FUVEST
Institution: USP
During the test of an industrial press, the height h(t), in meters, reached by a component launched vertically is described by a quadratic function. It was observed that:

• The component leaves the ground at time t = 0;
• It returns to the ground at time t = 6;
• The maximum height reached is 9 meters.

Given that the function can be written in the factored form h(t) = a·t(t − 6), the correct expression for the function is:
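A worked sketch of the expected answer (not part of the original item): the roots of h are t = 0 and t = 6, so the vertex lies midway, at t = 3, and the stated maximum of 9 m fixes a:

\[
h(3) = a \cdot 3 \cdot (3 - 6) = -9a = 9 \quad\Rightarrow\quad a = -1,
\qquad\text{so } h(t) = -t(t - 6) = -t^2 + 6t.
\]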
 

4063679 Year: 2026
Subject: Mathematics
Board: FUVEST
Institution: USP
In a vivarium, the area occupied by a bacterial colony on a Petri dish grows according to the exponential function A(t) = 5 · 2^t, where:
• A(t) represents the occupied area (in cm²);
• t represents the time in hours.

Given that the dish holds at most 160 cm², the minimum time required for the colony to reach exactly that area is:
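A worked sketch of the expected answer (not part of the original item): setting the area equal to the dish capacity,

\[
5 \cdot 2^{t} = 160 \;\Rightarrow\; 2^{t} = 32 = 2^{5} \;\Rightarrow\; t = 5 \text{ hours.}
\]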
 

4063678 Year: 2026
Subject: Mathematics
Board: FUVEST
Institution: USP
In a vivarium, a new rectangular area will be built to house rodents. Under animal-welfare technical standards, the enclosure must have an area of 48 m². The length is known to exceed the width by 2 meters. To meet circulation standards for technicians, a protective barrier will be installed around the entire enclosure. Based on this information, mark the alternative that correctly presents:

• the width of the enclosure;
• the perimeter of the built area.
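A worked sketch of the expected answer (not part of the original item): writing the width as w and the length as w + 2,

\[
w(w + 2) = 48 \;\Rightarrow\; w^{2} + 2w - 48 = 0 \;\Rightarrow\; (w + 8)(w - 6) = 0 \;\Rightarrow\; w = 6,
\]

so the width is 6 m, the length is 8 m, and the perimeter is P = 2(6 + 8) = 28 m.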
 

4063677 Year: 2026
Subject: Financial Mathematics
Board: FUVEST
Institution: USP
In an industrial machinery maintenance shop, a company makes an investment to acquire and modernize mechanical equipment. The amount invested was R$ 20,000.00, applied at a rate of 5% per month under compound interest for 3 months. At the end of this period, the accumulated amount of this investment will be approximately
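A worked sketch of the expected answer (not part of the original item), using the compound-interest formula M = C(1 + i)^n:

\[
M = 20\,000 \cdot (1.05)^{3} = 20\,000 \cdot 1.157625 = 23\,152.50,
\]

that is, approximately R$ 23,152.50.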
 

4063676 Year: 2026
Subject: Mathematics
Board: FUVEST
Institution: USP
In an industrial testing laboratory, a piece of equipment undergoes successive operating cycles. In each cycle, the number of active components in operation is double the number recorded in the previous cycle. In the first cycle, the equipment operates with 3 active components. If this pattern continues, the number of active components in the 6th cycle will be:
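A worked sketch of the expected answer (not part of the original item): the counts form a geometric progression with first term a_1 = 3 and ratio q = 2, so

\[
a_{6} = a_{1} \cdot q^{5} = 3 \cdot 2^{5} = 96.
\]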
 

4063675 Year: 2026
Subject: English (English Language)
Board: FUVEST
Institution: USP
Building Trustworthy AI in Government: Enablers, Guardrails, and Engagement 
    Governments are starting to use AI in areas like public services, tax work, and disaster response. When it works well, AI can help people get answers faster, spot problems earlier, and support better decisions. As a result, AI can improve productivity, responsiveness, and accountability in government.
    However, many public AI projects stay in small pilots. This happens because governments often lack skills, good data, modern digital systems, and clear ways to measure impact. These gaps can also increase risk aversion, so teams avoid innovation even when the potential benefits are high.
    The OECD proposes a simple way to understand “trustworthy AI in government”: a framework with three connected pillars. In the figure, the goal is in the centre. Around it, the three pillars explain what governments must build and do, so they can reach the public value goals shown on the outer ring (productivity, responsiveness and accountability).
     Enablers are the foundations. They include strong governance, quality data, and digital infrastructure, as well as skills and talent in the civil service. They also require purposeful investment, smart public procurement, and partnerships with non-government actors, so that AI systems can be built and used reliably.
    Guardrails are the safety systems that guide AI use. They include ethics and risk management, transparency duties, and monitoring and oversight bodies that can check results over time. They can also be non-binding guidance or binding laws and policies, along with enforcement measures. Tools like impact assessment and auditing help keep these guardrails practical. Still, guardrails should be proportionate: not every rule fits every use case, or progress may stop.
    Engagement means involving the people who are affected. This includes working across levels of government, across policy areas, and with the broader ecosystem (civil society, businesses and researchers). It also includes citizens and civil servants, and sometimes collaboration across borders. Engagement pushes governments to design user-centred systems, listen to concerns, and make necessary adjustments.
     The main message is that trust is “unlocked” by the right mix. If enablers are weak, AI cannot scale. If guardrails are missing, harms grow. If engagement is shallow, solutions may look efficient but feel unfair, and trust can fall.
(Adapted from oecd.org on February 22, 2026)
Consider the excerpt “Guardrails are the safety systems that guide AI use.” (5th paragraph). Without changing the original meaning of the text, the word “guide” can be replaced by
 

4063674 Year: 2026
Subject: English (English Language)
Board: FUVEST
Institution: USP
[Text “Building Trustworthy AI in Government: Enablers, Guardrails, and Engagement”, reproduced in full under question 4063675 above.]
Consider the excerpt “These gaps can also increase risk aversion, so teams avoid innovation even when the potential benefits are high.” (2nd paragraph). The expression “risk aversion” can be correctly understood as:
 

4063673 Year: 2026
Subject: English (English Language)
Board: FUVEST
Institution: USP
[Text “Building Trustworthy AI in Government: Enablers, Guardrails, and Engagement”, reproduced in full under question 4063675 above.]
In the 5th paragraph, by stating that “Still, guardrails should be proportionate: not every rule fits every use case, or progress may stop.”, the text argues that the rules for AI use should
 

4063672 Year: 2026
Subject: English (English Language)
Board: FUVEST
Institution: USP
[Text “Building Trustworthy AI in Government: Enablers, Guardrails, and Engagement”, reproduced in full under question 4063675 above.]
In the 5th paragraph of the text, the word “guardrails” is used figuratively. It refers, most directly, to:
 

4063671 Year: 2026
Subject: English (English Language)
Board: FUVEST
Institution: USP
[Text “Building Trustworthy AI in Government: Enablers, Guardrails, and Engagement”, reproduced in full under question 4063675 above.]
In the excerpt “These gaps can also increase risk aversion”, in the second paragraph, the expression “these gaps” refers mainly to
 
