Data Security and Ethics in AI-Powered Sneaker Resale Market: A Case Study of PUSHAS







DATA4300

Data Security and Ethics

PUSHAS













Student Name:

Student ID:





Introduction

This report examines the impact of emerging technologies on businesses such as PUSHAS. In today's fast-growing sneaker resale market, companies like PUSHAS are using AI technologies to get ahead of the competition. PUSHAS incorporates artificial intelligence in several activities, including price-setting, customer segmentation, and inventory control. Through data analysis, AI contributes to a better understanding of consumer needs, appropriate stock levels, and market trends, which can improve PUSHAS's overall operations. Besides increasing efficiency, these capabilities allow the company to deliver a better shopping experience and boost customer satisfaction and loyalty. AI technologies offer great potential for PUSHAS to improve organizational performance: machine learning algorithms can set product prices based on prevailing market prices, AI-powered chatbots increase the pace and quality of responses to customer enquiries, and predictive analytics lets PUSHAS forecast demand for its products and avoid the risk of overstocking, which increases profitability. However, because customers enter their data into the system, or it is collected from various sources, the use of AI also raises the issue of data protection.




Ethical Considerations Associated with PUSHAS

The incorporation of AI technologies into the operations of PUSHAS introduces cybersecurity, privacy, and ethical risks. The company, its clients, and the wider industry need to address these risks to avoid violating the law and exposure to severe penalties. One of the main issues is data security, since most AI systems work with large amounts of personal and transactional information that the business must collect from its customers. For PUSHAS, applying appropriate cybersecurity to the kind of information it processes is crucial to prevent unauthorized access that could result in financial and reputational losses. Privacy compliance is also important here, mainly because many AI applications rely on large datasets to operate (Gupta et al., 2020, p. 24748). For example, micro-level analysis may use customers' browsing and purchasing histories to improve the accuracy of the AI-generated recommendations PUSHAS relies on. This has to be done lawfully, which includes, but is not limited to, compliance with the GDPR in the European Union: participants' data must be obtained with consent, only the necessary data may be processed, and participants have the right to access their data.

Ethical Risks in Utilizing AI

Ethical risks in the use of artificial intelligence include bias and discrimination, along with accountability and transparency risks. When an AI algorithm is trained on a biased dataset, it will likely produce unfair pricing or customer profiling. Some bias will always exist in any AI system, so PUSHAS must periodically review its systems to detect and remove it. There is also a transparency problem with AI systems, where customers cannot understand why specific choices are made (Vössing et al., 2022, p. 880). To manage these risks, PUSHAS requires high standards of regulatory compliance as well as ethical handling of artificial intelligence. This includes keeping up with legal requirements as they are formulated, applying measures against data re-identification, and making sure the use of AI is impartial, truthful, and explainable.
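One concrete way such a periodic review could work is a demographic parity check: compare the rate of favourable AI decisions across customer segments and flag large gaps for human review. The records, segment labels, and threshold below are invented for illustration, not PUSHAS data.

```python
# Hypothetical audit log: did the AI offer a discount to each customer,
# grouped by a customer segment attribute (illustrative data only).
decisions = [
    {"segment": "A", "offered_discount": True},
    {"segment": "A", "offered_discount": True},
    {"segment": "A", "offered_discount": False},
    {"segment": "B", "offered_discount": True},
    {"segment": "B", "offered_discount": False},
    {"segment": "B", "offered_discount": False},
]

def favourable_rate(records, segment):
    """Fraction of customers in a segment who received the favourable outcome."""
    group = [r for r in records if r["segment"] == segment]
    return sum(r["offered_discount"] for r in group) / len(group)

# Demographic parity difference: a gap above a chosen threshold
# (here 0.2, an arbitrary example value) flags the model for review.
gap = abs(favourable_rate(decisions, "A") - favourable_rate(decisions, "B"))
print(f"parity gap: {gap:.2f}, review needed: {gap > 0.2}")
```

Parity gaps are only one fairness metric; a real audit would combine several and examine the training data itself.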



Code of Conduct Generated by ChatGPT

Introduction

PUSHAS is one of the biggest players in the sneaker resale industry, and artificial intelligence optimizes its operations, service quality, and market positioning. This Code of Conduct lays down guidelines to support the responsible use of AI by PUSHAS while ensuring the organization follows a strict ethical code. All employees and stakeholders who work on related programs must implement the following measures to ensure that AI innovation remains safe to use:

Data Security and Protection

Customer Consent

PUSHAS will ensure that customers give informed consent before their data is collected or used for AI processing (Bae et al., 2022, p. 3).

Data Minimization

To reduce privacy threats, only necessary information will be gathered; PUSHAS will not collect more customer information than it needs in a way that would put customers' privacy at risk.

Secure Storage

PUSHAS will ensure that all collected personal data is properly encrypted and that all physical and electronic access rights are appropriately restricted. Security audits will be performed regularly to identify and mitigate threats as soon as possible.
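A small sketch of one such protection is pseudonymization: replacing raw customer identifiers with keyed digests before data reaches analytics systems, so the raw IDs stay in the secure store. The key name, ID format, and fallback value below are illustrative assumptions, not part of any PUSHAS system.

```python
import hashlib
import hmac
import os

# Hypothetical secret key; in production this would come from a secrets
# manager, never a hard-coded fallback like the one used here for the demo.
SECRET_KEY = os.environ.get("PSEUDONYM_KEY", "dev-only-key").encode()

def pseudonymize(customer_id: str) -> str:
    """Replace a raw customer ID with a keyed SHA-256 digest (HMAC).

    The same ID always maps to the same token, so analytics can still
    join records, but the raw ID cannot be recovered without the key.
    """
    return hmac.new(SECRET_KEY, customer_id.encode(), hashlib.sha256).hexdigest()

token = pseudonymize("customer-1001")  # made-up ID for the example
print(token[:16])
```

Using a keyed hash (HMAC) rather than a plain hash matters: without the key, an attacker could enumerate likely IDs and reverse plain digests.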

Transparency

Customer Awareness

PUSHAS will guarantee that end consumers are informed when artificial intelligence is being deployed, especially in areas such as recommendation systems and variable pricing (Panay et al., 2021, p. 3).

Explainable AI

Wherever possible, AI decisions that may affect customers, products, or services, such as recommendations or price changes, will be explained. This makes customers aware of why they are receiving certain AI-influenced outcomes.

Open Communication

PUSHAS will keep open lines of communication so that customers can seek information about its AI operations and gain insight into AI decision-making.


Bias and Fairness

Regular Audits

PUSHAS will routinely scan its AI models for biases that could affect business decisions. Rigorous audits will ensure that different customer segments are treated fairly (Schmidt & Trautmann, 2023, p. 201).

Diverse Data

PUSHAS will use diverse datasets to train and develop its AI models, minimizing bias in their results and promoting equality in service provision.

Non-Discrimination

AI will not be used to profile or disadvantage individuals on the basis of attributes such as race or gender.

Regulatory Compliance

GDPR and Data Protection Laws

PUSHAS will comply with the GDPR and any other applicable data protection laws. This includes upholding customer rights to access, delete, or modify their data, as well as complying with the relevant processing standards.
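To make the access and deletion rights concrete, here is a minimal sketch of a data-subject request handler over an in-memory store. The store contents, customer IDs, and function name are all invented for illustration and are not an actual PUSHAS interface.

```python
# Hypothetical in-memory customer store (illustrative records only).
customer_store = {
    "cust-1": {"name": "A. Buyer", "purchases": ["AJ1", "Dunk Low"]},
    "cust-2": {"name": "B. Seller", "purchases": ["Yeezy 350"]},
}

def handle_request(store, customer_id, request_type):
    """Serve a GDPR-style access or deletion request; unknown IDs return None."""
    if customer_id not in store:
        return None
    if request_type == "access":
        return dict(store[customer_id])   # copy of everything held on the person
    if request_type == "delete":
        return store.pop(customer_id)     # erase the record from the store
    raise ValueError(f"unsupported request: {request_type}")

print(handle_request(customer_store, "cust-1", "access"))
handle_request(customer_store, "cust-2", "delete")
print("cust-2" in customer_store)  # prints False: the record has been erased
```

A real implementation would also cover backups, logs, and downstream systems, and would verify the requester's identity before responding.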

Ongoing Compliance Reviews

Regular reviews will keep the company aware of new regulations and changes to existing legal requirements affecting AI activities. Compliance reports will be kept as evidence of adherence to these laws and policies.

Employee Training

Employees within PUSHAS shall receive training on regulatory compliance in the areas of law that relate to their work.

Ethical Use of AI

Ethical Data Practices

All data will be used only where it is relevant and in line with the expectations of both PUSHAS and its customers.

Avoidance of Harm

PUSHAS will not apply AI in a manner that could harm customers or any stakeholders. AI systems will not only assist customers but will do so in a way that neither violates their privacy nor manipulates them.

Operational Security and Incident Management

Data Security Measures

PUSHAS will apply established controls such as encryption, firewalls, and access control measures to prevent malicious attacks on its AI systems (Stefana et al., 2024, p. 289).

Incident Reporting

Clear procedures will define how AI security incidents, such as data leaks or system crashes, are to be reported. Staff are to proactively report any security concerns to the compliance department.

Response and Remediation

Following an incident, PUSHAS's action plan involves several steps, such as minimizing risks, informing customers, and preventing similar circumstances in the future.

Accountability and Oversight

Roles and Responsibilities

Key roles such as a Data Ethics Officer and an AI Development Team will be established to ensure that AI-centred activities are implemented and regulated properly.

Documentation and Reporting

PUSHAS will record all AI decision-making processes and keep audit records as well as compliance documentation.

Stakeholder Engagement

PUSHAS will gather client feedback about its AI use and apply it to progressively improve its practices and policies.

Continuous Review

Performance Review

As part of PUSHAS's working culture, its AI systems will be checked regularly to determine whether they remain ethical and useful to the business. Evaluation criteria for these reviews will encompass accuracy, fairness, and impact (Jin et al., 2023, p. 3).

Adaptation to AI Evolution

Like most organizations adopting AI technology, PUSHAS will periodically update its systems, and this Code of Conduct will be revised to address new AI advancements and emerging issues.






Conclusion

In conclusion, this Code of Conduct underpins PUSHAS's commitment to the reasonable, ethical, and legal use of artificial intelligence. The framework protects customer information and privacy, which is crucial for building trust with the customer base while establishing PUSHAS as a company of good standing in the sneaker resale market. When these principles are embraced by PUSHAS, the stated AI requirements can be put into practice while fairness, integrity, and respect are sustained. This Code should be closely followed by all employees and partners of PUSHAS, and any applied AI must align with the organization's goals as well as the satisfaction of its consumers.


References

Bae, Y., Choi, J., Gantumur, M. & Kim, N. (2022) ‘Technology-based strategies for online secondhand platforms promoting sustainable retailing’, Sustainability, 14(6), pp. 1-37. <https://doi.org/10.3390/su14063259>

Gupta, R., Tanwar, S., Al-Turjman, F., Italiya, P., Nauman, A. & Kim, S.W. (2020) ‘Smart contract privacy protection using AI in cyber-physical systems: tools, techniques and challenges’, IEEE Access, 8, pp. 24746-24772. <https://ieeexplore.ieee.org/abstract/document/8976143/>

Jin, D., Wang, L., Zhang, H., Zheng, Y., Ding, W., Xia, F. & Pan, S. (2023) ‘A survey on fairness-aware recommender systems’, Information Fusion, 100, pp. 1-22. <https://doi.org/10.1016/j.inffus.2023.101906>

Panay, B., Baloian, N., Pino, J.A., Peñafiel, S., Frez, J., Fuenzalida, C., Sanson, H. & Zurita, G. (2021) ‘Forecasting key retail performance indicators using interpretable regression’, Sensors, 21(5), pp. 1-18. <https://doi.org/10.3390/s21051874>

Schmidt, R. & Trautmann, S.T. (2023) ‘Implementing (un)fair procedures: Containing favoritism when unequal outcomes are inevitable’, The Journal of Law, Economics, and Organization, 39(1), pp. 199-234. <https://doi.org/10.1093/jleo/ewab019>

Stefana, E., Marciano, F., Paltrinieri, N. & Cocca, P. (2024) ‘A systematic approach to develop safety-related undesired event databases for Machine Learning analyses: Application to confined space incidents’, Process Safety and Environmental Protection, 182, pp. 279-297. <https://doi.org/10.1016/j.psep.2023.11.046>

Vössing, M., Kühl, N., Lind, M. & Satzger, G. (2022) ‘Designing transparency for effective human-AI collaboration’, Information Systems Frontiers, 24(3), pp. 877-895. <https://doi.org/10.1007/s10796-022-10284-3>


