Navid Malek Personal Blog



This is a very straightforward guide for someone who wants to learn bash in a matter of hours, and in an efficient way.

The guide consists of two main steps:

✔ Getting familiar with bash scripting
✔ Implementing a simple bash script program

Motivation (summary):


According to RFC 959, FTP does not support search, meaning you
have to find your desired files all by yourself.
As computer programmers we find this awful, so we decided
to implement a simple bash script program that is simply an FTP
client with search!


YOUR SCRIPT = FTP IN TERMINAL + SEARCH SUPPORT

DOWNLOAD FULL DESCRIPTION HERE

My Approach:

I believe this is the easiest way!
To be able to search an FTP server as if it were a local file tree, you must first make a shadow of the FTP server in your native Linux filesystem. I will use curlftpfs to mount the FTP server at /mnt/FTPSearch (I will simply mount the FTP server on my own machine).
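Once the mount exists, the search itself is trivial, because any ordinary filesystem tool works on the mounted tree. A minimal sketch of the idea (the server address and credentials are placeholders, and curlftpfs/FUSE must be installed):

```shell
#!/bin/sh
# Sketch: mount an FTP server locally with curlftpfs, then search it
# with ordinary filesystem tools. ftp.example.com and the credentials
# below are placeholders -- substitute your own server.

# Search a mounted tree for file names matching a pattern (case-insensitive).
ftp_search() {
    root=$1
    pattern=$2
    find "$root" -iname "*${pattern}*"
}

# One-time setup (commented out here; needs curlftpfs and FUSE):
#   mkdir -p /mnt/FTPSearch
#   curlftpfs user:password@ftp.example.com /mnt/FTPSearch

# After mounting, a search is just:
#   ftp_search /mnt/FTPSearch report

# Unmount when done:
#   fusermount -u /mnt/FTPSearch
```

Because curlftpfs exposes the server as a normal directory, tools like find, grep, and du work on it unchanged.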

The main project that my colleague and I designed for the Operating Systems course (Spring 2018), while I was a TA for the course.

Project Description ( summary ):

The goal of this project is to gather information about incoming and outgoing packets in the system (a kind of packet capturing).

The desired information is:

  • Length of the packet
  • Protocols of the packet (in all available network layers except the application layer, e.g. Ethernet, IP, TCP)
  • Hash value of the packet
  • Total processing time of the packet


Phase one:

  1. Implement a system call with a single integer input indicating what information you desire from packets, and a single output buffer to copy the data from kernel space to user space.
  2. An interactive user-space program that talks to the user and the system call above (clean input and output for the user).

Phase two:

  1. Implement a kernel module with a single proc entry file for input, indicating what information you desire from packets, and a single proc entry file for output, to copy the data from kernel space to user space.
  2. An interactive user-space program that talks to the user and the kernel module and its proc entry files (clean input and output for the user).

Phase three:

Performance comparison of the kernel module and the system call (the first two phases).


DOWNLOAD PROJECT DESCRIPTIONS IN DETAIL

Approach taken:

All of the desired information is in the sk_buff data structure.
My approach was to clone the sk_buff in the driver (here: e1000), right before the driver passes the packet to the next handler (the upper network layer for incoming packets, or the NIC for outgoing ones). With this approach I have a clone of each packet.
Now that I have the information, the rest is easy: just copy the desired fields from the sk_buff to the output buffer, and copy from there to user space.
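For the phase-two proc interface, the user-space side could be exercised roughly like this. Note that the proc file names and selector codes below are hypothetical placeholders, not the names from the actual project handout:

```shell
#!/bin/sh
# Sketch of driving the phase-two kernel module from user space.
# /proc/pktinfo_in and /proc/pktinfo_out are HYPOTHETICAL names --
# use whatever proc entries your module actually registers.

PROC_IN=/proc/pktinfo_in
PROC_OUT=/proc/pktinfo_out

# Selector codes are assumptions: 1=length, 2=protocols, 3=hash, 4=processing time.
request_packet_info() {
    selector=$1
    echo "$selector" > "$PROC_IN"   # tell the module which field we want
    cat "$PROC_OUT"                 # read the data the module copied out
}

# Example (requires the module to be loaded):
#   request_packet_info 1    # packet lengths
```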


These are the materials for my mini DB project for the Databases course.

My colleague for this project is Reza Rahimi.

The proposal:

http://cdn.persiangig.com/dl/udJJ2/caeR5C3wrE/proposal.pdf

The logical design (table designs):

http://cdn.persiangig.com/dl/kFPAI/nsDrN3R4M9/tables.pdf

Unfortunately, we did this project on my colleague's laptop, which broke after the project, so these files are not complete; still, they are better than nothing.

The SQL commands:

http://cdn.persiangig.com/preview/c4W0elqro6/DB_project_sql.sql

The project application (a mini backend and mini frontend with a database connection, in Python and Tkinter):

http://cdn.persiangig.com/preview/itFtDlTow1/DB_project.py



ERD :



Here are the slides from my presentation at Nopayar, from when I worked as a DevOps engineer.

The slides contain many best practices gathered (not written) by me for better scaling and high availability of applications (mainly web applications).

The slides cover:

  • Scaling Hierarchy
  • Pinterest Best Practices
  • Data Partitioning and Sharding Patterns
  • Sharding Considerations and Techniques
  • ProxySQL Sharding
  • ID Generation Approaches
  • Query Optimization


I've tried to put a reference in each section for more information, but some sections don't have any reference at all; you can easily find the references with a quick search on the Internet.


Scalability Considerations, by Navid Malek


In this post we will learn how to proxy all of a system's network traffic (TCP and UDP) transparently through Tor with the least difficulty.

Download full tutorial from here

Requirements:

  • Linux OS (tested on alpine and ubuntu)

  • iptables (Linux firewall)

  • RedSocks

What is RedSocks?

Reference:

https://github.com/darkk/redsocks

Redsocks is a tool that allows you to proxify (redirect) network traffic through a SOCKS4, SOCKS5 or HTTPS proxy server. It works at the lowest level, the kernel level (iptables). The other possible way is to use an application-level proxy, where the proxy client is implemented in the same language the application is written in. Redsocks operates at the lowest system level, which is why running applications don't even have an idea that network traffic is sent through a proxy server; as a result, it is called a transparent proxy redirector.

System’s Architecture and Setup for TCP Connections

So this is the big picture: almost every TCP packet will be redirected to port 12345, on which the redsocks service listens; after that, redsocks forwards the received traffic to another IP and port in SOCKS protocol format.

Also keep in mind that to use iptables inside Docker, you have to run the container with the docker run --privileged flag.
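The TCP redirection described above can be sketched with iptables rules like the following, assuming redsocks listens on local port 12345 (a common default config). The function prints the rules rather than executing them, so you can review before applying them as root:

```shell
#!/bin/sh
# Sketch: the iptables rules that send outgoing TCP through redsocks.
# Assumes redsocks listens on 127.0.0.1:12345. The rules are printed
# rather than executed, so they can be reviewed and then applied as root.

REDSOCKS_PORT=12345

tcp_redirect_rules() {
cat <<EOF
iptables -t nat -N REDSOCKS
# Never redirect loopback or private ranges (avoids proxy loops).
iptables -t nat -A REDSOCKS -d 127.0.0.0/8 -j RETURN
iptables -t nat -A REDSOCKS -d 10.0.0.0/8 -j RETURN
iptables -t nat -A REDSOCKS -d 192.168.0.0/16 -j RETURN
# Everything else goes to the local redsocks listener.
iptables -t nat -A REDSOCKS -p tcp -j REDIRECT --to-ports $REDSOCKS_PORT
# Send all outgoing TCP through the REDSOCKS chain.
iptables -t nat -A OUTPUT -p tcp -j REDSOCKS
EOF
}

# Review, then apply with: tcp_redirect_rules | sudo sh
tcp_redirect_rules
```

You would also add a RETURN rule for the address of the SOCKS server itself (here, Tor) so its own traffic is not redirected back into redsocks.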

Download full tutorial from here


In this post I will show you not only how to run any multimedia application inside Docker, but also how to do so in an efficient and easy way.

Download full tutorial from here

Requirements:

  • Linux OS and Docker (tested on Ubuntu)

  • X or Wayland (Linux display servers)

    • Ensure that the packages for an X or Wayland server are present on the Docker host. Please consult your distribution's documentation if you're not sure what to install. A display server does not need to be running ahead of time.

  • x11docker

    • x11docker allows Docker-based applications to utilize X and/or Wayland on the host. Please follow the x11docker installation instructions and ensure that you have a working setup on the Docker host.

What is X11Docker?

Reference:

https://github.com/mviereck/x11docker/

x11docker allows to run graphical applications (or entire desktops) in Docker Linux containers.

  • Docker allows to run applications in an isolated container environment. Containers need much less resources than virtual machines for similar tasks.

  • Docker does not provide a display server that would allow to run applications with a graphical user interface.

  • x11docker fills the gap. It runs an X display server on the host system and provides it to Docker containers.

  • Additionally x11docker does some security setup to enhance container isolation and to avoid X security leaks. This allows a sandbox environment that fairly well protects the host system from possibly malicious or buggy software.
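Typical x11docker invocations look like the sketch below. The images are the x11docker project's demo images, and the exact option set depends on your x11docker version, so treat the flags as assumptions and check x11docker --help:

```shell
#!/bin/sh
# Sketch of typical x11docker command lines, echoed rather than executed
# so they can be reviewed without Docker present. The images are the
# x11docker project's demo images; check `x11docker --help` for options.

run_gui() {
    # Compose an x11docker command line: options first, image last.
    echo x11docker "$@"
}

run_gui x11docker/xterm                     # a single GUI application
run_gui --desktop x11docker/xfce            # an entire desktop environment
run_gui --gpu --pulseaudio x11docker/xfce   # multimedia: GPU + sound
```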

Download full tutorial from here


NetBill Protocol in Theory

What is NetBill? [From Original Paper]

NetBill is a system for micropayments for information goods (digital commodities) on the Internet. A customer, represented by a client computer, wishes to buy information from a merchant’s server. An account server (the NetBill server), maintains accounts for both customers and merchants, linked to conventional financial institutions.

The NetBill Transaction Model [From Original Paper]

The NetBill transaction model involves three parties: the customer, the merchant and the NetBill transaction server. A transaction involves three phases: price negotiation, goods delivery, and payment.

NetBill Transaction Model

Download Implementation From Github With Detailed Explanations

Implementation Of NetBill Protocol

In this project, I (Navid Malek) and my friend Reza Rahimi implemented most of the NetBill transaction protocol, including:

  • Transaction Protocol

Paper Sections

3.2. The Price Request Phase

3.3. The Goods Delivery Phase

3.4. The Payment Phase

  • Error recovery (Not enough balance, Corruption, No access, etc.)
  • Pseudonyms Protocol

Paper Sections

4.2. Pseudonyms

  • Access Control Mechanism

Mini access control app (not according to the paper)

Approach

Our main focus was to implement the protocol, so the approach we took was to use intermediary files that act as sockets; hence, for the various steps of the protocol, instead of writing data into a socket and reading from it, we used files. In the next section, I provide more details about the files and code presented.

How To Run

  1. Run the following command in a terminal: git clone https://github.com/navidpadid/NetBill_Transaction_Protocol/

  2. Run the code

Here are various scenarios in which I've run the code from a fresh clone of the repository.

Some scenarios include: with/without pseudonyms, with/without access to buy, with/without a NetBill account, with/without enough credits to buy a commodity.

Download Implementation From Github With Detailed Explanations


This application prototype was part of my project for the E-Commerce course.


It was made with proto.io.

There are other resources for this application, such as a detailed business plan, business model, app workflow, etc.

Since they are only available in Persian, I didn't upload them to the Internet, but if anyone is interested in the documents, just drop me an email!

How to run the prototype?

Just open index.html with a browser.

Download From Github With Explanations





Various limited documents from my work tools from when I worked as a DevOps engineer. These are basic usage notes and limited testing results (because I don't have permission to publicize the full documents that I wrote for my work).

Description:

ETCD_CLUSTER.pdf ==> Setting up an etcd cluster

Objectives:

• Set up an etcd cluster on 3 servers

• Write an appropriate service to make sure etcd is always running

  • etcd version ==> 3.3.9

  • Server OS ==> CentOS 7

The related Ansible code is in the mycontrolansible directory.
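For reference, a static etcd v3 bootstrap passes the same --initial-cluster string to every node. A small sketch of composing it (node names and IPs below are placeholders):

```shell
#!/bin/sh
# Sketch: compose the static-bootstrap --initial-cluster value for an
# etcd v3 cluster. Node names and IPs are placeholders.

# Print the --initial-cluster value for a list of name=ip pairs.
initial_cluster() {
    out=""
    for pair in "$@"; do
        name=${pair%%=*}
        ip=${pair#*=}
        if [ -z "$out" ]; then
            out="$name=http://$ip:2380"
        else
            out="$out,$name=http://$ip:2380"
        fi
    done
    echo "$out"
}

CLUSTER=$(initial_cluster infra0=10.0.1.10 infra1=10.0.1.11 infra2=10.0.1.12)

# On node infra0 (10.0.1.10), etcd is then started with flags like:
#   etcd --name infra0 \
#     --initial-advertise-peer-urls http://10.0.1.10:2380 \
#     --listen-peer-urls http://10.0.1.10:2380 \
#     --listen-client-urls http://10.0.1.10:2379,http://127.0.0.1:2379 \
#     --advertise-client-urls http://10.0.1.10:2379 \
#     --initial-cluster-token my-etcd-cluster \
#     --initial-cluster "$CLUSTER" \
#     --initial-cluster-state new
echo "$CLUSTER"
```

Wrapping that etcd command in a systemd unit with Restart=always is one way to satisfy the "always running" objective above.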

LizardFS.pdf ==> Setting up LizardFS and testing it

Objectives:

• Set up a simple LizardFS master on one server

• Test LizardFS performance with 3, 5, and 7 chunk servers (HDD)

• Test LizardFS performance on SSD

  • LizardFS version ==> 3.12

ProxySQL.pdf ==> Setting up and testing ProxySQL

Objectives:

• Set up the ProxySQL load balancer on one server

• Configure ProxySQL

• Test the performance and load balancing of ProxySQL with sysbench

• Set up a ProxySQL architecture with no single point of failure

telegrafLogparser.pdf ==> Using Telegraf to parse custom logs

Objectives:

• Read custom logs with Telegraf

• Parse custom logs with Telegraf

• Generate output from custom logs to InfluxDB

The TIG stack [ Telegraf, InfluxDB, Grafana ]

DatabaseTestDoc.pdf ==> Testing database response time

We will test 4 databases (this doc is very limited and acts as a roadmap for more professional tests):

RDB: MySQL, PostgreSQL

NoSQL: MongoDB, Cassandra

GrafanaPrometheus.pdf ==> Setting up Prometheus and Grafana for monitoring (very basic)

Objectives:

• Set up the Prometheus server

• Set up Prometheus exporters

• Set up the Grafana server

• Set up a Grafana dashboard

GalleraCluster.pdf ==> Setting up a MySQL Galera cluster (very basic)

Objectives:

• Set up a MySQL Galera cluster on 3 servers

• Write an appropriate service to make sure the MySQL Galera cluster is always running

  • MySQL version ==> 5.7

  • MySQL-wsrep ==> 5.7

  • Server OS ==> CentOS 7

TCP_tune.pdf ==> Some TCP tuning parameters that I have gathered from the Internet (this is a messy doc, just information gathered from the Internet; for more info on TCP tuning, refer to my blog!)

Download From Github With Explanations







What is Reinforcement Learning about?

In contrast to supervised learning, where machines learn from examples that include the correct decision, and unsupervised learning, where machines discover patterns in the data, reinforcement learning allows machines to learn from partial, implicit and delayed feedback. This is particularly useful in sequential decision-making tasks where a machine repeatedly interacts with the environment or users. Applications of reinforcement learning include robotic control, autonomous vehicles, game playing, conversational agents, assistive technologies, computational finance, operations research, etc.

Disclaimer!

This repository mainly contains my assignments for this Reinforcement Learning course, which was offered in Fall 2021 at UWaterloo by Professor Pascal Poupart. For academic integrity reasons, I don't have permission to post this repository publicly online; therefore, it is only accessible upon explicit request to me, as defined in this document.


Download From Github With Explanations [PRIVATE REPO, ONLY ACCESSIBLE BY EXPLICIT REQUEST]

Part 1

Summary:

  • Markov Decision Process [from scratch in Python]
    • value iteration
    • policy iteration
    • modified policy iteration
  • Maze problem to test above algorithms
  • Compare the performance of each algorithm
  • Q-Learning [from scratch in Python]
  • Use matplotlib to compare the effect of the Q-Learning parameters on the cumulative discounted rewards per episode
  • Deep Q-network to solve the CartPole problem from OpenAI Gym
    • Using Agents library from TensorFlow
  • Use matplotlib to compare the effect of the deep Q-network parameters on the average cumulative discounted rewards [also averaged across several runs to reduce stochasticity]
  • More details:

    https://cs.uwaterloo.ca/~ppoupart/teaching/cs885-fall21/assignments.html assignment 1 section

Part 2

Summary:

  • Bandit algorithms from scratch in Python
    • epsilon-greedy
    • Thompson sampling
    • UCB
  • REINFORCE algorithm from scratch in Python
  • model-based RL algorithm from scratch in Python
  • Soft Q-Learning in PyTorch
  • Soft Actor Critic in PyTorch
  • Discussion of the properties of each algorithm and their effect on performance
  • More details:

    https://cs.uwaterloo.ca/~ppoupart/teaching/cs885-fall21/assignments.html assignment 2 section

Part 3

  • Partially Observable RL
    • Deep Recurrent Q-Learning (DRQN) algorithm in PyTorch
      • Using LSTM and MLP
      • Compare to Deep Q Network's performance
  • Generative Adversarial Imitation Learning (GAIL) algorithm in Pytorch
    • Using deterministic policy gradient update technique
    • Compare to Behavior Cloning's (BC) performance
  • Categorical (C51) distributional RL algorithm
    • Compare to DQN on the Cartpole domain with epsilon greedy exploration
  • More details:

    https://cs.uwaterloo.ca/~ppoupart/teaching/cs885-fall21/assignments.html assignment 3 section

Download From Github With Explanations [PRIVATE REPO, ONLY ACCESSIBLE BY EXPLICIT REQUEST]






What is this repo about?

There is a dire need for effective methods to model and analyze data, extract useful knowledge from it, and know how to act on it. In this series of notebooks you will learn the fundamental tools for assessing, preparing and analyzing data. You will learn to design a data and analysis pipeline to move from raw data to task solution. You will learn to implement a variety of analytical and machine learning algorithms, including supervised, unsupervised and other learning approaches.

Download From Github With Explanations

Part 1

Summary:

  • Load and work with two famous datasets "Iris" and "Heart Disease"
  • Data cleaning approaches: filling missing values, noise reduction, normalization, and visualization
  • Visualization for understanding data: pair plots, scatter plots, correlation and data distribution analysis
  • Statistical analysis on data: correlation coefficient, statistical variables
  • KNN classifier with scikit-learn: parameter tuning with cross validation, metrics analysis, plot analysis, AUC method analysis
  • Further tuning KNN classifier: weighted KNN approaches, algorithm selection, speed, etc.

Part 2

Summary:

  • Two datasets: Johns Hopkins University CSSE COVID-19 (https://github.com/CSSEGISandData/COVID-19/tree/master/csse_covid_19_data), US 2020 Census

  • Preprocessing data: data cleaning, outlier dealing, normalization, missing value, etc.
  • Representation learning: PCA, LDA, scree plot and statistical analysis, visualization insights, comparing the algorithms
  • Data analysis for classification: original, hybrid, or LDA/PCA constructed data
  • Tree-based algorithms for classification with extensive analysis: decision trees, random forest, parameter tuning, group k-fold cross validation, gradient tree boosting
  • Naive bayes classifier (NB): var smoothing analysis
  • Comparing the performance of NB compared to the decision tree approaches

Part 3

  • Preprocessing data (outlier removal, feature selection, normalization, train-test split, creating 3 different training sets for the 3 targets, etc.)
  • Deep neural network: MLP, model and architecture analysis, tuning the hyperparameters, class weights
  • LSTM networks: model optimization, L2 regularization, activation functions, dropout, batch normalization
  • Deep MLP vs LSTM: thorough analysis (time, accuracy, number of parameters, etc.)
  • Convolutional neural network: parameter and architecture tuning, padding, activation, classification layer
  • ResNet CNN model: thorough analysis and comparison to the previous CNN model (time, depth, number of parameters)

Download From Github With Explanations

These codes were written by Navid Malekghaini and Soheil Johari.




This text document (which is in a .c file format only for fancy markup by default) is a quick intro to the VIM editor. It can be very useful both as a mini cheat sheet and as a guide to start using VIM at an intermediate level without any previous knowledge.

BONUS: A very quick intro to TMUX is also available in the "miniTMUX.txt" file.


Check it out here!


