AWS SAP Notes 03 - Storage Services

FSx

FSx For Windows File Servers

  • FSx for Windows are fully managed native Windows file servers/file shares
  • Designed for integration with Windows environments
  • Integrates with Directory Service or Self-Managed AD
  • Resilient and highly available service. Can be deployed in single or multi-AZ within a VPC
  • We can perform on-demand and scheduled backups using FSx
  • FSx can be accessed over VPC peering, VPN and DX
  • FSx supports de-duplication, scaling through Distributed File System (DFS), KMS at rest encryption and enforced encryption in transit
  • Allows for volume shadow copies (VSS), so users can restore previous versions of a file
  • It is highly performant: 8 MB/s up to 2 GB/s throughput, 100k's IOPS, <1ms latency
  • Features:
    • VSS - user-driven restores (view previous versions)
    • Native file system accessible over SMB
    • Uses Windows permission model
    • DFS - scale-out file share structure
    • Managed - no file server admin
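
A minimal boto3 sketch of provisioning a multi-AZ FSx for Windows file system joined to a directory, covering the points above; the directory ID, subnet/security group IDs and sizing values are placeholders, not real resources:

```python
import boto3

fsx = boto3.client("fsx")

# Multi-AZ FSx for Windows file system joined to a Directory Service directory
response = fsx.create_file_system(
    FileSystemType="WINDOWS",
    StorageCapacity=1024,                            # GiB
    StorageType="SSD",
    SubnetIds=["subnet-aaa111", "subnet-bbb222"],    # one subnet per AZ for MULTI_AZ_1
    SecurityGroupIds=["sg-0123456789abcdef0"],
    WindowsConfiguration={
        "ActiveDirectoryId": "d-1234567890",         # managed AD to integrate with
        "DeploymentType": "MULTI_AZ_1",              # or SINGLE_AZ_2 for single-AZ
        "PreferredSubnetId": "subnet-aaa111",
        "ThroughputCapacity": 32,                    # MB/s
        "AutomaticBackupRetentionDays": 7,           # scheduled backups
    },
)
print(response["FileSystem"]["FileSystemId"])
```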

FSx for Lustre

  • File system designed for high performance workloads
  • Is a managed implementation of the Lustre file system, designed for HPC - Linux clients (POSIX file system)
  • Lustre is designed for machine learning, big data, financial modelling
  • Can scale to 100's GB/s throughput and offers sub millisecond latency
  • Can be provisioned using 2 different deployment types:
    • Persistent: provides HA in one AZ only, provides self-healing, recommended for long term data storage
    • Scratch: highly optimized for short term solutions, no replication is provided
  • FSx is available over VPN or DX for on-premises
  • S3 repository: files are stored in S3 and they are lazily loaded into FSx for Lustre file system at first usage
  • Sync changes between the file system and S3: hsm_archive command. The file system and the S3 bucket are not automatically in sync
  • Lustre file system:
    • MDT - metadata is stored on Metadata Targets
    • OST - Objects are stored on object storage targets, each can be up to 1.17 TiB
  • Baseline performance of the file system is based on the size:
    • Size: min 1.2 TiB and then we can use increments of 2.4 TiB
    • Scratch: base 200 MB/s per TiB of storage
    • Persistent: offers 50 MB/s, 100 MB/s or 200 MB/s per TiB of storage
    • For both types we can burst up to 1300 MB/s per TiB using credits
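
A boto3 sketch of creating a persistent FSx for Lustre file system linked to an S3 repository; the bucket name, subnet ID and throughput tier are illustrative assumptions:

```python
import boto3

fsx = boto3.client("fsx")

response = fsx.create_file_system(
    FileSystemType="LUSTRE",
    StorageCapacity=1200,                     # GiB: minimum 1.2 TiB, then 2.4 TiB increments
    SubnetIds=["subnet-aaa111"],
    LustreConfiguration={
        "DeploymentType": "PERSISTENT_1",     # SCRATCH_1 / SCRATCH_2 for short-term workloads
        "PerUnitStorageThroughput": 100,      # 50, 100 or 200 MB/s per TiB (persistent only)
        "ImportPath": "s3://my-data-bucket",          # objects lazily loaded on first access
        "ExportPath": "s3://my-data-bucket/export",   # target for hsm_archive exports
    },
)
print(response["FileSystem"]["FileSystemId"])
```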

EFS - Elastic File System

  • It is a network-based file system which can be mounted on Linux-based instances
  • Can be mounted to multiple instances at once
  • EFS is an implementation of the NFSv4 file system protocol
  • EFS file systems can be mounted in a folder in Linux based operating systems
  • EFS storage exists separately from the lifecycle of an EC2 instance
  • It can be shared between many EC2 instances
  • It is a private service, it can be mounted via mount targets inside a VPC. By default an EFS file system is isolated to the VPC in which it was provisioned
  • EFS can be accessed outside of the VPC over hybrid networking: VPN or DX. EFS is a great tool for storage handling across multiple units of compute
  • EFS is accessible for Lambda functions. Lambda has to be configured to use VPC networking in order to use EFS
  • Mount targets: provide IP addresses in the range of the VPC. For HA we should provision mount targets in every Availability Zone used by the VPC
  • EFS offers 2 performance modes:
    • General Purpose: ideal for latency sensitive use cases (it is the default)
    • Max I/O: can be used to scale to higher levels of aggregate throughput. Has higher latencies
  • EFS offers 2 different throughput modes:
    • Bursting (default): works similar to EBS GP2 storage
    • Provisioned: we can specify throughput requirements independent of the size
  • EFS offers 2 storage classes:
    • Standard
    • Infrequent Access (IA)
  • We can automatically move data between these 2 classes using lifecycle policies
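
The performance mode, throughput mode and Standard→IA lifecycle policy above can all be set through the EFS API; a boto3 sketch (subnet ID and throughput figure are placeholders):

```python
import boto3

efs = boto3.client("efs")

fs = efs.create_file_system(
    CreationToken="shared-app-storage",
    PerformanceMode="generalPurpose",        # or "maxIO" for higher aggregate throughput
    ThroughputMode="provisioned",            # or "bursting" (default)
    ProvisionedThroughputInMibps=128,
    Encrypted=True,
)

# One mount target per AZ gives HA access from instances in that VPC
# (in practice, wait until the file system reports "available" first)
efs.create_mount_target(FileSystemId=fs["FileSystemId"], SubnetId="subnet-aaa111")

# Move files to the Infrequent Access class after 30 days without access
efs.put_lifecycle_configuration(
    FileSystemId=fs["FileSystemId"],
    LifecyclePolicies=[{"TransitionToIA": "AFTER_30_DAYS"}],
)
```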

S3

Storage Classes

  • S3 Standard (default):
    • The objects are stored in at least 3 AZs
    • Provides eleven nines (99.999999999%) of durability
    • Replication uses MD5 checksums together with CRCs to detect object issues
    • When objects are stored in S3 using the API, an HTTP 200 OK response is provided
    • Billing:
      • GB/month of data stored in S3
      • A per-GB charge for data transferred out (transfer in is free)
      • Price per 1000 requests
      • No specific retrieval fee, no minimum duration, no minimum size
    • S3 standard makes data accessible immediately, can be used for static website hosting
    • Should be used for data frequently accessed
  • S3 Standard-IA:
    • Shares most of the characteristics of S3 standard: objects are replicated in 3 AZs, durability is the same, availability is the same, first byte latency is the same, objects can be made publicly available
    • Billing:
      • It is more cost effective for storing data
      • Data transfer fee is the same as S3 standard
      • Retrieval fee: for every GB of data there is a retrieval fee, overall cost may increase with frequent data access
      • Minimum duration charge: we will be billed for a minimum of 30 days, minimum capacity of the objects being 128KB (smaller objects will be billed as being 128 KB)
      • Should be used for long lived data where data access is infrequent
  • S3 One Zone-IA:
    • Similar to S3 standard, but cheaper. Also cheaper than S3 standard IA
    • Data stored using this class is only stored in one Availability Zone
    • Billing:
      • Similar to S3 standard IA: similar minimum duration fee of 30 days, similar billing for smaller objects and also similar retrieval fee per GB
      • Same level of durability (if the AZ does not fail)
      • Data is replicated inside one AZ
    • Since data is not replicated between AZs, this storage class is not HA. It should be used for non-critical data or for data that can be reproduced easily
  • S3 Glacier:
    • Same data replication as S3 standard and S3 standard IA
    • Same durability characteristics
    • Storage cost is about 1/5 of S3 standard
    • S3 objects stored in Glacier should be considered cold objects (should not be accessed frequently)
    • Objects in Glacier class are just pointers to real objects and they can not be made public
    • In order to retrieve them, we have to perform a retrieval process:
      • A job that needs to be done to get access to objects
      • Retrievals processes are billed
      • When objects are retrieved from Glacier, they are temporarily stored in the Standard-IA class and removed after a set number of days. To keep a permanent copy, we have to copy the restored object back into a standard class (see the restore sketch after the comparison table)
    • Retrieval process types:
      • Expedited: objects are retrieved in 1-5 minutes, retrieval process being the most expensive
      • Standard: data is accessible in 3-5 hours
      • Bulk: data is accessible in 5-12 hours at lower cost
    • Glacier has a 40KB minimum billable size and a 90 days minimum duration for storage
    • Glacier should be used for data archival, where data can be retrieved in minutes to hours
  • S3 Glacier Deep Archive:
    • Approximately 1/4 of the price of standard Glacier
    • Deep Archive represents data in a frozen state
    • Has a 40KB minimum billable data size and a 180 days minimum duration for data storage
    • Objects can not be made publicly available, data access is similar to standard Glacier class
    • Restore jobs are longer:
      • Standard: up to 12 hours
      • Bulk: up to 48 hours
    • Should be used for archival which is very rarely accessed
  • S3 Intelligent-Tiering:
    • It is a storage class containing 4 different storage tiers
    • Objects that are accessed frequently are stored in the Frequent Access tier, less frequently accessed objects are stored in the Infrequent Access tier. Objects accessed very infrequently will be stored in either the Archive or Deep Archive tier
    • We don't have to worry about moving objects between tiers, this is done automatically by the storage class
    • Intelligent-Tiering can be configured, archiving data is optional and can be enabled/disabled
    • There is no retrieval cost for moving data between the frequent and infrequent tiers; instead we are billed a monitoring and automation charge per 1,000 objects
    • S3 Intelligent-Tiering is recommended for unknown or uncertain data access usage
  • Storage classes comparison:
|  | S3 Standard | S3 Intelligent-Tiering | S3 Standard-IA | S3 One Zone-IA | S3 Glacier | S3 Glacier Deep Archive |
| --- | --- | --- | --- | --- | --- | --- |
| Designed for durability | 99.999999999% (11 9's) | 99.999999999% (11 9's) | 99.999999999% (11 9's) | 99.999999999% (11 9's) | 99.999999999% (11 9's) | 99.999999999% (11 9's) |
| Designed for availability | 99.99% | 99.9% | 99.9% | 99.5% | 99.99% | 99.99% |
| Availability SLA | 99.9% | 99% | 99% | 99% | 99.9% | 99.9% |
| Availability Zones | ≥3 | ≥3 | ≥3 | 1 | ≥3 | ≥3 |
| Minimum capacity charge per object | N/A | N/A | 128KB | 128KB | 40KB | 40KB |
| Minimum storage duration charge | N/A | 30 days | 30 days | 30 days | 90 days | 180 days |
| Retrieval fee | N/A | N/A | per GB retrieved | per GB retrieved | per GB retrieved | per GB retrieved |
| First byte latency | milliseconds | milliseconds | milliseconds | milliseconds | select minutes or hours | select hours |
| Storage type | Object | Object | Object | Object | Object | Object |
| Lifecycle transitions | Yes | Yes | Yes | Yes | Yes | Yes |
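
The storage class is chosen per object at upload time, and Glacier objects come back through a temporary restore job as described in the Glacier bullets above; a boto3 sketch (bucket and key names are placeholders):

```python
import boto3

s3 = boto3.client("s3")

# Upload straight into a colder class instead of S3 Standard
s3.put_object(
    Bucket="my-archive-bucket",
    Key="logs/2020.tar.gz",
    Body=b"...archive bytes...",
    StorageClass="GLACIER",   # e.g. STANDARD_IA, ONEZONE_IA, INTELLIGENT_TIERING, DEEP_ARCHIVE
)

# Ask for a temporary restore of the Glacier object (restored copy is removed after 7 days)
s3.restore_object(
    Bucket="my-archive-bucket",
    Key="logs/2020.tar.gz",
    RestoreRequest={
        "Days": 7,
        "GlacierJobParameters": {"Tier": "Bulk"},   # Expedited | Standard | Bulk
    },
)
```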

S3 Lifecycle Configuration

  • We can create lifecycle rules on S3 buckets which can move objects between tiers or expire objects automatically
  • A lifecycle configuration is a set of rules applied to a bucket or a group of objects in a bucket
  • Rules consist of actions:
    • Transition actions: move objects from one tier to another after a certain time
    • Expiration actions: delete objects or versions of objects
  • Lifecycle rules can not move objects based on how often they are accessed (that is what Intelligent-Tiering does); they move objects based on time passed
  • By moving objects from one tier to another we can save costs, expiring objects also will help saving costs
  • Transitions between tiers: objects can only move downwards through the classes (Standard → Standard-IA → Intelligent-Tiering → One Zone-IA → Glacier → Glacier Deep Archive), never automatically back up
  • Considerations:
    • Smaller objects cost more in Standard-IA, One Zone-IA, etc.
    • An object needs to remain for at least 30 days in the Standard tier before a rule can move it to the infrequent tiers (objects can still be uploaded directly into the infrequent tiers)
    • A single rule can not move an object into the infrequent tiers and then straight on to the Glacier tiers: the object has to stay at least 30 days in an infrequent tier before the same rule can move it again. To overcome this, we can define 2 different rules (see the lifecycle sketch below)
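
A sketch of the transition and expiration actions described above expressed as a lifecycle rule with boto3; the bucket name and prefix are assumptions:

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="my-log-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "tier-down-logs",
                "Status": "Enabled",
                "Filter": {"Prefix": "logs/"},
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"},   # after 30 days in Standard
                    {"Days": 90, "StorageClass": "GLACIER"},       # at least 30 days later
                ],
                "Expiration": {"Days": 365},                       # expiration action
            }
        ]
    },
)
```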

S3 Replication

  • 2 types of replication are supported by S3:
    • Cross-Region Replication (CRR)
    • Same-Region Replication (SRR)
  • Both types of replication support same account replication and cross-account replication
  • If we configure cross-account replication, we have to define a policy on the destination account to allow replication from the source account
  • We can replicate all objects from a bucket or we can create rules for a subset of objects
  • We can specify which storage class to use for an object in the destination bucket
  • We can also define the ownership of the objects in the destination bucket. By default it will be the same as the owner in the source bucket
  • Replication Time Control (RTC): if enabled, ensures objects are replicated within 15 minutes
  • Replication considerations:
    • Replication is not retroactive: only newer objects are replicated after the replication is enabled
    • Versioning needs to be enabled for replication
    • Replication is one-way only
    • Replication is capable of handling objects encrypted with SSE-S3 and SSE-KMS. SSE-C is not supported for replication
    • Replication requires the owner of the source bucket to have permissions on the objects which will be replicated
    • System events will not be replicated
    • Any objects in the Glacier and Glacier Deep Archive will not be replicated
    • Deletes are not replicated!
  • Replication use cases:
    • SRR:
      • Log aggregation
      • PROD and Test sync
      • Resilience with strict sovereignty
    • CRR
      • Global resilience improvements
      • Latency reduction
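
A cross-region replication rule configured with boto3; the IAM role ARN, bucket names and destination storage class are placeholders, and versioning is assumed to already be enabled on both buckets:

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_replication(
    Bucket="source-bucket",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::111111111111:role/s3-replication-role",
        "Rules": [
            {
                "ID": "crr-to-eu",
                "Status": "Enabled",
                "Priority": 1,
                "Filter": {"Prefix": ""},                        # replicate everything
                "DeleteMarkerReplication": {"Status": "Disabled"},
                "Destination": {
                    "Bucket": "arn:aws:s3:::destination-bucket",
                    "StorageClass": "STANDARD_IA",               # storage class override
                },
            }
        ],
    },
)
```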

S3 Encryption

  • Buckets aren't encrypted, objects inside buckets are encrypted
  • Encryption at rest types:
    • Client-Side encryption: data is encrypted before it leaves the client
    • Server-Side encryption: data is encrypted at the server side, it is sent in plaintext from the client
  • Both encryption types use encryption in-transit for communication
  • There are 3 types of server-side encryption supported:
    • SSE-C: server-side encryption with customer-provided keys
      • The customer is responsible for managing the keys, S3 manages the encryption
      • When an object is put into S3, we need to provide the key utilized
      • The object will be encrypted with the key, and a hash of the key is generated and stored
      • The key will be discarded after the encryption is done
      • In case of object retrieval, we need to provide the key again
    • SSE-S3: server-side encryption with Amazon S3-managed keys
      • AWS handles both the encryption/decryption and the key management
      • When using this method, S3 creates a master key for the encryption process (handled entirely by S3)
      • When an object is uploaded, a unique key is used to encrypt it. After the encryption, the unique key is itself encrypted with the master key and the plaintext key is discarded. Both the encrypted key and the object are stored
      • For most situations, this is the default type of encryption. It uses the AES-256 algorithm, and key management is handled entirely by S3
    • SSE-KMS: Server-side encryption with customer-managed keys stored in AWS Key Management Service (KMS)
      • Similar to SSE-S3, but for this method the KMS handles stored keys
      • When an object is uploaded for the first time, S3 communicates with KMS and creates a customer master key (CMK). This is the default master key used from then on
      • When new objects are uploaded AWS uses the CMK to generate individual keys for encryption (data encryption keys). The data encryption key will be stored along with the object in encrypted format
      • We don't have to use the default CMK provided by AWS, we can use our own CMK. We can control the permissions on it and how it is rotated
      • SSE-KMS provides role separation:
        • We can specify who can access the CMK from KMS
        • Administrators can administer buckets but may not have access to the KMS keys
  • Default Bucket Encryption:
    • When an object is uploaded, we can specify which server-side encryption to be used by adding a header to the request: x-amz-server-side-encryption
    • When this header is not specified, objects won't be encrypted, although we can set a default encryption method at the bucket level, which will be used when this header is missing
    • Values for the header:
      • To use SSE-S3: AES256
      • To use SSE-KMS: aws:kms
    • Default encryption can not be used to restrict encryption type, we can use a bucket policy for that
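
A sketch of setting a default bucket encryption and overriding it per object; boto3's ServerSideEncryption parameter maps to the x-amz-server-side-encryption header, and the bucket and key-alias names are assumptions:

```python
import boto3

s3 = boto3.client("s3")

# Default bucket encryption: applied when the upload carries no encryption header
s3.put_bucket_encryption(
    Bucket="my-bucket",
    ServerSideEncryptionConfiguration={
        "Rules": [
            {
                "ApplyServerSideEncryptionByDefault": {
                    "SSEAlgorithm": "aws:kms",            # SSE-KMS; use "AES256" for SSE-S3
                    "KMSMasterKeyID": "alias/my-s3-cmk",  # our own CMK instead of the default
                }
            }
        ]
    },
)

# Per-object override, equivalent to sending x-amz-server-side-encryption: AES256
s3.put_object(Bucket="my-bucket", Key="report.csv", Body=b"a,b,c\n", ServerSideEncryption="AES256")
```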

S3 Presigned URLs

  • Is a way to give other people access to our buckets using our credentials
  • An IAM admin can generate a presigned URL for a specific object using his credentials. This URL will have an expiry date
  • The presigned URL can be given to unauthenticated users in order to access the object
  • The user will interact with S3 using the presigned URL as if it was the person who generated the presigned URL
  • Presigned URLs can be used for downloads and for uploads
  • Presigned URLs can be used to give application users direct access to private files, offloading load from the application. This approach requires a service account for the application, which generates the presigned URLs
  • Presigned URL considerations:
    • We can create a presigned URL for objects we don't have access to
    • When using the URL, the permissions match the identity which generated it. The permissions are evaluated at the moment the object is accessed (it might happen that the identity had its permissions revoked, in which case we won't have access to the object either)
    • We should not generate presigned URLs using temporary credentials (e.g. from an assumed IAM role). When the temporary credentials expire, the presigned URL stops working as well
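
A minimal boto3 sketch generating time-limited URLs (bucket, keys and expiry times are placeholders); whoever uses the URL gets the permissions of the identity whose credentials signed it:

```python
import boto3

s3 = boto3.client("s3")

# Download URL valid for one hour; works without AWS credentials for anyone who has it
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-bucket", "Key": "private/report.pdf"},
    ExpiresIn=3600,
)
print(url)

# Uploads work the same way with "put_object"
upload_url = s3.generate_presigned_url(
    "put_object",
    Params={"Bucket": "my-bucket", "Key": "uploads/new-file.bin"},
    ExpiresIn=900,
)
```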

S3 Select and Glacier Select

  • Are ways to retrieve parts of objects instead of entire objects
  • S3 can store huge objects (up to 5 TB)
  • Retrieving a huge object takes time and consumes transfer capacity
  • S3/Glacier provide services to access partial objects using SQL-like statements to select parts of objects
  • Both S3 Select and Glacier Select support the following formats: CSV, JSON, Parquet, and BZIP2 compression for CSV and JSON
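
An S3 Select sketch that pulls only the matching CSV rows out of a large object; the bucket, key and column names are illustrative:

```python
import boto3

s3 = boto3.client("s3")

resp = s3.select_object_content(
    Bucket="my-bucket",
    Key="data/orders.csv",
    ExpressionType="SQL",
    Expression="SELECT s.order_id, s.total FROM S3Object s WHERE CAST(s.total AS FLOAT) > 100",
    InputSerialization={"CSV": {"FileHeaderInfo": "USE"}, "CompressionType": "NONE"},
    OutputSerialization={"CSV": {}},
)

# The result is streamed back as events; only the selected bytes leave S3
for event in resp["Payload"]:
    if "Records" in event:
        print(event["Records"]["Payload"].decode())
```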

S3 Access Points

  • Improves the manageability of objects when buckets are used for many different teams or they contain objects for a large amount of functions
  • Simplify managing access to S3 buckets/objects
  • Rather than accessing everything through 1 bucket (with 1 bucket policy), we can create many access points, each with a different policy
  • Each access point can be limited from where it can be accessed, and each can have different network access controls
  • Each access point has its own endpoint address
  • We can create an access point using the console or the CLI: aws s3control create-access-point --name <name> --account-id <account-id> --bucket <bucket-name>
  • Any permission defined on the access point needs to be defined on the bucket policy as well

S3 Block Public Access

  • The Amazon S3 Block Public Access feature provides settings for access points, buckets, and accounts to help manage public access to Amazon S3 resources
  • The settings we can configure with the Block Public Access Feature are:
    • BlockPublicAcls: prevents any new ACLs being created, or existing ACLs being modified, that enable public access to objects. On its own, existing public ACLs are not affected
    • IgnorePublicAcls: any existing ACLs with public access are ignored; this does not prevent them from being created, but prevents their effects
    • BlockPublicPolicy: prevents a bucket policy containing public actions from being created or modified on an S3 bucket; the bucket itself will still honour any existing policy
    • RestrictPublicBuckets: restricts access to a bucket with a public policy to only AWS service principals and authorized users within the bucket owner's account
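
These four settings map directly onto the PublicAccessBlock API; a boto3 sketch enabling all of them on a bucket (bucket name assumed):

```python
import boto3

s3 = boto3.client("s3")

s3.put_public_access_block(
    Bucket="my-bucket",
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,        # reject new public ACLs
        "IgnorePublicAcls": True,       # ignore any existing public ACLs
        "BlockPublicPolicy": True,      # reject new public bucket policies
        "RestrictPublicBuckets": True,  # limit access to the bucket owner's account
    },
)
```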

S3 Cost Saving Options

  • S3 Select and Glacier Select: save network and CPU cost by retrieving only the necessary data
  • S3 Lifecycle Rules: transition objects between tiers
  • Compress objects to save space
  • S3 Requester Pays:
    • In general, bucket owners pay for all Amazon S3 storage and data transfer costs associated with their bucket
    • With Requester Pays buckets, the requester instead of the bucket owner pays the cost of the request and the data download from the bucket
    • The bucket owner always pays the cost of storing data
    • Helpful when we want to share large datasets with other accounts
    • Requires a bucket policy
    • If an IAM role is assumed, the owner account of that role pays for the request!
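
A sketch of turning on Requester Pays and of the acknowledgement a requester then has to send; bucket and key names are assumptions:

```python
import boto3

s3 = boto3.client("s3")

# Bucket owner enables Requester Pays (the owner still pays for storage)
s3.put_bucket_request_payment(
    Bucket="my-dataset-bucket",
    RequestPaymentConfiguration={"Payer": "Requester"},
)

# A requester in another account must explicitly accept the request/transfer charges
obj = s3.get_object(
    Bucket="my-dataset-bucket",
    Key="exports/big-file.parquet",
    RequestPayer="requester",
)
```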

S3 Object Lock

  • Object Lock can be enabled on newly created S3 buckets. For existing ones in order to enable Object Lock we have to contact AWS support
  • Versioning will be also enabled when Object Lock is enabled
  • Object Lock can not be disabled, versioning can not be suspended when Object Lock is active
  • Object Lock is a Write-Once-Read-Many (WORM) architecture: when an object is written, can not be modified
  • There are 2 ways S3 manages object retention:
    • Retention Period
    • Legal Hold
  • Objects can have both retention methods enabled at the same time
  • Object Lock retentions can be individually defined on object versions, a bucket can have default Object Lock settings

Retention Period

  • When a retention period is enabled on an object, we specify the days and years for the period
  • The retention period ends once the specified days/years have elapsed
  • There are 2 types of retention period modes:
    • Compliance mode:
      • Object can not be adjusted, deleted or overwritten. The retention period can not be reduced, the retention mode can not be adjusted even by the account root user
      • Should be used for compliance reasons
    • Governance mode:
      • Objects can not be adjusted, deleted or overwritten, but special permissions can be added to some identities to allow for the lock setting to be adjusted
      • These identities should have the s3:BypassGovernanceRetention permission
      • Governance mode can be overridden by passing the x-amz-bypass-governance-retention:true header (the console UI passes this header by default)
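
A governance-mode retention applied to an object with boto3; the bucket, key and date are placeholders, and the bucket is assumed to have Object Lock enabled:

```python
import datetime
import boto3

s3 = boto3.client("s3")

s3.put_object_retention(
    Bucket="my-locked-bucket",
    Key="records/invoice-001.pdf",
    Retention={
        "Mode": "GOVERNANCE",             # or "COMPLIANCE" (cannot be shortened by anyone)
        "RetainUntilDate": datetime.datetime(2026, 1, 1),
    },
    # Shortening/removing a governance lock additionally needs
    # s3:BypassGovernanceRetention and this flag set to True
    BypassGovernanceRetention=False,
)
```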

Legal Hold

  • We don't set a retention period for this type of retention, Legal Hold can be on or off for specific versions of an object
  • We can't delete or overwrite an object with Legal Hold
  • An extra permission is required when we want to add or remove the Legal Hold on an object: s3:PutObjectLegalHold
  • Legal Hold can be used for preventing accidental removals
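
Turning a Legal Hold on or off for an object is a single call; names are placeholders, and the caller needs s3:PutObjectLegalHold:

```python
import boto3

s3 = boto3.client("s3")

s3.put_object_legal_hold(
    Bucket="my-locked-bucket",
    Key="records/invoice-001.pdf",
    LegalHold={"Status": "ON"},   # set to "OFF" to release the hold
)
```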

Amazon Macie

  • It is a data security and data privacy service
  • Macie is a service to discover, monitor and protect data stored in S3 buckets
  • Once enabled and pointed to buckets, Macie will automatically discover data and categorize it as PII, PHI, Finance etc.
  • Macie uses data identifiers. There are 2 types of data identifiers:
    • Managed Data Identifier: built-in, can use machine learning, pattern matching to analyze and discover data. It is designed to detect sensitive data from many countries
    • Custom Data Identifier: created by clients, they are proprietary to accounts and they are regex based
  • Discovery Jobs: these jobs will use data identifiers to manage and search for sensitive content. They will generate findings which can be used for integration with other AWS services (ex: EventBridge) in order to do automatic remediation
  • Macie uses multi account architecture: one account is the master account which can manage other accounts to discover sensitive data

Macie Identifiers

  • Data Discovery Jobs: analyze data in order to determine whether objects contain sensitive data. This is done using data identifiers
  • Managed Data Identifiers:
    • Created and managed by AWS
    • Can be used to detect a growing list of common sensitive data types: credentials, financial data, health data, personal identifiers (addresses, passports, etc.)
  • Custom Data Identifiers:
    • Can be created by us, AWS account users/owners
    • They are using regex patterns to match data
    • We can add optional keywords: optional sequences that need to be in proximity to the regex match
    • Maximum Match Distance: how close keywords must be to the regex match
    • We can also include ignore words
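
A custom data identifier built from the pieces above (regex, keywords, ignore words, maximum match distance) using boto3's macie2 client; the pattern and values are invented for illustration:

```python
import boto3

macie = boto3.client("macie2")

resp = macie.create_custom_data_identifier(
    name="internal-employee-id",
    regex=r"EMP-\d{6}",                      # proprietary, regex-based pattern
    keywords=["employee", "staff id"],       # must appear near the regex match
    maximumMatchDistance=50,                 # max characters between keyword and match
    ignoreWords=["EMP-000000"],              # matches containing these are ignored
    description="Matches internal employee IDs",
)
print(resp["customDataIdentifierId"])
```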

Macie Findings

  • Macie will produce 2 types of findings:
    • Policy Findings: are generated when the policies or settings are changed in a way that reduces the security of the bucket after Macie is enabled
    • Sensitive Data Findings: generated when sensitive data is identified based on identifiers

EBS and Instance Store

EBS Volume Types

  • General Purpose SSD (GP2/GP3):
    • GP2 is the default storage type for EC2, GP3 is the newer version
    • A GP2 volume can be as small as 1GB or as large as 16TB
    • IO Credit:
      • An IO is 16 KB of data
      • 1 IOPS is 1 IO in 1 second
      • 1 IO credit = 1 IOPS
    • If we have no credits for the volume, we can not perform any IO
    • The IO credit bucket holds a maximum of 5.4 million credits; it refills at a rate based on the baseline performance of the volume
    • The baseline performance for GP2 is based on the volume size, we get 3 IO credits per second, per GB of volume size
    • By default GP2 can burst up to 3000 IOPS
    • If we consume more credits than the bucket is refilling, then we are depleting the bucket
    • We have to ensure the buckets are replenishing and not depleting down to 0, otherwise the storage will be unusable
    • Volumes larger than 1 TB have a baseline that exceeds the burst rate of 3000 IOPS; they always achieve their baseline performance as standard and don't use the credit system
    • The maximum is 16,000 IO credits per second; any volume larger than 5.33 TB achieves this maximum rate constantly (see the worked sketch after this list)
    • GP2 can be used for boot volumes
    • GP3 is similar to GP2, but it removes the credit system for a simpler way of working:
      • Every GP3 volume starts at a standard 3000 IOPS and 125 MiB/s regardless of volume size
    • Base price for GP3 is 20% cheaper than GP2
    • For more performance we can pay extra cost for up to 16000 IOPS or 1000 MiB/s throughput
  • Provisioned IOPS SSD (IO1/2):
    • There are 3 types of provisioned IOPS storage options: IO1 and its successor IO2 and IO2 BlockExpress (currently in preview)
    • For this storage category the IOPS value can be configured independently of the storage size
    • Provisioned IOPS storages are recommended for usage where consistent low latency and high throughput is required
    • Max IOPS per volume is 64,000 with up to 1,000 MB/s throughput, while with Block Express we can achieve 256,000 IOPS per volume and 4,000 MB/s throughput
    • Volume size ranges from 4 GB up to 16 TiB for IO1/IO2 and up to 64 TiB for Block Express
    • We can allocate IOPS performance values independently of the size of the volume, there is a maximum IOPS value per size:
      • IO1 50 IOPS / GB MAX
      • IO2 500 IOPS / GB MAX
      • Block Express 1000 IOPS / GB MAX
    • Per instance performance: there is a maximum combined performance between the EBS service and an EC2 instance. Usually more than one volume is needed to saturate it. Max values:
      • IO1 260,000 IOPS and 7,500 MB/s (4 volumes to saturate)
      • IO2 160,000 IOPS and 4,750 MB/s
      • Block Express 260,000 IOPS and 7,500 MB/s
    • Use cases: smaller volumes and super high performance
  • HDD based volume types:
    • There are 2 types of HDD based storages: ST1 Throughput Optimized, SC1 Cold HDD
    • ST1:
      • Cheaper than SSD based volumes, ideal for larger volumes of data
      • Recommended for sequential data, applications when throughput is more important than IOPS
      • Volume size can be between 125 GB and 16 TB
      • Offers maximum 500 IOPS, data is measured in blocks of 1 MB => max throughput of 500 MB/s
      • Works similar as GP2 with a credit system
      • Offer a base performance of 40 MB/s per TB of volume size with bursting to 250 MB/s per TB
      • Designed for frequently accessed sequential data at lower cost
    • SC1:
      • SC1 is cheaper than ST1, has significant trade-offs
      • Geared towards maximum economy when we want to store a lot of data without caring about performance
      • Offers a maximum of 250 IOPS, 250 MB/S throughput
      • Offer a base performance of 12 MB/s per TB of volume size with bursting to 80 MB/s per TB
      • Volume size can be between 125 GB and 16 TB
      • It is the lowest-cost EBS storage available
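
A small worked example of the GP2 credit arithmetic from the bullets above; the 500 GiB volume size is an arbitrary assumption:

```python
# GP2 credit-bucket arithmetic: 3 IO credits per second per GiB, 5.4 million credit bucket
volume_gib = 500
baseline_iops = max(100, 3 * volume_gib)   # GP2 has a 100 IOPS floor
burst_iops = 3000                          # burst ceiling for volumes below ~1 TiB
bucket_credits = 5_400_000                 # maximum IO credit balance

# Bursting above baseline drains the bucket at (burst - baseline) credits per second
drain_per_second = burst_iops - baseline_iops
burst_minutes = bucket_credits / drain_per_second / 60

print(f"{volume_gib} GiB volume: baseline {baseline_iops} IOPS, "
      f"full burst sustainable for ~{burst_minutes:.0f} minutes")
```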

Instance Store Volumes

  • Provides block storage devices, raw volumes which can be mounted to a system
  • They are similar to EBS, but they are local drives instead of being presented over the network
  • These volumes are physically connected to the EC2 host, instances on the host can access these volumes
  • Provides the highest storage performance in AWS
  • Instance stores are included in the price of the EC2 instances they come with
  • Instance stores have to be attached at launch time, they can not be added afterwards
  • If an EC2 instance moves between hosts the instance store volume loses all its data
  • Instances can move between hosts for many reasons: instance are stopped and restarted, maintenance reasons, hardware failure, etc.
  • Instance store volumes are ephemeral volumes!
  • One of the primary benefit of instance stores is performance, ex: D3 instance provides 4.6 GB/s throughput, I3 volumes provide 16 GB/s of sequential throughput with NVMe SSD
  • Instance store considerations:
    • Instance store can be added only at launch
    • Data on an instance store is lost if the instance is moved, resized or there is a hardware failure
    • Instance stores provide high performance
    • For instance store volumes we pay for it with the EC2 instance
    • Instance store volumes are temporary!

Choosing between Instance Store and EBS

  • For persistent storage we should default to EBS
  • For resilient storage we should avoid instance store and default to EBS
  • If the storage should be isolated from EC2 instance lifecycle we should use EBS
  • Resilience with in-built replication - we can use both, it depends on the situation
  • For high performance needs - we can also use both, it depends on the situation
  • For super high performance we should use instance store
  • If cost is a primary concern we can use instance store if it comes with the EC2 instance
  • Cost consideration: cheaper volumes: ST1 or SC1
  • Throughput or streaming: ST1
  • Boot volumes: HDD based volumes are not supported (no ST1 or SC1)
  • GP2/3 - max performance up to 16,000 IOPS
  • IO1/2 - up to 64,000 IOPS (Block Express: 256,000)
  • RAID0 + EBS: up to 260,000 IOPS (maximum possible on an EC2 instance)
  • For more than 260,000 IOPS - use instance store
