It worked when I set the sublist type to inline editor; before, it was a LIST type sublist.
su is not accepted after adb shell in the Windows prompt.
Any other suggestion? If I have some update, I will write here.
Thank you
How can I enable cross-account KMS access so that Athena in Account B can read from S3 in Account A, where the KMS key is managed?
You need to add a statement to your key policy in account A to allow your IAM principal in account B to decrypt using the key.
{
    "Sid": "role-xxxx decrypt",
    "Effect": "Allow",
    "Principal": {
        "AWS": "arn:aws:iam::<account-b>:role/role-xxxx"
    },
    "Action": "kms:Decrypt",
    "Resource": "*"
}
Then you also need to add the decrypt permission to the identity policy of the principal accessing the bucket:
{
    "Sid": "decrypt",
    "Effect": "Allow",
    "Action": "kms:Decrypt",
    "Resource": "arn:aws:kms:<region>:<account-a>:key/<key-id>"
}
You can confirm the key used for bucket level encryption with aws s3api get-bucket-encryption --bucket <bucket-name> or for a specific object with aws s3api head-object --bucket <bucket-name> --key <key>.
Would updating the KMS key policy in Sandbox to allow decryption from the IAM role in QA resolve this? Any other settings I should check?
You also need to add it to the identity policy, but yes: for a principal to read an S3 object encrypted with a KMS key, they need read access to that object and decrypt permission on the key. So if you add these permissions to the correct principal, for the correct key, all should work. The only other thing to check that I can think of: if the key is in another region, you'll need a multi-region key with a replica in your region.
Just change the JDK to JBR 17 and rebuild.
What's meant by the authorization identifier? (I've replaced "me" with the actual username of the account I am using to connect to the database...
Server version: 8.0.37 MySQL Community Server - GPL
mysql> GRANT PROCESS TO `me`@`%`;
ERROR 1064 (42000): Illegal authorization identifier near 'PROCESS TO `me`@`%`' at line 1
Setting "Trusted_Connection = false" in my connection string fixed the same issue for me as well
In my case I forgot that the JS modules were initialized inside the base template, while the page-specific JS imports were at the bottom of the HTML head.
The importmap should come before any JavaScript module imports. Rearranging the head imports solved the issue in my case.
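For illustration, a minimal head ordering that follows this rule (the paths are placeholders; the key point is that the import map appears before any module script that relies on it):

```html
<head>
  <script type="importmap">
    { "imports": { "app": "/static/js/app.js" } }
  </script>
  <!-- page-specific module imports come after the import map -->
  <script type="module" src="/static/js/page.js"></script>
</head>
```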
I have found a solution to my original question.
How 'clean' it is I don't know but it works
Option 1 - Offset each // Lines to Plot
Original
// Lines to Plot
targetTime = timestamp(2025,03,14,06,54,00)
drawVerticalLine(targetTime)
Solution
targetTime = timestamp(2025,03,14,06,54,00)
offsetTime = targetTime - (time - time[1])
drawVerticalLine(offsetTime)
Option 2 - Offset within // Function drawVerticalLine
// Function drawVerticalLine
drawVerticalLine(targetTime) =>
    line.new(
        x1=targetTime - (time - time[1]),
        y1=low,
        x2=targetTime - (time - time[1]),
        y2=high,
        xloc=xloc.bar_time,
        extend=extend.both,
        color=color.new(#f9961e, 10),
        style=line.style_solid,
        width=1)
Logic
time - time[1] calculates the duration of 1 bar
time - time[2] calculates the duration of 2 bars
- (time - time[1]) subtracts 1 bar
+ (time - time[1]) adds 1 bar
+ (time - time[2]) adds 2 bars
To combine all the solutions and comments:
=(A2/(1000*60*60*24)+25569)
Format the cell as desired, e.g.:
yyyy-mm-dd hh:mm:ss
DD.MM.YYYY HH:MM:SS (German Excel version)
MM/DD/YYYY HH:MM:SS
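As a worked check with a hypothetical value: if A2 contains 1700000000000 (a Unix timestamp in milliseconds), then 1700000000000/(1000*60*60*24) ≈ 19675.9259 days, and adding 25569 gives the Excel serial 45244.9259, which the formats above display as 2023-11-14 22:13:20 (UTC).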
So, I had the same issue here. I found the sandbox="" in the code editor and just deleted it, and it seemed to work. Thanks, guys. So note to self, and everyone: the HTML editor will put in a sandbox="" attribute.
ok, that works great, in 100% of all cases so far
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as tcp:
    try:
        tcp.settimeout(self.timeout)
        tcp.connect((self.ip, self.port))
        tcp.settimeout(None)
        i = 0
        data = []
        # Retry until the expected 7-byte reply arrives, but at most 10 times
        while len(data) != 7 and i < 10:
            tcp.sendall(b"\x61\x03\x00\x20\x00\x01\x8c\x60")
            time.sleep(0.05)
            data = tcp.recv(self.bytes)
            i += 1
        tcp.close()
    except (socket.timeout, OSError):
        pass  # handle/log the error as appropriate (the handler was not shown in the original excerpt)
The 50 ms waiting time solved the problem. The 10-iteration while loop is only there as an extra safeguard.
To select only matching rows from two tables, use an INNER JOIN:
SELECT * FROM table_a AS t1 INNER JOIN table_b AS t2 ON t1.column1 = t2.column1
Hi Ahtisham, I encountered the same issue two weeks ago. Have you found any solution to it?
Deploy SSIS Packages Using Active Directory - Integrated (ADINT) in a GitHub Actions file? Error:
Failed to connect to the SQL Server 'XXXXXXXXXXXXX': Failed to connect to server XXXXXXXXXXXXX. Deploy failed: Failed to connect to server XXXXXXXXXXXXX. Error: Process completed with exit code 1.
The error suggests that the SQL Server connection is failing when using OIDC. However, I have successfully connected to the server using OIDC.
Follow the steps below, which I have tried myself:
Step:1 To set up OIDC for authentication with SQL Server using Microsoft Entra ID, start by registering an application in the Microsoft Entra portal. Navigate to App registrations, then click New registration, and provide a name for the app. After registration, note down the Application (client) ID and Directory (tenant) ID.
Step:2
In the Microsoft Entra ID App Registration, navigate to Certificates & Secrets > Federated Credentials, and click + Add Federated Credential. Configure the Federated Credential details by setting the Issuer to https://token.actions.githubusercontent.com, the Organization to your GitHub organization name (e.g., myorg), and the Repository to your GitHub repository name (e.g., ssis-deploy). Set the Entity type to Environment, and the GitHub Environment Name to your specific environment (e.g., production). For the Subject Identifier, use repo:myorg/ssis-deploy:environment:production, replacing it with your specific repository and environment details, then click Add.
Step:3 To grant the GitHubDeploySSIS App Registration access to Azure SQL (SSISDB), navigate to your Azure SQL Server, go to Microsoft Entra ID admin, click Set admin, select GitHubDeploySSIS, then click Select and finally click Save.
Step:4
To set up your GitHub repository, first create a new repository named ssis-deploy (or your preferred name) and make it private. Add a README file for documentation. Next, go to the Settings of your GitHub repository, navigate to Secrets and Variables > Actions > New repository secret, and add the following secrets: AZURE_CLIENT_ID (your Azure client ID), AZURE_SUBSCRIPTION_ID (your Azure subscription ID from the Azure portal), and AZURE_TENANT_ID (your Azure tenant ID).
Step:5
To set up a GitHub Actions workflow for testing the connection, create a new file under .github/workflows in your GitHub repository (e.g., azure-connection-test.yml) with the following content:
name: Azure Login Test
on:
  workflow_dispatch:
permissions:
  id-token: write
  contents: read # required for actions/checkout
jobs:
  login-test:
    runs-on: windows-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Azure Login via OIDC
        uses: azure/login@v1
        with:
          client-id: ${{ secrets.AZURE_CLIENT_ID }}
          tenant-id: ${{ secrets.AZURE_TENANT_ID }}
          subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}
      - name: Run Azure CLI command
        run: az account show
This workflow will trigger manually using workflow_dispatch, log into Azure using OIDC, and run the az account show command to verify the connection.
Step:6
To trigger the workflow in GitHub Actions, go to your GitHub repository, click on the Actions tab, find the workflow Azure Login Test, then click Run workflow and click the Run workflow button.
Step:7 After the workflow runs, go to the Actions tab in your GitHub repository, find the workflow run, and click on it to view the details.
I am facing this issue; is there any update on this?
The GitHub issue for that repository doesn't exist anymore and there's no snapshot in the Wayback Machine; where can I find the answer from that issue?
The issue was resolved by updating the versions of the libraries to:
ansible==4.10.0
ansible-core==2.11.12
ansible-runner==2.4.0
These versions are not the latest because I need to maintain compatibility with CentOS 7 systems running Python 2.7.
With this configuration, the issue no longer occurs.
In place of "," use "%2C"; this fixes the URL breaking when the link is opened on an Android mobile.
As @RbMm noticed, I call [esp], i.e. I tell the processor to execute instructions at the address stored at the top of the stack, but what is at the top of the stack is not the address of the beginning of the code, it is the code itself. That was the problem; you just have to do call esp and Windows doesn't complain.
If you select StretchImage it will stretch to the size of the image, which could be larger than your form! I found that SizeMode Zoom works best to fit the image to the size of your PictureBox control.
You already made sure your model is consistent, as per your comment. The next thing to check is that your entities implement hashCode() and equals() correctly. In particular, per the Timefold documentation:
https://docs.timefold.ai/timefold-solver/latest/using-timefold-solver/modeling-planning-problems
Planning entity hashCode() implementations must remain constant. Therefore entity hashCode() must not depend on any planning variables. Pay special attention when using data structures with auto-generated hashCode() as entities, such as Kotlin data classes or Lombok’s @EqualsAndHashCode.
I struggled a lot with that until I read that statement in the documentation. I must say: not just hashCode(). Hope it helps.
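For illustration, a minimal sketch of an entity that follows that rule (the class and field names are hypothetical; the point is that equals()/hashCode() rely only on a stable id and never on the planning variable):

```java
public class ShiftAssignment {

    private final long id;      // stable problem fact, set once
    private String employee;    // planning variable, reassigned during solving (simplified to a String here)

    public ShiftAssignment(long id) {
        this.id = id;
    }

    public void setEmployee(String employee) {
        this.employee = employee;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof ShiftAssignment)) return false;
        return id == ((ShiftAssignment) o).id;   // never touches the planning variable
    }

    @Override
    public int hashCode() {
        return Long.hashCode(id);                // constant for the lifetime of the entity
    }
}
```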
@Youssef CH so how do I fix the problem? I'm facing the same issue: deploying with a GitHub Action using the default script and no Dockerfile in the source code. Please help me.
I'm having the same problem. I have a server with Azure DevOps 2022.0.2 installed. The cybersecurity team sent an nmap scan that shows a weak cipher "ssh_rsa", but no matter what I change in the ssh file, the weak cipher still appears in the scan. I changed the ssh config file as recommended by Microsoft.
I don't write React, but based on my knowledge of Vue, I think you should do this with state instead of modifying the DOM directly.
However, regarding the TS code provided above by @hritik-sharma,
let's update it a bit:
// You can indicate the type like this
const list = document.querySelectorAll<HTMLElement>('.m-list')

// Avoid using "any", especially when you actually know the type
function activeLink(item: HTMLElement) {
  // Go through the list and remove the class
  list.forEach((listItem) => listItem.classList.remove('active'))
  // Add the active class to the clicked item
  item.classList.add('active')
}

// Apply the function to the click event
// You actually do not need the parameter "(e: MouseEvent)" inside "addEventListener"
list.forEach((listItem) => listItem.addEventListener('click', () => activeLink(listItem)))
// Each text is measured first and gets the space needed for its content,
// then the spacers are measured and divide up the remaining space as close to the
// desired 1:3 or 1:1:1 ratio as possible.
Row(modifier = Modifier.fillMaxWidth()) {
    Spacer(modifier = Modifier.weight(1f))
    Text(text = text1)
    Spacer(modifier = Modifier.weight(1f))
    Spacer(modifier = Modifier.weight(3f.takeIf { text3 == null } ?: 1f))
    Text(text = text2)
    Spacer(modifier = Modifier.weight(3f.takeIf { text3 == null } ?: 1f))
    text3?.let {
        Spacer(modifier = Modifier.weight(1f))
        Text(text = text3)
        Spacer(modifier = Modifier.weight(1f))
    }
}
The trouble was in Apache's rewrite engine; adding this to the Dockerfile helped me:
RUN a2enmod rewrite
Could you please tell me how you fixed this issue? I'm facing the same one now.
I must have made an error in my variables (?locationOfDiscovery / ?country); the following version of the code worked fine:
q_list <- c(df3$QID_items)
qid_list <- c(paste0("wd:",q_list, collapse = " "))
query_sparql <- paste0("SELECT
?item ?locationOfDiscovery
WHERE {
VALUES ?item {", qid_list,"}
OPTIONAL { ?item wdt:P189 ?locationOfDiscovery. }
SERVICE wikibase:label { bd:serviceParam wikibase:language 'en'. }
}")
Finally, I have re-done my code modifications manually.
Fortunately there were only 9 commits… :-)
Thanks to all and to each one of you for your help.
* You intended to run a global tool, but a dotnet-prefixed executable with this name could not be found on the PATH.
PM> dotnet tool install --global dotnet-ef
dotnet : Unhandled exception: System.Net.Http.HttpRequestException: Response status code does not indicate success: 401 (Unauthorized).
At line:1 char:1
+ dotnet tool install --global dotnet-ef
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : NotSpecified: (Unhandled excep...(Unauthorized).:String) [], RemoteException
+ FullyQualifiedErrorId : NativeCommandError
at System.Net.Http.HttpResponseMessage.EnsureSuccessStatusCode()
at NuGet.Protocol.HttpSource.<>c__DisplayClass15_0`1.<<GetAsync>b__0>d.MoveNext()
--- End of stack trace from previous location ---
at NuGet.Common.ConcurrencyUtilities.ExecuteWithFileLockedAsync[T](String filePath, Func`2 action, CancellationToken token)
at NuGet.Common.ConcurrencyUtilities.ExecuteWithFileLockedAsync[T](String filePath, Func`2 action, CancellationToken token)
at NuGet.Protocol.HttpSource.GetAsync[T](HttpSourceCachedRequest request, Func`2 processAsync, ILogger log, CancellationToken token)
at NuGet.Protocol.PackageMetadataResourceV3.LoadRegistrationIndexAsync(HttpSource httpSource, Uri registrationUri, String packageId, SourceCacheContext cacheContext, Func`2 processAsync, ILogger log,
CancellationToken token)
at NuGet.Protocol.PackageMetadataResourceV3.GetMetadataAsync(String packageId, Boolean includePrerelease, Boolean includeUnlisted, VersionRange range, SourceCacheContext sourceCacheContext, ILogger log,
CancellationToken token)
at NuGet.Protocol.PackageMetadataResourceV3.GetMetadataAsync(String packageId, Boolean includePrerelease, Boolean includeUnlisted, SourceCacheContext sourceCacheContext, ILogger log, CancellationToken token)
at Microsoft.DotNet.Cli.NuGetPackageDownloader.NuGetPackageDownloader.GetPackageMetadataAsync(PackageSource source, String packageIdentifier, Boolean includePrerelease, Boolean includeUnlisted, CancellationToken
cancellationToken)
at Microsoft.DotNet.Cli.NuGetPackageDownloader.NuGetPackageDownloader.GetMatchingVersionInternalAsync(String packageIdentifier, IEnumerable`1 packageSources, VersionRange versionRange, CancellationToken
cancellationToken)
at Microsoft.DotNet.Cli.NuGetPackageDownloader.NuGetPackageDownloader.GetBestPackageVersionAsync(PackageId packageId, VersionRange versionRange, PackageSourceLocation packageSourceLocation)
at Microsoft.DotNet.Cli.ToolPackage.ToolPackageDownloader.<>c__DisplayClass8_0.<InstallPackage>b__0()
at Microsoft.DotNet.Cli.TransactionalAction.Run[T](Func`1 action, Action commit, Action rollback)
at Microsoft.DotNet.Tools.Tool.Install.ToolInstallGlobalOrToolPathCommand.<>c__DisplayClass20_0.<Execute>b__1()
at Microsoft.DotNet.Tools.Tool.Install.ToolInstallGlobalOrToolPathCommand.RunWithHandlingInstallError(Action installAction)
at Microsoft.DotNet.Tools.Tool.Install.ToolInstallGlobalOrToolPathCommand.Execute()
at System.CommandLine.Invocation.InvocationPipeline.Invoke(ParseResult parseResult)
at Microsoft.DotNet.Cli.Program.ProcessArgs(String[] args, TimeSpan startupTime, ITelemetry telemetryClient)
PM>
How can I solve this issue?
Did you get any solution to this? I am trying to find one myself :)
SELECT pg_size_pretty(pg_database_size(current_database()));
As far as I know, IAR C-SPY uses the semihosting debug interface for transferring data to the PC. So there is a chance you might be able to read the data over the semihosting debug interface with another client as well. There is a little more info at https://pyocd.io/docs/semihosting.html
There are also other debug interfaces, like SWO or Segger's RTT, which you could use for transferring the data.
Some tools also allow reading any part of RAM or ROM over JTAG; at least Segger's J-Link with its J-Mem utility allows it.
It's generally better to paste a code snippet large enough for folks to gain the relevant context of your problem. Having said that, if your goal is to set buildConfig to true, you need to use =, and I suspect your problem is there. What the compiler is saying with that error is that it treats buildConfig and true as separate statements on a single line and needs a ; between them. In reality you should just do buildConfig = true and see if that helps.
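For reference, a minimal sketch of where that assignment usually lives in an Android module's build.gradle.kts (assuming the standard Android Gradle plugin Kotlin DSL):

```kotlin
android {
    buildFeatures {
        // With the Kotlin DSL this must be an assignment; "buildConfig true" is Groovy syntax
        buildConfig = true
    }
}
```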
Since iOS 11, users can set notifications as "Persistent" in Settings > Notifications > [Your App]. You can’t override this programmatically.
Currently, the Microsoft Graph API does not provide a direct way to create an event without sending invitations to the attendees. This is by design: the endpoint sends invitations when an event is created so that attendees are notified and can manage their calendars effectively, which keeps the experience consistent.
For the image in question (the one with the red left arrow), I used the following approach:
<RelativeLayout
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:layout_below="@id/check"
    android:layout_alignParentStart="true"
    android:background="@drawable/circle_white_black_border"
    android:elevation="4dp">

    <ImageView
        android:id="@+id/prev_question"
        android:layout_width="48dp"
        android:layout_height="48dp"
        android:layout_centerInParent="true"
        android:contentDescription="@string/previous_question_text"
        android:src="@drawable/ic_baseline_navigate_before"
        android:tint="@color/app_title" />

</RelativeLayout>
Since elevation doesn't work well on an ImageView, I added the elevation to the RelativeLayout, which worked well.
How can I optimize this further for better performance and lower memory allocation?
There are no obvious algorithmic improvements I can spot, so improvements will likely come down to micro-optimizations and possibly parallelism.
A good first step would be to run the code in the built-in Visual Studio profiler to see if you can spot anything unexpected. A second step would be to inspect the assembly code; even if you know nothing about assembly, just checking the number of instructions should give you some idea of the runtime.
If you really want to micro-optimize, you might want to use a more detailed profiler like VTune.
Parallelism is easiest to exploit if you have different messages. But if you have a single large message composed of a very large number of key-value pairs you could probably just create evenly distributed indices, find the separator following each of these indices, and use that to split your string between threads.
Is there a faster or more efficient algorithm to extract key-value pairs from a FIX message?
Your algorithm looks linear to me, and that is the best complexity that can be had.
Are there alternative data structures or parsing techniques that would improve lookup speed while keeping allocations low?
That depends on your requirements: if you need a dictionary containing strings, you will have allocations. If you have an upper bound on key sizes, you might be able to use a plain array, possibly containing Memory<char> to avoid string allocations.
Would SIMD, Span-based parsing, or other low-level optimizations be beneficial in this case?
The main use case for SIMD would be IndexOf. I'm not sure whether this has been SIMD-optimized in your version of .NET; this should be fairly obvious if you inspect the assembly.
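To make the span-based idea concrete, here is a minimal sketch (my own illustration, not your code) that walks the "tag=value" fields separated by the SOH character and records each value as a Range into the original text, so no per-value strings are allocated during the scan:

```csharp
using System;
using System.Collections.Generic;

static class FixSketch
{
    // Assumes standard FIX framing: integer tags, '=' separator, '\x01' (SOH) field delimiter.
    public static List<(int Tag, Range Value)> Parse(string message)
    {
        ReadOnlySpan<char> msg = message;
        var fields = new List<(int, Range)>();
        int pos = 0;
        while (pos < msg.Length)
        {
            int rel = msg[pos..].IndexOf('\x01');            // next field delimiter, -1 for the last field
            int fieldEnd = rel < 0 ? msg.Length : pos + rel;
            ReadOnlySpan<char> field = msg[pos..fieldEnd];
            int eq = field.IndexOf('=');
            if (eq > 0 && int.TryParse(field[..eq], out int tag))
                fields.Add((tag, (pos + eq + 1)..fieldEnd)); // value stays a slice of the original string
            pos = fieldEnd + 1;
        }
        return fields;
    }
}
```

Callers can then materialize only the handful of values they actually need with message[range], which keeps allocations proportional to the fields you use rather than the fields in the message.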
Are there any other approaches or data formats that could improve parsing and performance for high-throughput FIX messages?
You might be able to get better performance from C/C++, but in my experience that only helps if you have a very good understanding of both the language and the underlying hardware.
The readme in this repo has some useful links: https://github.com/fpoli/gicisky-tag
Hello @Alan, are you available? I have some issues building Twilio conference functionality; is this something you could help me figure out?
Added the following to application.properties file & it worked: spring.cloud.gcp.bigquery.datasetName=<your_dataset_name>
Found the issue: I had inserted the y values into the final Lorentzian function and plotted those in the Excel sheet instead of inserting the x values. The plots look good now.
Is there a solution for the on screen keyboard on Windows Tablets?
Good solution!
I found that this causes the problem too: ⬇️
@Inject(REQUEST) private request: Request
pip install --upgrade pip setuptools wheel
This command worked for me before installing aiohttp
Notice how it is just a warning, so your app won't break; it's just there to help you remove unused code.
In your app.component.ts, look for the "imports" field; you will find ProductTrackingComponent inside an array there. You can simply remove it and the warning should go away.
The warning just means you have the component inside "imports" but you haven't used it in the HTML/template yet.
I'm seeing similar issues. I have buttons in a gallery to navigate to the screen which weren't working: click the buttons and nothing happens.
It coincided with Copilot filtering when clicking the button, so I assumed it was related.
I then tried using a Param to load the screen with an id passed, and it was freezing on the loading screen.
Now, reading this, I assume there is a bug somewhere in the screen's OnVisible.
I found it! It turns out I wasn’t using the Docker ports (EXPOSE 8080, EXPOSE 8081) in the HttpRequest URI.
Kindly create two measures, for Total Sales and Profit Margin; you can then drag them into the Values well and drag your multiple dimensions into the Rows of the table visual to filter the data as per your requirement.
Thanks,
Harish
I don't think Windows will allow you to execute code from arbitrary memory regions because of DEP (Data Execution Prevention). Try using VirtualAlloc (with an executable page protection) or VirtualProtect, or disable DEP (I don't recommend doing that on your PC); see https://learn.microsoft.com/en-us/windows/win32/memory/data-execution-prevention#programming-considerations
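For context, a minimal sketch in C of the VirtualAlloc approach (assuming Windows/Win32; the code bytes themselves are whatever you intend to run and are not shown here): allocate an executable page, copy the code in, and call it; DEP only blocks execution from pages that lack the execute protection.

```c
#include <windows.h>
#include <string.h>

typedef int (*entry_fn)(void);

/* Copies 'len' bytes of machine code into an executable page and runs it. */
int run_code(const unsigned char *code, size_t len)
{
    void *mem = VirtualAlloc(NULL, len, MEM_COMMIT | MEM_RESERVE, PAGE_EXECUTE_READWRITE);
    if (mem == NULL)
        return -1;
    memcpy(mem, code, len);
    int result = ((entry_fn)mem)();   /* allowed: the page has EXECUTE protection */
    VirtualFree(mem, 0, MEM_RELEASE);
    return result;
}
```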
I have the same problem. Is there any help or a solution? Did you ever solve it?
Where in the ingress-nginx configuration should
chunked_transfer_encoding
be added?
I had the same issue too; here is the solution that worked for me:
1. Make sure that you have installed the 'Extension Pack for Java'.
2. Click 'Java: Ready' on the bottom bar.
3. Click 'Clean Workspace Cache'.
This triggers a reindex of the project, I guess; after that, Ctrl+Click should work.
Java: Clean Workspace Cache
String s = "I Love India";
StringBuilder sb = new StringBuilder(s); String s1 = sb.reverse().toString();
# Save the file
pptx_path = "/mnt/data/Maliyye_Riyaziyyati_Təqdimat.pptx"
prs.save(pptx_path)
It's due to a Flutter upgrade.
Use the flutter upgrade command in the command prompt.
That should resolve it, or, if the issue still persists, reinstall Flutter.
You need to add the serializer tag to specify the serializer that handles how data is serialized and deserialized to and from the database, such as: serializer:json/gob/unixtime. Since your field is of type JSON, your User field should be defined as:
User *UserInfo `json:"user" gorm:"serializer:json"`
This issue is caused by a connection problem between the client and the ICE server.
In my case, I could not access "iceServers: [stun:stun.l.google.com:19302]" before I set a proxy.
After the proxy was on, it worked and the 701 error was gone.
What worked for me was navigating to the Overview section of my Logic App in Azure, clicking Restart, and then hitting Refresh. After that, the error was gone and I could see all of my workflows again.
image of where to restart and refresh logic app
mavenrc can override the Java version supplied in zshrc, so you can check and fix that as well using vim ~/.mavenrc and vim ~/.zshrc.
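For example, you can check whether ~/.mavenrc pins a different JDK than your shell profile; the path below is just a placeholder:

```sh
# ~/.mavenrc is sourced by the mvn script, so a JAVA_HOME set here
# wins over the one exported in ~/.zshrc
JAVA_HOME=/path/to/the/jdk/you/actually/want
```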
@ConditionalOnExpression can be used in this way to achieve condition X or condition Y but not both. I tested this code with the same application.yaml provided.
@Bean
@ConditionalOnExpression("#{('${app.config.inside.prop}' == 'a' && '${app.config.outside.prop}' != 'a') || ('${app.config.inside.prop}' != 'a' && '${app.config.outside.prop}' == 'a')}")
public CustomBean customInsideOrOutsideNotBoth() {
    System.out.println("app.config.inside _or_ app.config.outside _not_ both");
    return new CustomBean();
}
As of today, unfortunately, AWS Bedrock Agent does not support response streaming.
You can stream the response when invoking any Bedrock model, but not an agent.
Ref: https://repost.aws/questions/QUsqK6QQwoQ-a-crTHpkjfEA/aws-bedrock-agents-support-streaming-responses
Thank you so much Brett Donald! You have worked out what it was.
I was so excited when it started loading something other than my blank text site. However (as explained in a comment on Brett's answer), I am now hitting an issue where the website is stuck in a continual loading loop.
Since all the information about my situation is in this thread, rather than creating a new thread I thought I would just ask for the solution here.
Maybe the problem will be solved if I continue the conversion to PHP? I'm not sure, as I am still very much a beginner and still learning the basics.
Maybe the problem won't be solved by continuing to convert my HTML template. In that case, what do I do? I looked this problem up on a search engine and found a Reddit post where someone suggested pressing F12 to open the console and, under the Network tab, making sure the "Persist" setting is enabled. However, I can't find that exact setting. I found something called persistence and checked the option under it, but that still didn't fix the issue.
I think the answer is no, but when you ask, and the people answering are well versed in that topic, they will never admit that the feature is not present.
I was able to troubleshoot it. It was caused by caching issues and the Node version I was using. After downgrading to Node.js v18.20.8, I deleted my .vite, .nuxt, and node_modules folders, as well as my package-lock.json, just to be safe. Then I ran npm install to reinstall all packages and dependencies, and the issue was resolved.
This error occurs because Dremio caches query results and logs metadata by default when executing queries via the API. If too many queries are fired in succession:
Cache and logs fill up storage. Dremio stores query results in its distributed cache (accelerations) and maintains internal indexes (IndexWriter) for metadata. If storage (disk/memory) gets full, the IndexWriter fails with this error, and metadata (databases/views) may disappear. The APIs then stop responding.
Clear the cache manually (temporary fix): restart Dremio to force a cache cleanup.
Adjust the cache settings (permanent fix): reduce the cache expiration time.
Increase storage monitoring.
<script type="text/javascript"> (function(d) { var f = d.getElementsByTagName('SCRIPT')[0], p = d.createElement('SCRIPT'); p.type = 'text/javascript'; p.async = true; p.src = '//assets.pinterest.com/js/pinit.js'; f.parentNode.insertBefore(p, f); })(document); </script>
Sharing this to help newbies get started in the right direction.
Install dependencies
npm install --save \
@fullcalendar/rrule \
@fullcalendar/core \
@fullcalendar/daygrid
Example implementation
import { Calendar } from '@fullcalendar/core'
import rrulePlugin from '@fullcalendar/rrule'
import dayGridPlugin from '@fullcalendar/daygrid'

let calendarEl = document.getElementById('calendar')
let calendar = new Calendar(calendarEl, {
  plugins: [ rrulePlugin, dayGridPlugin ],
  events: [
    {
      title: "Afternoon for 3 days",
      startTime: "12:00",
      endTime: "16:00",
      rrule: {
        freq: "daily",           // Every day
        count: 3,                // Repeat 3 times
        dtstart: "2025-04-01",   // Start from 2025-04-01
        interval: 1
      }
    },
    {
      title: "Weekly Morning - Mon to Fri",
      startTime: "12:00",
      endTime: "16:00",
      rrule: {
        freq: "weekly",          // Every week
        byweekday: [0,1,2,3,4],  // Mon - Fri
        dtstart: "2025-04-13"    // Start from 2025-04-13
      },
      exdate: ["2025-04-15"]     // Exclude 2025-04-15
    }
  ]
})

calendar.render()
The GetMessageContent endpoint of the Graph API (graphClient.Users[_userId].Messages[eventId].Content.GetAsync()) seems to work with an EventId. You could use it to get the Stream content and then convert that into a string to get the ICS.
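For illustration, a minimal sketch of that call, wrapped in a helper (this just echoes the request builder chain quoted above and assumes the Microsoft Graph .NET SDK v5 style, so treat the exact shape as an assumption):

```csharp
using System.IO;
using System.Threading.Tasks;
using Microsoft.Graph;

static class MimeFetcher
{
    // Downloads the item's raw MIME content and returns it as text (inspect/convert it to get the ICS part).
    public static async Task<string> GetMimeAsync(GraphServiceClient graphClient, string userId, string itemId)
    {
        var stream = await graphClient.Users[userId].Messages[itemId].Content.GetAsync();
        using var reader = new StreamReader(stream);
        return await reader.ReadToEndAsync();
    }
}
```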
I ran the pivpn -d command and it reported two issues. After fixing them, everything works properly.
I will update this answer later, but I think VSCode Snippets could be used for this purpose
Did you resolve your problem? I'm running into the same one.
use git clone --bare, for example git clone --bare https://github.com/git/git.git
It would be easiest if you could post all of your code, especially since, when it comes to key mappings in game development in Python, you really need to read between the lines to find issues. I took a quick look, but that doesn't mean these are the only issues you might be facing.
However, from looking at your code I can tell you this:
1. The reason there is no animation when you jump while not moving is that your self.action will always end up being 'idle'.
First, we look at the key-mapping function you have, called handle_event(). We can see that the 'A' and 'D' keys probably wouldn't affect you in this case.
Next we look at your jump. In this case, there don't seem to be any issues.
Finally, we look at your last condition. if self.moving == False:, then self.action = 'idle'. But we aren't moving, are we? Because you aren't moving, your self.moving is False. And because it is False, that means the action will end up being your idle animation.
Let's go over how Pygame frames work. In each "frame," of the game, or while loop, it runs the functions it needs to run, and updates to the next frame. Because of this, your game will only update to the next frame after it has finished running your function. That's why you ultimately get 'idle' as your animation state. It's the last thing in the function before the next frame update, and the conditions are right.
2. You see an animation when you jump while moving that repeats indefinitely because you don't properly set your action.
if self.grounded:, then self.action != 'jump'. But in Python, != is a comparison operator, not an assignment operator. It tells you whether something is equal or not, but it doesn't change anything; there is no way to assign "anything except one value" to a variable the way you have tried to do here. You would use != the same way you would use == in an if statement; it basically means "not equal", as in "if not equal". Now I'm going to attempt to "fix" your code. This answer is getting pretty long, so I'll just add comments in your code instead of explaining everything. I don't know why, but Stack Overflow set my code block to CSS highlighting and I don't know how to change it, so the colors will be off. This code is meant to replace your handle_event function.
'''In your player object, create a new variable called self.jump_frame = 0'''
# New method to change the state of the player object. Makes things more organized.
def set_action(self, action=None, direction=None, velx=None, vely=None):  # parameters default to None if not given when set_action() is called
    self.action = action if action != None else self.action
    self.direction = direction if direction != None else self.direction  # This is an if statement crammed into one line; feel free to expand it. It reads "set self.direction to direction unless direction is None, in which case keep it unchanged"
    self.vel.x = velx if velx != None else self.vel.x  # Same concept
    self.vel.y = vely if vely != None else self.vel.y

def handle_event(self, event):
    if event.type == pg.KEYDOWN:
        if event.key == pg.K_d:
            self.set_action(action='run', direction='right', velx=self.speed)  # Call our new method
            self.moving = True
        elif event.key == pg.K_a:
            self.set_action(action='run', direction='left', velx=-self.speed)
            self.moving = True
        if event.key == pg.K_w:
            self.jumping = True
        if self.jumping == True:
            if self.jump_frame < 5:  # Adjust this and this portion of the code depending on how you want your jump to work or to add more realistic jump momentum
                self.set_action(action='jump', vely=10)  # We jump for 5 frames, stop on the 5th frame, and then fall until we have hit the floor
            elif self.jump_frame == 5:
                self.set_action(action='jump', vely=0)
            else:
                self.set_action(action='jump', vely=-10)
            self.jump_frame += 1
            if self.grounded:
                self.set_action(vely=0)
                self.jumping = False
                self.jump_frame = 0
    elif event.type == pg.KEYUP:
        if event.key == pg.K_d:
            self.moving = False
            if self.jumping == False:
                self.set_action(action='idle', direction='right')
        elif event.key == pg.K_a:
            self.moving = False
            if self.jumping == False:
                self.set_action(action='idle', direction='left')
        if self.moving == False and self.jumping == False:  # Make sure that you are DEFINITELY not moving. Jumping is moving too, after all.
            self.set_action(action='idle', velx=0)
As for your last question about downvotes, let's just say that when people who are used to this platform see a question that doesn't follow the standard guidelines (sometimes even if it's from a newcomer), they will downvote it. When downvoted, you lose reputation points, and if a question gets enough downvotes it will be closed automatically (or at least I believe so). You can check out the link Julien posted in the comments for the guidelines on asking questions.
Try using an exclamation mark after the value that may be null. It's called the null-forgiving operator. Plenty of web pages explain it; the keywords are "exclamation mark C# null" in the appropriate search engine.
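A tiny illustration (my own example values, not from the question):

```csharp
using System;

#nullable enable
string? name = Environment.GetEnvironmentVariable("USERNAME"); // declared as possibly null
// Without '!', the compiler warns about a possible null dereference.
// With '!', you assert "this is not null here"; it still throws at runtime if it actually is null.
Console.WriteLine(name!.Length);
```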
I am using the same API call, but every time I get a 202. How did you manage to get the response?
It’s “enqueue”, not “enque” ... check your spelling.
try setting env var PIP_USER_AGENT_USER_DATA
Are you using a() in other imports? For example, in past.py you might be calling one function inside another, which could lead to an infinite loop:
def past():
    print("Inside past()")
    a1()  # This might trigger an infinite loop
If past.py imports mine.py, or mine.py imports past.py, you might be triggering something unwanted (a circular import).
I'm facing the exact same issue, but the percentage of failures in my case is much higher.
I'm using the Python client to upload/download/delete files from Google Drive with service account credentials. It was working fine until about 4 days ago, when I started receiving connection timeout errors.
Have you found any solutions yet?
from graphviz import Digraph
# Create the flowchart for the closed control loop
dot = Digraph(format='png')
# System nodes
dot.node('SP', 'Setpoint', shape='parallelogram', style='filled', fillcolor='lightblue')
dot.node('C', 'Controller', shape='box', style='filled', fillcolor='lightgray')
dot.node('EFC', 'Final Control Element', shape='box', style='filled', fillcolor='lightgray')
dot.node('P', 'Process', shape='ellipse', style='filled', fillcolor='lightgreen')
dot.node('S', 'Sensor/Transducer', shape='parallelogram', style='filled', fillcolor='lightblue')
# Control-flow connections
dot.edge('SP', 'C', label='Error')
dot.edge('C', 'EFC', label='Control Signal')
dot.edge('EFC', 'P', label='Action on the Process')
dot.edge('P', 'S', label='Measurement')
dot.edge('S', 'C', label='Feedback')
# Save and render the image
fluxo_path = "/mnt/data/malha_fechada.png"
dot.render(fluxo_path, format="png", cleanup=True)
fluxo_path
"Admin user" must be checked (enabled); then you can log in to the Azure Container Registry using docker login. I think this is mandatory when using a plain docker login to pull an image.
I have just tested with a managed identity that has the same role as yours, and it works well.
Where is your gorm tag? Please read more in the GORM docs.
For me, I was opening the .xcworkspace file. After I switched to the .xcodeproj file and ran it, the messages started to show again.
That's a feature request on GitHub; see Copy the .dSYM to the publish directory #15384 (still open). You may follow that feature request to get the latest updates.
The workaround now is to manually copy the .dsym file to the publish directory (bin\Release\net8.0-ios\ios-arm64\publish folder) since the .dsym file has already generated. This GitHub issue may also relate, Is it possible to embed DSYM (folder) in my IPA through Devops build?
For more info, please refer to Publish an iOS app using the command line.
### **6. If you want a simpler approach (without APIs)**
Try using the **BeautifulSoup** library to extract images directly from web pages (but this may violate some sites' terms of use):
```python
import requests
from bs4 import BeautifulSoup

url = "https://example.com/products"
response = requests.get(url)
soup = BeautifulSoup(response.text, 'html.parser')

# Extract all product images
product_images = []
for img in soup.find_all('img'):
    product_images.append(img['src'])

print(product_images)
```
---
### **Final note**:
If none of the solutions above work, share with me:
1. **The exact steps you followed**.
2. **The error message you see** (as an image or text).
3. **An example of the code you wrote**.
I'll try to help you pinpoint the problem! 🔍
I have the same problem; it would be great if somebody could help us :)
Mike M successfully answered my question and debugged the current state of my project.
ParseErrors are in XML files, like layouts and the AndroidManifest.xml. If it doesn't have the specific file listed in the error message, and you can't find it yourself easily by looking through each one, you might be able to catch it with code inspection: Main Menu > Code > Inspect Code…
Since I am using MVC .NET Core 9.0.3, I added the following code to my Program.cs file.
After that, there was no issue loading that specific file. I had probably hit the form value count limit of 1024.
services.Configure<FormOptions>(options =>
{
    options.ValueCountLimit = int.MaxValue;
});
Try running
yum install libffi-devel
before calling rvm. You need to install this library before installing Ruby.
Not sure, but this might work: render it conditionally by using
const { isLoaded } = useAuth();
Make use of isLoaded to stop rendering the component until the auth state is loaded, e.g.:
if (!isLoaded) {
    // Prevent rendering until auth state is loaded
    return null;
}
I've made some minor updates to my code, and it seems to be working now. However, I wanted to ask if there's a better way to handle the login configuration for AWSCognitoCredentialsProvider.
Here’s my updated loginCognito function:
private func loginCognito(data: AWSConfigData) {
    let loginsKey = "cognito-idp.\(data.region).amazonaws.com/\(data.userPoolId)"
    let logins = [loginsKey: data.idToken]
    self.awsConfigData = data
    printAWS(addition: "Login with Token:", message: "\(data.idToken)")

    // If defaultServiceConfiguration is already set, update loginMaps
    if let existingCredentialsProvider = AWSServiceManager.default()?.defaultServiceConfiguration?.credentialsProvider as? AWSCognitoCredentialsProvider,
       let existingIdentityProviderManager = existingCredentialsProvider.identityProvider.identityProviderManager as? IdentityProviderManager {
        existingIdentityProviderManager.setLogins(loginMaps: logins)
        return
    }

    let identityProviderManager = IdentityProviderManager(loginMaps: logins)
    let credentialsProvider = AWSCognitoCredentialsProvider(
        regionType: data.regionType,
        identityPoolId: data.identityPoolId,
        identityProviderManager: identityProviderManager
    )

    // Set AWS default configuration
    let configuration = AWSServiceConfiguration(
        region: data.regionType,
        credentialsProvider: credentialsProvider
    )
    AWSServiceManager.default()?.defaultServiceConfiguration = configuration
}
And my IdentityProviderManager:
private class IdentityProviderManager: NSObject, AWSIdentityProviderManager {
    private var loginMaps: [String: String]

    init(loginMaps: [String: String]) {
        self.loginMaps = loginMaps
    }

    func setLogins(loginMaps: [String: String]) {
        self.loginMaps = loginMaps
    }

    func logins() -> AWSTask<NSDictionary> {
        return AWSTask(result: loginMaps as NSDictionary)
    }
}
Is there a better approach to updating logins dynamically in AWSCognitoCredentialsProvider?
Would it be more efficient to manage this differently, or is this the best practice?
Thanks in advance!
The problem turned out to be a shortcut conflict.
After I disabled AutoKey and pressed Ctrl+Alt+0, bash showed (arg: 0).
I searched the Internet; this means that readline mode was triggered.
AutoKey can't work properly because of this shortcut conflict.
After testing, I used .NET 9.0 MAUI to display the image on Android 15 with the following code, and there was no rotation problem.
You can refer to the following code snippet.
try
{
    var photo = await MediaPicker.CapturePhotoAsync();
    if (photo != null)
    {
        var newFile = Path.Combine(FileSystem.CacheDirectory, photo.FileName);
        using (var stream = await photo.OpenReadAsync())
        using (var newStream = File.OpenWrite(newFile))
            await stream.CopyToAsync(newStream);

        profileImage.Source = newFile;
    }
}
catch (Exception ex)
{
    Console.WriteLine($"CapturePhotoAsync THREW: {ex.Message}");
}
Best Regards,
Alec Liu.
I met the same issue; after digging around for hours, nothing.
When I let other people try the link https://xx.xx.xx.xx:8443/, it worked in their Chrome browsers (the latest version). I was using an old version of Chrome (131.0.6778.86-arm64).
Deleting the contents of the .vs folder in the solution fixed it for me. Did not delete the folder itself, just its contents. Cheers!
I asked the Angular Team about this. You can find their response here: https://github.com/angular/angular-cli/issues/29967.
This behavior is expected because the page is being prerendered when running ng build && yarn serve:ssr:dummyapp. To disable prerendering, set "prerender": false in your angular.json.
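For reference, a sketch of where that flag typically sits in angular.json when using the application builder (the project name and surrounding options here are placeholders):

```json
{
  "projects": {
    "dummyapp": {
      "architect": {
        "build": {
          "builder": "@angular-devkit/build-angular:application",
          "options": {
            "prerender": false
          }
        }
      }
    }
  }
}
```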
I was facing the same problem, and I solved it by making the method async and awaiting it inside onBeforeMount.
<script setup>
import { ref, computed, onBeforeMount } from 'vue';

const valueThatIsFilledInBeforeMount = ref('')

const deffer = async () => {
  valueThatIsFilledInBeforeMount.value = 1;
};

onBeforeMount(async () => {
  console.log("onBeforeMount")
  await deffer()
});

const myComputedProperty = computed(() => {
  return valueThatIsFilledInBeforeMount.value + 1;
})
</script>
@SupportedOptions("my.arg")
public class MyProcessor extends AbstractProcessor {
then in maven-compiler-plugin:
<compilerArgs>
    <arg>-Amy.arg=yourvalue</arg>
</compilerArgs>